Amazon Aurora Pricing: Complete Cost Breakdown (2026)

Aurora has 6 billing dimensions. Most teams estimate 2. Complete 2026 breakdown of every cost, including hidden fees, with a free calculator.

March 17th, 2026

Aurora pricing is genuinely confusing - not because AWS hides anything, but because there are six independent billing dimensions and most documentation only explains two of them clearly. Estimate compute and storage, ignore the rest, and you'll get an unpleasant surprise when the bill arrives.

Here's the cost range you came for: a minimal Aurora setup (single db.t3.medium, us-east-1) runs around $50-70/month. A small production cluster - db.r6g.large writer plus one reader, 100 GB storage, moderate I/O - lands in the $400-600/month range. Scale up to a multi-reader setup with heavy OLTP traffic and you're looking at $900+/month before data transfer and backup costs.

DSQL pricing and RDS Extended Support figures in this article reflect pricing as of March 2026.

By the end of this breakdown, you'll be able to estimate every line item on your Aurora bill, identify the configuration decisions that have the largest cost impact, and know exactly which charges to watch for before they appear. Use the Aurora Pricing Calculator to model your specific configuration alongside this guide.

What Does Aurora Actually Cost? (Quick Reference)

Before getting into the mechanics, a few reference scenarios to anchor your expectations.

| Scenario | Setup | Estimated Monthly Cost |
| --- | --- | --- |
| Dev/test | db.t3.medium, 20 GB storage, Aurora Standard | ~$50-70 |
| Small production | db.r6g.large writer + 1 reader, 100 GB, moderate I/O | ~$400-600 |
| Production with replicas | db.r6g.xlarge writer + 2 readers, 500 GB, heavy OLTP | ~$900-1,400+ |
| Aurora DSQL (light workload) | Single region, 1.3M DPUs/month, 15 GB storage | ~$15 |

These are ballpark figures. The actual number depends heavily on your I/O patterns - which is the variable most teams underestimate. All prices listed are for us-east-1 unless otherwise noted. Verify current rates at the official Aurora pricing page before making infrastructure decisions.

Aurora Is Not in the AWS Free Tier

This is worth addressing directly because Pump.co incorrectly states that Aurora qualifies for the RDS Free Tier. It does not.

The RDS Free Tier covers 750 hours/month of db.t3.micro or db.t4g.micro Single-AZ instances running RDS MySQL, MariaDB, PostgreSQL, or SQL Server Express - not Aurora. Aurora has never been part of that offering.

The one exception is Aurora DSQL, which has an ongoing free tier: 100,000 DPUs and 1 GB of storage per month, with no 12-month time limit. That's sufficient for personal projects and development use. More on DSQL later.

To understand why Aurora costs vary so widely, it helps to know which part of the bill each configuration decision affects.

How Aurora Pricing Works: The Architecture That Drives Your Bill

Aurora separates compute from storage at the architecture level. This is not just a product distinction - it is the reason your bill has independent line items for instances, storage, and I/O that scale independently of each other.

Your Aurora cluster has two components: DB instances (the compute heads that execute queries) and a cluster volume (the shared storage layer where all data lives). Instances hold no persistent data themselves. Every instance in the cluster reads from and writes to the same shared volume.

The storage layer automatically replicates data six ways across three Availability Zones (two copies per AZ). Despite six physical copies existing, you pay for only one logical copy per region. This is one of the genuine advantages of Aurora's architecture over self-managed MySQL or PostgreSQL replication.

Compute and Storage Are Billed Separately

This separation has four key billing implications:

  1. You pay for compute (DB instances) separately from storage
  2. You pay for I/O operations on the storage layer (with Aurora Standard), separate from compute
  3. Adding a reader instance adds compute cost but not storage cost
  4. Stopping an instance eliminates compute charges but storage and backup costs continue

Standard vs I/O-Optimized: Your First Major Cost Decision

Aurora offers two cluster storage configurations that fundamentally change how you pay for I/O. Aurora Standard charges per-million I/O requests on top of a lower storage rate. Aurora I/O-Optimized eliminates I/O charges entirely at a higher storage rate. This single decision can swing your total Aurora cost by up to 40%. The detailed math is in the storage and I/O section below.

There are four compute pricing models: Provisioned On-Demand, Provisioned Reserved Instances, Serverless v2 (ACU-based), and Aurora DSQL (DPU-based, a separate product entirely).

Now let's work through each billing dimension, starting with compute.

Aurora Instance Pricing (Compute Costs)

Compute is typically the largest line item for provisioned clusters, but it is also the most predictable. You know exactly how many instances you are running and for how long.

On-Demand Instance Pricing

Aurora On-Demand instances bill per DB instance-hour in 1-second increments, with a minimum charge of 10 minutes after any billable status change (create, start, or modify). There are no long-term commitments.

A few specifics worth knowing:

  • You pay for every running instance simultaneously. A writer plus two readers means three instances billing in parallel.
  • Stopped instances do not charge for instance hours, but storage and backup costs continue.
  • Prices vary by instance class, region, and cluster configuration (Standard vs I/O-Optimized).

For a concrete anchor: a db.r6i.large in us-east-1 costs $0.29/hour on Aurora Standard, or $0.377/hour on Aurora I/O-Optimized. At 730 hours/month (the standard billing month), that is $211.70 for a single r6i.large on Standard - before storage, I/O, or backup costs are counted.
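That arithmetic generalizes to any instance count: hourly rate times 730 billing hours times the number of running instances. A minimal sketch, with the us-east-1 db.r6i.large rates quoted above hardcoded (always verify current rates on the pricing page):

```typescript
// Monthly on-demand compute cost: hourly rate x 730 billing hours x instances.
// Rates are the us-east-1 db.r6i.large figures quoted above.
const HOURS_PER_MONTH = 730;

function monthlyComputeCost(hourlyRate: number, instanceCount: number): number {
  return hourlyRate * HOURS_PER_MONTH * instanceCount;
}

// Single db.r6i.large on Aurora Standard
const standardSingle = monthlyComputeCost(0.29, 1);  // ~$211.70
// Writer + two readers on I/O-Optimized
const ioOptTrio = monthlyComputeCost(0.377, 3);      // ~$825.63
```

Note how quickly the reader multiplier compounds: three I/O-Optimized instances cost nearly four times one Standard instance before storage is counted.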

For specific instance prices, the official Aurora pricing page has the full matrix. Prices are region-specific and not listed comprehensively here because they change.

Instance Families and the Graviton Advantage

Aurora supports three broad instance families:

  • Memory-optimized (r-series): r6i, r6g, r7g, x2g - the right choice for most production database workloads
  • Burstable (t-series): t3, t4g - cost-effective for dev/test; carry a real risk in production (see below)
  • Optimized Reads (Aurora PostgreSQL only): r6gd, r6id - adds local NVMe SSD for tiered caching at no extra cost for the feature itself

Graviton instances - db.r7g (Graviton3) and db.r6g/db.t4g (Graviton2) - deliver better price-performance than Intel equivalents at equivalent or lower hourly rates. AWS Compute Optimizer actively recommends Graviton migrations when your workload profile supports it. If you are running r6i today and haven't evaluated r7g, that is a straightforward opportunity to reduce compute costs.

Burstable Instances and the CPU Credit Trap

T4g and T3 Aurora instances run in Unlimited mode by default. This means if your average CPU utilization exceeds the baseline over a rolling 24-hour period, you get charged for CPU credits at $0.09 per vCPU-hour.

This rate is not covered by Reserved Instances and applies to both Aurora Standard and I/O-Optimized. It is also not capped. A t3.medium running consistently at 80% CPU on a workload with a 20% baseline can accumulate CPU credit charges that dwarf the base instance cost.

Burstable instances are fine for development databases with light, intermittent traffic. For production workloads with real user traffic, run Compute Optimizer's idle/over-provisioned analysis before committing to t-series. The credit charges that appear after a few weeks of sustained load are a common surprise.
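To see how large those credit charges can get, here is a simplified estimate. It assumes sustained load; actual billing uses a rolling 24-hour credit balance, but under continuous load above baseline the charge converges to roughly this:

```typescript
// Rough surplus CPU credit charge under Unlimited mode (simplified model:
// real billing tracks a rolling 24-hour credit balance, but sustained load
// above baseline converges to this estimate).
const SURPLUS_RATE = 0.09; // $/vCPU-hour, from the rate quoted above

function monthlyCreditCharge(
  vcpus: number, avgUtil: number, baseline: number, hours = 730,
): number {
  const surplusFraction = Math.max(0, avgUtil - baseline);
  return surplusFraction * vcpus * hours * SURPLUS_RATE;
}

// db.t3.medium: 2 vCPUs, ~20% baseline, running at a sustained 80%
const t3MediumCredits = monthlyCreditCharge(2, 0.80, 0.20); // ~$78.84/month
```

At roughly $79/month in credit charges, the "cheap" t3.medium can cost more in credits than in base instance hours.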

Compute is predictable. The next dimension - storage and I/O - is where most Aurora surprise bills originate.

Aurora Storage and I/O Pricing

The Standard vs I/O-Optimized decision is the single highest-impact configuration choice you make for an Aurora cluster. Get it right and you can reduce total cost by up to 40%. Get it wrong and you will overpay on storage or overpay on I/O - depending on which direction you chose incorrectly.

Aurora Standard: Storage + Per-Million I/O Charges

Aurora Standard in us-east-1 charges:

  • Storage: $0.10/GB-month, auto-scaling with no pre-provisioning required
  • I/O: $0.20 per million I/O requests (reads and writes both charged)

I/O is measured at the storage layer, not the SQL layer. Writes are counted in 4 KB units; reads are counted per database page (16 KB for Aurora MySQL, 8 KB for Aurora PostgreSQL). A single SQL UPDATE touching multiple pages generates multiple I/O charges.

For read-heavy OLTP workloads, this adds up fast. A moderate production database generating 500 million I/Os per month accumulates $100 in I/O charges alone - on top of whatever your instances cost. High-write workloads can hit I/O costs that exceed compute costs entirely.
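The arithmetic is linear in request volume, which is what makes I/O the runaway line item:

```typescript
// Aurora Standard I/O charge: $0.20 per million storage-layer requests.
const IO_RATE_PER_MILLION = 0.20;

const ioCost = (iosPerMonth: number): number =>
  (iosPerMonth / 1_000_000) * IO_RATE_PER_MILLION;

const moderate = ioCost(500_000_000);   // ~$100/month
const busy = ioCost(2_000_000_000);     // ~$400/month
```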

Aurora I/O-Optimized: Higher Storage Rate, Zero I/O Charges

Aurora I/O-Optimized eliminates per-request I/O charges entirely. The tradeoff:

  • Storage: $0.225/GB-month (2.25x the Standard rate)
  • Instance pricing: approximately 30% higher per normalized unit (db.r6i.large goes from $0.29/hr to $0.377/hr in us-east-1)
  • I/O charges: $0 for all reads and writes

One important exception: replicated write I/O charges for Aurora Global Database still apply even under I/O-Optimized. This only affects Global Database users, but it is a common misunderstanding.

Switching between Standard and I/O-Optimized is a cluster-level configuration change. No data migration required.

When to Switch to I/O-Optimized (the 25% Rule with Real Math)

The threshold rule: if your I/O spend exceeds 25% of your total Aurora bill, I/O-Optimized reduces total cost.

Here's the worked math. Scenario: a 1,000 GB database growing ~2% daily (20 GB/day), 350 read pages/sec (16 KB), 100 write pages/sec (4 KB), over 30 days:

| Configuration | Storage Cost | I/O Cost | Total |
| --- | --- | --- | --- |
| Aurora Standard | $129.00 | $233.28 | $362.28 |
| Aurora I/O-Optimized | $290.25 | $0.00 | $290.25 |

For this I/O-heavy workload, I/O-Optimized delivers 19.8% savings on storage and I/O costs alone - before accounting for the compute cost difference.
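Those figures can be reproduced directly from the scenario's rates. One assumption: average billed storage of ~1,290 GB, which is what the $129.00 Standard storage figure implies at $0.10/GB-month:

```typescript
// Reproducing the Standard vs I/O-Optimized comparison above.
const SECONDS_PER_MONTH = 30 * 86_400;
const avgStorageGb = 1_290; // implied by $129.00 at $0.10/GB-month
const iosPerMonth = (350 + 100) * SECONDS_PER_MONTH; // read + write pages/sec

const standardTotal =
  avgStorageGb * 0.10 +                     // storage: $129.00
  (iosPerMonth / 1_000_000) * 0.20;         // I/O:     $233.28

const ioOptimizedTotal =
  avgStorageGb * 0.225;                     // storage only, zero I/O: $290.25
```

Swap in your own storage size and pages-per-second rates to run the same comparison for your workload.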

For a low-I/O workload, the math flips. If you are paying $200/month in compute and $10/month in I/O (5% of total), the higher storage and instance rates of I/O-Optimized make it more expensive.

Plug your actual storage size and I/O rate into the Aurora Pricing Calculator to see whether Standard or I/O-Optimized is cheaper for your specific workload.

How to check your actual I/O percentage: Cost Explorer, filter by Amazon RDS, group by Usage Type, look for RDS:StorageIOUsage as a share of total Aurora charges. If it is above 25%, switch.

Aurora MySQL vs Aurora PostgreSQL: Does the Engine Affect Your Bill?

Yes, and in a few specific ways that competitors rarely discuss.

Instance pricing: Aurora PostgreSQL instances typically run approximately 5-10% higher than MySQL equivalents for the same instance class. The gap varies by instance type and region, so verify against the current pricing page for your specific configuration.

I/O behavior: PostgreSQL generates more write I/O due to full-page writes and autovacuum background processes. If I/O cost is a concern and both engines work for your use case, MySQL typically generates fewer write I/Os under equivalent workloads.

Engine-specific features with cost implications:

  • Backtrack (MySQL only): requires paying an hourly rate for storing change records. There's no equivalent for PostgreSQL.
  • Optimized Reads (PostgreSQL only): instances like db.r6gd and db.r6id include local NVMe caching at no extra cost for the feature, enabling up to 8x improved query latency and up to 30% cost savings on read-heavy I/O-intensive workloads. I/O-Optimized configuration is required for tiered caching specifically.
  • Fast DDL (MySQL): allows schema changes that avoid I/O overhead, reducing I/O charges for schema-heavy workflows.

None of this should drive engine selection by itself - compatibility and developer familiarity matter more. But if you're already choosing and I/O costs are a concern, MySQL has a slight edge.

If your workload has unpredictable demand or you're managing dev/test environments, Aurora Serverless v2 changes the cost equation significantly.

Aurora Serverless v2 Pricing

Serverless v2 is an autoscaling instance type within a standard Aurora cluster. It scales by the second based on actual load, which makes it cost-effective for variable workloads and genuinely convenient for dev/test environments.

Understanding when it saves money versus when it costs more than provisioned requires a bit of math.

How ACU Pricing Works

Aurora Capacity Units (ACUs) are the billing metric for Serverless v2. One ACU contains approximately 2 GiB of memory with corresponding CPU and networking.

Pricing in us-east-1:

  • Aurora Standard: $0.12/ACU-hour
  • Aurora I/O-Optimized: $0.156/ACU-hour

Capacity range: minimum 0.5 ACU (or 0 with automatic pause), maximum 256 ACU (512 GiB memory). Scaling happens in fine-grained increments measured per second, so you pay only for the capacity actually consumed.

One thing that trips people up in CDK and CloudFormation: Serverless v2 uses engineMode: provisioned, not serverless. The serverless engine mode is for the older Serverless v1. If you see examples online using engineMode: serverless, they are describing v1.

Aurora I/O-Optimized is compatible with Serverless v2 - you can run an all-Serverless v2 cluster on I/O-Optimized. The higher ACU-hour rate ($0.156 vs $0.12) applies, and you get zero I/O charges.

Scale-to-Zero for Dev/Test Environments

Set the minimum ACU to 0 to enable automatic pause. When the cluster has no active connections, compute charges drop to zero. Storage costs continue, but for a small development database, that might be $1-5/month.

Resume time is approximately 15 seconds - acceptable for dev/test environments, not for production applications with users.

Instances will NOT auto-pause when any of these conditions apply:

  • User-initiated connections are active
  • Logical replication is enabled
  • RDS Proxy is attached
  • The cluster is a primary or secondary in a Global Database

In CDK, set serverlessV2MinCapacity: 0 to enable auto-pause. Set it to 0.5 for production workloads where you want a hot minimum.
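What does scale-to-zero actually save? A sketch for a hypothetical dev database - the active hours, ACU level, and 20 GB storage figure are assumptions for illustration, not AWS-published numbers:

```typescript
// Hypothetical dev database with auto-pause (min ACU = 0):
// active ~8 hours per weekday at ~1 ACU, paused the rest of the time.
const ACU_RATE_STANDARD = 0.12; // $/ACU-hour, us-east-1, Aurora Standard
const activeHours = 8 * 22;     // ~22 workdays per month
const avgAcusWhileActive = 1;

const computeCost = activeHours * avgAcusWhileActive * ACU_RATE_STANDARD; // ~$21
const storageCost = 20 * 0.10;  // 20 GB at Aurora Standard storage rates
const total = computeCost + storageCost;                                  // ~$23
```

Compare that to roughly $88/month for the same 1-ACU capacity running 730 hours without pause - the idle hours are most of the bill for dev databases.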

When Serverless v2 Costs More Than Provisioned

Here's the math nobody tells you. Serverless v2 has no Reserved Instance equivalent - you cannot purchase reserved capacity for ACU-hours. For sustained high-utilization workloads, provisioned plus Reserved Instances will beat Serverless v2 on total cost.

The calculation: a workload running at 10 ACUs for 8 hours and 2 ACUs for the remaining 16 hours per day consumes 112 ACU-hours per day with Serverless v2 ((10 x 8) + (2 x 16)). A provisioned instance sized to peak runs at full cost for all 24 hours.

If your workload runs at near-peak capacity more than 60-70% of the time, provisioned plus a 1-year Reserved Instance becomes cheaper once you factor in the RI discount.
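The comparison can be sketched with the example workload above. The provisioned hourly rate below is a placeholder - look up the real price for an instance sized to your 10-ACU peak:

```typescript
// The example workload above: 10 ACUs for 8h, 2 ACUs for 16h, each day.
const acuHoursPerDay = 10 * 8 + 2 * 16;            // 112 ACU-hours/day
const serverlessDaily = acuHoursPerDay * 0.12;     // ~$13.44/day on Standard

// Hypothetical provisioned instance sized to the 10-ACU peak
// (rate is a placeholder, not a published AWS price).
const provisionedHourly = 0.50;
const riDiscount = 0.45;                           // up to 45% on a 1-year RI
const provisionedDaily = provisionedHourly * 24 * (1 - riDiscount); // ~$6.60/day
```

Under these assumptions the RI-discounted provisioned instance wins despite running 24 hours; flatten the load curve toward the 2-ACU trough and Serverless v2 wins instead.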

The hybrid pattern that works well in practice: provisioned writer (stable baseline, RI-eligible) plus Serverless v2 readers (flexible read scaling, no RI needed for the variable portion). You get predictable write performance with cost-flexible read capacity.

Beyond compute and storage, Aurora has several additional billing dimensions that appear as separate line items on your bill.

Additional Billing Dimensions

Most of these are smaller line items, but some can surprise you at scale.

Data Transfer Costs (the Free vs Charged Tiers)

Aurora data transfer has more free tiers than most people expect, and one charged tier that catches teams off guard.

Free:

  • Data in from the internet or other AWS services
  • Same-AZ transfers between EC2 and Aurora
  • Inter-AZ replication for DB cluster internal replication (the storage layer replication is free)

Charged:

  • EC2 and Aurora in different AZs within the same region: EC2 Regional Data Transfer rates apply
  • Cross-region transfers: standard AWS data transfer rates at source and destination
  • Data out to the internet: standard rates (first 100 GB/month free, aggregated across all services)

The important nuance: "inter-AZ replication is free" applies only to Aurora's internal cluster replication. If your EC2 application is in us-east-1a and your Aurora writer is in us-east-1b, every query incurs cross-AZ data transfer charges. More on this in the hidden costs section.

Backup Storage

Aurora backup pricing is more generous than most people assume:

  • Free: up to 100% of your provisioned cluster size in automated backup storage
  • Free: all snapshots created within the active backup retention period
  • Charged: backup storage beyond 100% of cluster size
  • Charged: snapshots from deleted clusters

The last point is the trap. When you delete a cluster, manual snapshots are not automatically deleted. See the hidden costs section for how to avoid ongoing charges from orphaned snapshots.

Aurora Global Database

Global Database lets a single Aurora cluster span multiple AWS regions for low-latency local reads and cross-region disaster recovery. The additional charges beyond per-region standard pricing:

  • Replicated write I/Os: $0.20 per million replicated write I/Os (us-east-1 to us-west-2 rate)
  • Each secondary region bills its own instance hours and storage at normal rates
  • Cross-region data transfer at standard AWS rates

A worked example: primary in us-east-1 (2 db.r6i.large instances) plus secondary in us-west-2 (1 db.r6i.large reader), 80 GB storage, 45M write I/Os and 5M read I/Os per month - total monthly cost of $673.88. That's the price of active-active multi-region resilience on provisioned instances.

Note for Global Database users on I/O-Optimized: the zero I/O charges apply to local reads and writes, but replicated write I/O charges between regions still apply under I/O-Optimized.

For multi-account and multi-region cost strategies, the approach to multi-account cost savings is worth reading alongside Global Database cost planning.

Data API, Optimized Reads, and Aurora Limitless

Data API provides a secure HTTPS interface for Aurora queries without network configuration. Pricing:

  • $0.35 per million API requests (first 1 billion/month)
  • $0.20 per million API requests above 1 billion/month
  • 32 KB billing unit - payloads are rounded up to the next 32 KB increment, and each increment is charged as an additional request
  • Free tier: 1 million requests/month for the first year
  • Additional: Secrets Manager charges apply for authentication
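The 32 KB rounding is the detail that inflates Data API bills for large payloads. A sketch of the billing-unit math (the 90 KB payload and 10M calls/month are illustrative assumptions):

```typescript
// Data API billing: each request is charged per 32 KB increment of payload.
const RATE_FIRST_TIER = 0.35; // $/million requests (first 1B/month)

function billedUnits(payloadKb: number): number {
  return Math.max(1, Math.ceil(payloadKb / 32));
}

// A 90 KB response counts as 3 billable requests, not 1.
const units = billedUnits(90); // 3
// 10M calls/month at 90 KB each:
const monthlyCost = (10_000_000 * units / 1_000_000) * RATE_FIRST_TIER; // ~$10.50
```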

Optimized Reads (Aurora PostgreSQL only): no incremental cost. The instance price includes NVMe caching. Tiered caching requires I/O-Optimized configuration specifically.

Aurora PostgreSQL Limitless: a horizontal scaling capability for workloads requiring millions of write TPS. It bills on ACU-hours within a DB shard group. This is a niche capability with separate pricing - check the Aurora pricing page if you are evaluating it.

The billing dimensions above are all listed on the Aurora pricing page. The next section covers costs that do not appear as prominently - the ones that drive community frustration.

Hidden Costs That Surprise Aurora Teams

These are the charges that generate "why is my Aurora bill $400 higher than expected?" posts. A cost optimization assessment almost always surfaces at least one of these in Aurora-heavy environments.

I/O Bills That Exceed Your Compute Cost

This is the most common surprise. For OLTP workloads with frequent small transactions, I/O is often the largest single line item on the bill - exceeding even instance costs.

At $0.20 per million I/Os, a workload generating 500 million I/Os per month produces $100/month in I/O charges. A busy production database can easily hit 1-2 billion I/Os per month. That's $200-400/month in I/O alone on top of compute.

How to check: Cost Explorer, filter by Amazon RDS, group by Usage Type, find RDS:StorageIOUsage as a percentage of total Aurora charges. If it is above 25%, switch to I/O-Optimized.

For raw I/O numbers from CloudWatch:

aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name VolumeReadIOPs \
  --dimensions Name=DbClusterIdentifier,Value=YOUR_CLUSTER_ID \
  --start-time 2026-03-01T00:00:00Z \
  --end-time 2026-03-31T23:59:59Z \
  --period 2592000 \
  --statistics Sum

aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name VolumeWriteIOPs \
  --dimensions Name=DbClusterIdentifier,Value=YOUR_CLUSTER_ID \
  --start-time 2026-03-01T00:00:00Z \
  --end-time 2026-03-31T23:59:59Z \
  --period 2592000 \
  --statistics Sum

Snapshots That Survive After Cluster Deletion

When you delete an Aurora cluster, manual snapshots are NOT automatically deleted. They continue billing at standard backup storage rates indefinitely.

This is particularly common in dev/test environments where clusters are spun up and down frequently. Create a cluster for a sprint, delete it when done, forget about the manual snapshot taken before deletion - and pay for it for the next 18 months.

How to find orphaned snapshots:

aws rds describe-db-cluster-snapshots \
  --snapshot-type manual \
  --query 'DBClusterSnapshots[*].[DBClusterSnapshotIdentifier,DBClusterIdentifier,SnapshotCreateTime,AllocatedStorage]' \
  --output table

Look for snapshot cluster identifiers that no longer match active clusters. Prevention: tag snapshots with cluster lifecycle metadata, and automate cleanup as part of any cluster deletion script.

Cross-AZ Data Transfer on EC2-to-Aurora Traffic

Aurora's internal cluster replication across Availability Zones is free. Application traffic between EC2 and Aurora in different AZs is not.

If your EC2 application runs in us-east-1a and your Aurora writer is in us-east-1b, every query incurs cross-AZ data transfer at EC2 Regional Data Transfer rates (~$0.01/GB in each direction). At 100 GB/month of application-to-database traffic, that adds $2/month. Not catastrophic, but it compounds - and at 1 TB/month, you are paying $20/month in transfer costs that could be zero.
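The charge applies in both directions, so roughly symmetric request/response traffic doubles the per-GB rate:

```typescript
// Cross-AZ transfer: ~$0.01/GB in each direction, so ~$0.02/GB round trip
// if request and response traffic are roughly symmetric.
const crossAzCost = (gbPerMonth: number): number => gbPerMonth * 0.01 * 2;

const light = crossAzCost(100);   // ~$2/month
const heavy = crossAzCost(1_000); // ~$20/month
```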

The fix: deploy your application and Aurora writer in the same AZ when latency and cost matter more than AZ distribution. Or accept the cost as the price of Multi-AZ application resilience.

RDS Extended Support for Old Engine Versions

This one can add hundreds of dollars per month with no warning if you have not planned for it.

Aurora PostgreSQL 12 reached community end-of-life on February 28, 2025. If you are running Aurora PostgreSQL 12 today, you are in Extended Support and being charged:

  • Year 1-2 (March 1, 2025 - February 28, 2027): $0.10 per vCPU-hour
  • Year 3 (starting March 1, 2027): $0.20 per vCPU-hour

On a db.r6g.2xlarge (8 vCPUs), that is $0.10 x 8 x 730 hours = $584/month in Extended Support fees alone. On top of normal instance costs.
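The fee scales linearly with vCPU count and doubles in year 3, so it is worth pricing out for every cluster still on an EOL engine:

```typescript
// Extended Support fee: per-vCPU-hour rate on top of normal instance cost.
function extendedSupportMonthly(vcpus: number, ratePerVcpuHour: number): number {
  return ratePerVcpuHour * vcpus * 730; // 730-hour billing month
}

const year1 = extendedSupportMonthly(8, 0.10); // db.r6g.2xlarge: ~$584/month
const year3 = extendedSupportMonthly(8, 0.20); // doubles to ~$1,168/month
```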

Aurora MySQL users get more grace - at least 1 year after community end-of-life before Extended Support charges begin.

Fix: plan and execute major version upgrades before EOL dates. In CloudFormation and CDK, set EngineLifecycleSupport to open-source-rds-extended-support-disabled. This causes deployments to fail rather than silently enter Extended Support - you want to know about it before it happens.

Blue/Green Deployment Temporary Cost Doubling

Aurora Blue/Green deployments create a complete copy of your production cluster for zero-downtime major version upgrades. During the deployment window, you run two full clusters simultaneously.

For a $600/month production cluster, a 3-day Blue/Green window adds approximately $60 in temporary costs. Plan for it, and keep the window as short as possible. Have automation ready to delete the old cluster immediately after the switchover confirms success.

Backtrack, Zero-ETL Overhead, and Enhanced Monitoring Logs

Three smaller but real costs that accumulate:

Backtrack (Aurora MySQL only): you pay an hourly rate for storing change records for the configured backtrack window. The exact rate is listed on the Aurora pricing page. If you have set a 24-hour backtrack window and never use the feature, you are paying for change record storage continuously.

Zero-ETL integrations: AWS charges no additional fee for the Zero-ETL integration itself between Aurora and Redshift or SageMaker. However, enabling enhanced binlog increases write I/O significantly, and the initial setup triggers a snapshot export at $0.010/GB (full snapshot size charged, not just the exported table). On a 100 GB database, that is $1.00 for the first export, and $1.00 every subsequent full export.
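The full-snapshot detail matters at scale - the export is billed on the whole snapshot, not the slice you actually integrate:

```typescript
// Zero-ETL initial snapshot export: $0.010/GB, charged on the full
// snapshot size -- not just the tables being exported.
const exportCost = (snapshotGb: number): number => snapshotGb * 0.010;

const small = exportCost(100);   // ~$1.00 (the 100 GB example above)
const large = exportCost(5_000); // ~$50.00 per full export at 5 TB
```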

Enhanced Monitoring: publishes metrics to CloudWatch Logs at 1-60 second granularity. CloudWatch Logs ingestion and storage charges apply. For a cluster where you have enabled 1-second Enhanced Monitoring granularity across writer and multiple readers, this can add $10-30/month depending on your CloudWatch configuration.

RDS Proxy: if you use RDS Proxy for connection pooling, add $0.015/vCPU-hour, billed against the vCPU count of the target database instance.

Aurora DSQL represents a completely different billing model - worth understanding if you are designing a new application or evaluating options for multi-region workloads.

Aurora DSQL Pricing: The Serverless Alternative Built for Multi-Region

Aurora DSQL is a fully serverless, distributed SQL database with PostgreSQL compatibility. It is a separate product from Aurora MySQL/PostgreSQL with its own pricing page and billing model. Nothing about DSQL pricing maps directly to ACU-hours or storage I/O charges.


How DPU Billing Works

DSQL bills on Distributed Processing Units (DPUs). A DPU is a unified billing metric that combines compute AND I/O into a single number. This is the fundamental difference from standard Aurora - there is no separate I/O line item.

DPU sub-components (visible in CloudWatch for cost visibility):

  • ComputeDPU: query execution (joins, functions, aggregations)
  • ReadDPU: reads from storage
  • WriteDPU: writes to storage
  • MultiRegionWriteDPU: cross-region replication writes, charged equal to originating write DPUs

Pricing in us-east-1 and us-east-2: $8.00 per million DPUs. Storage: $0.33/GB-month.

DSQL scales to zero when idle. No DPU charges when the cluster has no activity.

For multi-region clusters, each region bills independently. Cross-region write replication generates MultiRegionWriteDPU charges in the source region equal to the originating write DPUs. There are no separate data transfer charges for inter-region replication - that cost is embedded in the DPU charges.

DSQL vs Aurora Serverless v2: Which Is Cheaper?

Two real examples from Aurora's pricing documentation:

Single-region gaming application (avg 0.5 DPU/sec, peak 2 DPU/sec, idle 0.1 DPU/sec, 15 GB storage):

  • Total monthly DPUs: 1.314M (0.7M Write + 0.4M Read + 0.214M Compute)
  • Monthly bill: (1.314M x $8/M) + (15 GB x $0.33) = $10.51 + $4.95 = $15.46

Multi-region banking application (two regions, 50 GB total storage):

  • Region 1 (N. Virginia): 6M DPUs (including 2M MultiRegionWrite), 25 GB storage
  • Region 2 (Ohio): 2.312M DPUs, 25 GB storage
  • Monthly bill: (6M x $8/M) + (2.312M x $8/M) + (50 GB x $0.33) = $48 + $18.50 + $16.50 = $83.00

To compare against Serverless v2 for the same single-region workload: if the gaming app averaged 2 ACUs, that is 2 x $0.12 x 720 = $172.80/month on Serverless v2 Standard. DSQL's DPU model scales more finely for very spiky or intermittent workloads where ACU minimums create overhead.
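Both of those monthly figures fall out of the same two-term formula - DPUs times rate, plus storage (using the 720-hour, 30-day month the DSQL examples are built on):

```typescript
// Reproducing the single-region DSQL example and the Serverless v2 comparison.
const DPU_RATE = 8.0 / 1_000_000; // $8 per million DPUs (us-east-1/us-east-2)
const DSQL_STORAGE = 0.33;        // $/GB-month

const dsqlBill = 1_314_000 * DPU_RATE + 15 * DSQL_STORAGE; // ~$15.46

// Same workload at a steady 2 ACUs on Serverless v2 Standard (720h month):
const sv2Bill = 2 * 0.12 * 720;                            // ~$172.80
```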

A few decision criteria:

DSQL wins when: you need active-active multi-region (DSQL was designed for this), you have very spiky or near-zero traffic where even 0.5 ACU minimum creates overhead, or you want zero connection management complexity.

Serverless v2 wins when: your workload is single-region, you need PostgreSQL extensions or stored procedures that DSQL does not support, or you want the full Aurora feature set. DSQL does not support all PostgreSQL features - evaluate compatibility before choosing.

If you are evaluating DSQL as an alternative to DynamoDB for relational workloads, the DynamoDB pricing breakdown covers the on-demand and provisioned pricing models for comparison.

One important note on discounts: both DSQL and Serverless v2 are eligible for Database Savings Plans (up to 35% savings with a 1-year commitment). Neither is eligible for Reserved Instances. This means Database Savings Plans are your only discount path for both serverless Aurora options.

The DSQL Free Tier

DSQL has the only meaningful free tier in the Aurora product family:

  • 100,000 DPUs per month: free, ongoing, no 12-month expiration
  • 1 GB of storage per month: free, ongoing

At 100,000 free DPUs per month, a small workload averaging about 0.038 DPU/sec stays within the free tier (0.04 DPU/sec works out to roughly 104,000 DPUs over a 30-day month, just above the limit). That is sufficient for personal projects, development databases, or low-traffic internal applications.

This is not the case for provisioned Aurora or Serverless v2. Neither has a free tier of any kind.

Understanding what Aurora costs is the first step. Reducing what Aurora costs is where the real savings come from.

How to Cut Your Aurora Bill

Aurora cost optimization is not complicated, but it does require knowing which lever to pull for your specific situation.

The AWS cost optimization maturity framework puts database pricing model analysis at the center of FinOps maturity - and Aurora is one of the highest-impact services to optimize because the cost levers have large multipliers.

Reserved Instances vs Database Savings Plans

Both options reduce Aurora compute costs versus On-Demand. The right choice depends on how certain you are about your workload's configuration over the next 1-3 years.

Reserved Instances:

  • 1-year term: up to 45% discount over On-Demand
  • 3-year term: up to 66% discount over On-Demand
  • Size-flexible within the same instance family and region (a db.r6g.large RI also covers partial usage of a db.r6g.xlarge)
  • Locked to a specific instance family and region
  • Covers provisioned instances only - Serverless v2 ACU-hours are not eligible
  • Up to 40 RIs purchasable per account

If you switch from Standard to I/O-Optimized after purchasing Standard RIs: I/O-Optimized consumes 30% more normalized units per hour than Standard. You need to purchase approximately 30% additional RIs to maintain full RI coverage. For example, 10 db.r6g.large Standard RIs would need 3 additional db.r6g.large RIs for I/O-Optimized coverage.

Database Savings Plans:

  • 1-year term: up to 35% discount
  • No upfront payment required
  • Applies automatically across engines, instance families, sizes, regions, and deployment options
  • Covers Aurora, RDS, Aurora Serverless v2, Aurora DSQL, DynamoDB, ElastiCache, DocumentDB, Neptune, and more
  • Cannot be combined with Reserved Instances for the same workload

The guide to all four AWS Savings Plans types covers the broader context. For Aurora specifically, the decision is:

Choose RIs when your workload is stable - same engine, same region, same instance family for 1-3 years. Choose Database Savings Plans when you expect changes or when you want a single discount program covering multiple database services.

Right-Sizing with AWS Compute Optimizer

Compute Optimizer analyzes CloudWatch CPU utilization, network throughput, and DB connections to categorize each instance:

  • Optimized: sized appropriately for the workload
  • Over-provisioned: excess capacity that could be reduced
  • Under-provisioned: insufficient resources
  • Idle: candidates for stopping, converting to Serverless v2, or deletion

Enable Performance Insights (CloudWatch Database Insights) for deeper analysis including DBLoad, swap utilization, and memory kill counts. As of June 2025, Cost Optimization Hub integrates Aurora recommendations across organizational member accounts, quantifying potential savings while accounting for existing RIs and Savings Plans.

The Compute Optimizer blog post from AWS walks through the exact metric thresholds used for each category if you want to understand the recommendations.

How to Audit Your Current Aurora Costs in Cost Explorer

A systematic five-step audit identifies where your money is actually going:

Step 1: In Cost Explorer, group by "Usage Type" and filter to Service = "Amazon RDS". Look for Aurora-prefixed usage types to isolate Aurora spend from standard RDS.

Step 2: Identify RDS:StorageIOUsage as a percentage of total Aurora spend. Above 25%: switch to I/O-Optimized. Below 25%: Standard is cost-optimal.
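To sanity-check the 25% rule against your own numbers, you can model both billing modes directly. A rough sketch using the us-east-1 rates cited in this article ($0.10 vs $0.225/GB-month storage, $0.20 per million I/O requests, ~30% instance-rate premium on I/O-Optimized); verify current rates before acting on the result:

```typescript
interface AuroraWorkload {
  computeMonthly: number; // On-Demand instance spend at Standard rates, USD
  storageGb: number;      // average stored GB for the month
  ioMillions: number;     // I/O requests, in millions
}

// Aurora Standard: cheap storage, metered I/O
function standardCost(w: AuroraWorkload): number {
  return w.computeMonthly + w.storageGb * 0.10 + w.ioMillions * 0.20;
}

// I/O-Optimized: ~30% compute premium, pricier storage, zero I/O charges
function ioOptimizedCost(w: AuroraWorkload): number {
  return w.computeMonthly * 1.3 + w.storageGb * 0.225;
}

const w = { computeMonthly: 400, storageGb: 100, ioMillions: 1000 };
console.log(standardCost(w));    // ≈ 610 - I/O is ~33% of spend, above the 25% line
console.log(ioOptimizedCost(w)); // ≈ 542.5 - cheaper, as the 25% rule predicts
```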

Step 3: Run the CloudWatch I/O metric commands above to get actual VolumeReadIOPs and VolumeWriteIOPs over the past 30 days. Cross-check with Cost Explorer to confirm the estimate.

Step 4: Check for orphaned manual snapshots from deleted clusters. Use aws rds describe-db-cluster-snapshots --snapshot-type manual and look for snapshots whose cluster identifiers no longer exist.
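The cross-check itself is a set difference. A sketch of the filter (the object shape mirrors the `describe-db-cluster-snapshots` JSON output; the helper function is ours):

```typescript
// Flag manual snapshots whose parent cluster no longer exists.
// Field names follow `aws rds describe-db-cluster-snapshots` output.
interface ClusterSnapshot {
  DBClusterSnapshotIdentifier: string;
  DBClusterIdentifier: string; // parent cluster, which may have been deleted
  AllocatedStorage: number;    // GB - still billing every month
}

function orphanedSnapshots(
  snapshots: ClusterSnapshot[],
  liveClusterIds: string[],
): ClusterSnapshot[] {
  const live = new Set(liveClusterIds);
  return snapshots.filter((s) => !live.has(s.DBClusterIdentifier));
}
```

Feed it the parsed output of the snapshot listing plus the cluster identifiers from `aws rds describe-db-clusters`; anything it returns is a deletion candidate.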

Step 5: Check EngineLifecycleSupport on each cluster. If any cluster shows Extended Support without intentional configuration, you are paying $0.10-0.20/vCPU-hour unnecessarily.
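To put a monthly number on that, a back-of-the-envelope sketch using the Extended Support rates above (vCPU counts are per instance; a db.r6g.large has 2 vCPUs):

```typescript
const HOURS_PER_MONTH = 730;

// Monthly Extended Support charge for one instance.
// rate: $0.10/vCPU-hour in Years 1-2, $0.20/vCPU-hour in Year 3.
function extendedSupportMonthly(vcpus: number, ratePerVcpuHour: number): number {
  return vcpus * ratePerVcpuHour * HOURS_PER_MONTH;
}

console.log(extendedSupportMonthly(2, 0.10)); // ≈ 146/month - one db.r6g.large, Years 1-2
console.log(extendedSupportMonthly(2, 0.20)); // ≈ 292/month - same instance in Year 3
```

Multiply across a cluster with readers and the unintentional line item adds up quickly.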

IaC Patterns for Cost-Conscious Aurora Deployments (CDK)

Infrastructure as Code gives you guardrails against expensive misconfigurations. Here are the patterns worth encoding from the start.

Provisioned cluster with Graviton3 (best price-performance for steady workloads):

import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'Database', {
  engine: rds.DatabaseClusterEngine.auroraPostgres({
    version: rds.AuroraPostgresEngineVersion.VER_16_2,
  }),
  writer: rds.ClusterInstance.provisioned('writer', {
    // Graviton3 - better price-performance than Intel equivalents
    instanceType: ec2.InstanceType.of(ec2.InstanceClass.R7G, ec2.InstanceSize.LARGE),
  }),
  readers: [
    rds.ClusterInstance.provisioned('reader1', {
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.R7G, ec2.InstanceSize.LARGE),
    }),
  ],
  vpc,
});

Hybrid pattern: provisioned writer plus Serverless v2 readers (flexible read scaling):

const cluster = new rds.DatabaseCluster(this, 'Database', {
  engine: rds.DatabaseClusterEngine.auroraMysql({
    version: rds.AuroraMysqlEngineVersion.VER_3_05_2,
  }),
  writer: rds.ClusterInstance.provisioned('writer', {
    instanceType: ec2.InstanceType.of(ec2.InstanceClass.R6G, ec2.InstanceSize.XLARGE4),
  }),
  // ACU range set at cluster level
  // Production: set to 0.5 minimum for warm instance
  // Dev/test: set to 0 to enable auto-pause (compute charges stop when idle)
  serverlessV2MinCapacity: 0.5,
  serverlessV2MaxCapacity: 64,
  readers: [
    // Scales with the writer for high availability
    rds.ClusterInstance.serverlessV2('reader1', { scaleWithWriter: true }),
    // Independent scaling for read traffic
    rds.ClusterInstance.serverlessV2('reader2'),
  ],
  vpc,
});

Key CloudFormation properties to encode explicitly:

  • StorageType: aurora-iopt1 for I/O-Optimized clusters (default is aurora for Standard)
  • EngineLifecycleSupport: open-source-rds-extended-support-disabled - this causes deployment to fail rather than silently entering Extended Support. You want this failure to be explicit.
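In CDK, the storage type is a first-class cluster property, while the lifecycle setting can be pinned through the L1 escape hatch. A sketch, assuming a recent aws-cdk-lib; verify both property names against your CDK version:

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'Database', {
  engine: rds.DatabaseClusterEngine.auroraPostgres({
    version: rds.AuroraPostgresEngineVersion.VER_16_2,
  }),
  writer: rds.ClusterInstance.provisioned('writer', {
    instanceType: ec2.InstanceType.of(ec2.InstanceClass.R7G, ec2.InstanceSize.LARGE),
  }),
  // Renders as StorageType: aurora-iopt1 - stated explicitly, not left to default
  storageType: rds.DBClusterStorageType.AURORA_IOPT1,
  vpc,
});

// EngineLifecycleSupport via the L1 escape hatch: make the deployment fail
// rather than silently enter paid Extended Support.
const cfnCluster = cluster.node.defaultChild as rds.CfnDBCluster;
cfnCluster.engineLifecycleSupport = 'open-source-rds-extended-support-disabled';
```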

For broader AWS CDK best practices around infrastructure as code, the patterns around resource defaults and policy guardrails apply directly to database cluster configuration.

Conclusion: Estimate Your Aurora Costs Before You Deploy

Aurora pricing has six independent dimensions. Most teams estimate two - compute and storage - and then encounter I/O charges, Extended Support fees, or orphaned snapshot costs as surprises.

The highest-impact decisions, in order:

  1. Standard vs I/O-Optimized: check your I/O percentage in Cost Explorer. If it exceeds 25% of total Aurora spend, switching to I/O-Optimized saves up to 40%. This is the single largest lever for existing clusters.
  2. Provisioned vs Serverless v2: Serverless v2 wins for variable workloads and dev/test. For sustained high-utilization workloads, provisioned plus Reserved Instances is cheaper - especially with 3-year RIs at up to 66% savings.
  3. Reserved Instances vs Database Savings Plans: stable configuration means RIs, expected changes or mixed database services means Savings Plans.
  4. Engine version: running Aurora PostgreSQL 12 or another EOL version today means you are paying Extended Support fees. Plan the upgrade.

Aurora DSQL is the only Aurora variant with a meaningful free tier and the only one designed for active-active multi-region. If you are building a new application and need multi-region writes, it deserves evaluation before you default to Serverless v2.

Once you account for all six dimensions, your Aurora bill becomes predictable. The Aurora Pricing Calculator lets you model your specific workload with actual numbers before you deploy.

Shift-Left Your FinOps Practice

Move cost awareness from monthly bill reviews to code review. CloudBurn shows AWS cost impact in every PR, empowering developers to make informed infrastructure decisions.

Frequently Asked Questions

Is there a free tier for Amazon Aurora?
No. Aurora is not included in the AWS RDS Free Tier, which only covers RDS MySQL, MariaDB, PostgreSQL, and SQL Server Express. The one exception is Aurora DSQL, which has an ongoing free tier of 100,000 DPUs and 1 GB of storage per month with no 12-month expiration.
Is Aurora I/O-Optimized worth it?
It depends on your I/O percentage. Check AWS Cost Explorer: if RDS:StorageIOUsage exceeds 25% of your total Aurora spend, I/O-Optimized will reduce your total cost by up to 40%. Below that threshold, Aurora Standard is cheaper because the lower storage rate ($0.10 vs $0.225/GB-month) outweighs the I/O savings.
When does Aurora Serverless v2 cost more than provisioned?
When your workload runs at high capacity more than 60-70% of the time. Serverless v2 has no Reserved Instance equivalent, so a provisioned instance with a 1-year RI (up to 45% savings) or 3-year RI (up to 66% savings) becomes cheaper for sustained high-utilization workloads. Serverless v2 wins for variable, spiky, or dev/test workloads where scaling to low capacity or zero eliminates waste.
How does Aurora DSQL pricing work?
DSQL bills on Distributed Processing Units (DPUs) at $8.00 per million DPUs in us-east-1/us-east-2, plus $0.33/GB-month for storage. DPUs combine compute and I/O into a single metric - there are no separate I/O charges. DSQL scales to zero when idle, and has an ongoing free tier of 100,000 DPUs and 1 GB storage per month.
Should I use Reserved Instances or Database Savings Plans for Aurora?
Use Reserved Instances (up to 45% at 1-year, up to 66% at 3-year) when your instance family, region, and engine are stable. Use Database Savings Plans (up to 35% at 1-year) when you expect changes to instance types, regions, or engines, or when you want a single discount covering multiple database services including DynamoDB, ElastiCache, and DocumentDB alongside Aurora.
What are the most common hidden costs in Amazon Aurora?
Three charges consistently surprise teams: I/O fees on Aurora Standard that equal or exceed compute costs for OLTP workloads, manual snapshots from deleted clusters that continue billing indefinitely, and RDS Extended Support charges ($0.10/vCPU-hour for Year 1-2, $0.20/vCPU-hour for Year 3) for Aurora PostgreSQL clusters running past community end-of-life dates.
