ElastiCache Pricing: Extended Support Adds 80% (2026 Guide)

ElastiCache pricing has 4 compounding cost drivers. Extended Support adds 80% for Redis 4/5 users since Feb 2026. Full guide + free calculator.

April 1st, 2026
23 min read

Most teams estimate their ElastiCache bill by looking at one node's hourly rate and multiplying by two. Then the invoice arrives and it is 3-4x what they expected.

I keep seeing the same pattern: teams are surprised not by the node cost itself, but by the four layers stacked on top of it. Amazon ElastiCache pricing has four compounding cost multipliers that nobody explains upfront: the 25% memory reservation that shrinks usable capacity, the replication node count that turns one logical cache into six billing nodes, cross-AZ data transfer charges that silently accumulate, and — since February 2026 — Extended Support charges that can add 80-160% to your Redis OSS node costs overnight.

This guide walks through every billing dimension with real numbers. You will get a break-even framework for choosing between Serverless and node-based deployments, the full before/after math on a Valkey migration, and a prioritized action list for cutting your bill today.

If you want to run the numbers for your specific configuration, the CloudBurn ElastiCache Pricing Calculator is the hands-on companion to this guide.


What You're Actually Paying For: 4 ElastiCache Pricing Multipliers

Most ElastiCache pricing guides list the hourly node rate and move on. The actual bill includes four compounding multipliers that stack on top of each other:

  1. The 25% memory reservation. ElastiCache reserves roughly 25% of each node's advertised memory for overhead. A 26 GiB node gives you about 19.7 GiB of usable keyspace, which means you need more nodes than the raw math suggests.
  2. The replication node count. A 3-shard cluster with 1 replica per shard runs 6 billing nodes, not 3. With 2 replicas per shard, that is 9 nodes. Your per-node cost multiplies by the total node count, not the shard count.
  3. Cross-AZ data transfer. In a multi-AZ deployment, 50-66% of your traffic crosses AZ boundaries at $0.01/GiB. At high throughput, this adds hundreds of dollars per month that never appear in pricing calculators.
  4. Extended Support charges (2026). Redis OSS v4 and v5 clusters were automatically enrolled in Extended Support on February 1, 2026, adding an 80% premium that rises to 160% in year 3. This is the most urgent multiplier because it is avoidable with a zero-downtime upgrade.

Each section below covers one or more of these multipliers with real dollar examples so you can calculate your actual cost — not the sticker price.


The 2026 Alert: Extended Support Charges Are Live

Before we walk through every pricing dimension, there is one 2026-specific charge worth checking first — especially if you are running Redis OSS v4 or v5.

Standard support for Redis OSS v4 and v5 ended January 31, 2026. Any cluster still running these versions automatically enrolled in Extended Support on February 1, 2026. If you have not checked your engine version recently, this section could save you thousands of dollars a year.

Which Redis Versions Trigger Extended Support

Redis OSS v4.x and v5.x were both enrolled in Extended Support on February 1, 2026. Redis OSS v6.2 and v7.0 are on a later timeline: check the ElastiCache Extended Support docs for their specific end-of-standard-support dates, because those dates are approaching.

To check your current engine versions, use the AWS Console and look at the Engine Version column under ElastiCache > Clusters, or run:

aws elasticache describe-cache-clusters \
  --query "CacheClusters[*].{ID:CacheClusterId,Engine:Engine,Version:EngineVersion}" \
  --output table

Valkey (all versions) and Redis OSS v6+ are currently in standard support and do not incur Extended Support charges.
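To run the same check across an entire account, here is a small boto3 sketch. The helper names are illustrative and it assumes configured AWS credentials; `describe_cache_clusters` is the same API the CLI command above calls.

```python
def needs_extended_support(engine: str, version: str) -> bool:
    """Redis OSS 4.x and 5.x enrolled in Extended Support on Feb 1, 2026."""
    return engine == "redis" and version.split(".")[0] in ("4", "5")

def extended_support_candidates():
    """List (cluster_id, version) for affected clusters. Needs boto3 + credentials."""
    import boto3  # imported here so the predicate above stays dependency-free
    client = boto3.client("elasticache")
    flagged = []
    for page in client.get_paginator("describe_cache_clusters").paginate():
        for cluster in page["CacheClusters"]:
            if needs_extended_support(cluster["Engine"], cluster["EngineVersion"]):
                flagged.append((cluster["CacheClusterId"], cluster["EngineVersion"]))
    return flagged
```

An empty list back from `extended_support_candidates()` means no cluster in the Region is paying the premium.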

The Real Cost Impact (With Dollar Examples)

The Extended Support premium is charged on top of your normal on-demand node rate. For a cache.m5.large in us-east-2 (on-demand rate: $0.1560/hr):

| Period | Node Rate | Premium | Total per Node-Hour |
| --- | --- | --- | --- |
| Standard support (pre-Feb 2026) | $0.1560/hr | n/a | $0.1560/hr |
| Extended Support Year 1-2 (80% premium) | $0.1560/hr | $0.1248/hr | $0.2808/hr |
| Extended Support Year 3 (160% premium) | $0.1560/hr | $0.2496/hr | $0.4056/hr |

For a minimal HA cluster running 1 primary and 2 replicas (3 nodes total):

  • Year 1-2 added cost: 3 nodes x $0.1248/hr premium x 8,760 hours = $3,279/year
  • Year 3 added cost: 3 nodes x $0.2496/hr premium x 8,760 hours = $6,558/year

That is money leaving your account right now if you have not upgraded. Check your AWS Cost Explorer for the ElastiCache Extended Support line item to confirm whether you are in this situation.
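The premium math is simple enough to sanity-check in a few lines (rates are the cache.m5.large us-east-2 figures above; 8,760 hours per year):

```python
def annual_premium(nodes: int, premium_per_hour: float, hours: int = 8760) -> float:
    """Extended Support cost added on top of the normal on-demand rate."""
    return nodes * premium_per_hour * hours

year_1_2 = annual_premium(3, 0.1248)  # 80% of $0.1560/hr
year_3 = annual_premium(3, 0.2496)    # 160% of $0.1560/hr
print(f"${year_1_2:,.0f}/yr now, ${year_3:,.0f}/yr from year 3")
```

Swap in your own node count and premium rate to see the exposure for each cluster.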

This is the same pattern AWS has applied to Amazon EKS Extended Support — a systemic shift in AWS pricing strategy for aging versions, not an ElastiCache-specific anomaly.

How to Stop Paying the Extended Support Premium

The fix is an in-place upgrade, and it is zero-downtime for clusters running Redis OSS 7.2.4 or below. You have two options:

  1. Upgrade to Valkey (recommended): Removes Extended Support charges immediately, drops your node cost by 20%, and positions you for Valkey 8.1 memory efficiency gains. Use the Service Update API to trigger an automatic upgrade to Valkey 8.
  2. Upgrade to Redis OSS v6+: Also removes Extended Support, though you miss the 20% Valkey pricing advantage.

If you hold Redis OSS Reserved Nodes, do not worry — they automatically apply to Valkey nodes in the same instance family and Region after the upgrade.


ElastiCache Pricing Models: Two Ways to Deploy

With the Extended Support question resolved, here is how ElastiCache pricing works across both deployment models.

ElastiCache offers two fundamentally different billing approaches: Serverless (pay per unit of work) and node-based (pay per node-hour). The choice is not just about cost — it also determines which features are available to you.

| Feature | Serverless | Node-Based |
| --- | --- | --- |
| Pricing model | Per GB-hour + per ECPU | Per node-hour |
| Capacity planning | None required | User-managed |
| Global Datastore | No | Yes |
| Data tiering | No | Yes (r6gd family) |
| Reserved Nodes | Not available | Available |
| Availability SLA | 99.99% | Varies by config |
| Minimum billing | 100 MB/hr (Valkey) | Per partial node-hour |

The key things to note here: Global Datastore, data tiering, and Reserved Nodes are all node-based exclusive. If you need cross-region replication or large-dataset SSD tiering, Serverless is off the table regardless of cost.

ElastiCache Serverless Pricing (ECPUs and GB-Hours Explained)

Serverless charges on two dimensions simultaneously: data stored (GB-hours) and compute consumed (ECPUs).

GB-hours are straightforward — ElastiCache samples your stored data multiple times per minute and bills you for the hourly average. At $0.0837 per GB-hour for Valkey in us-east-1, a 10 GB cache costs $0.837/hr in storage charges alone.

ECPUs (ElastiCache Processing Units) are the compute dimension. The definition is specific: 1 ECPU equals 1 KB of data transferred per request. A GET returning 3.2 KB consumes 3.2 ECPUs. Complex commands like SORT, ZADD, and ZRANK consume proportionally more ECPUs based on the vCPU time they require. ElastiCache charges whichever is higher per command — the data-transfer dimension or the vCPU dimension.

Two official examples from the AWS pricing page illustrate the range:

  • Example 1 (10 GB cache, 50K RPS): $0.837/hr storage + $0.409/hr ECPUs = $1.246/hr (~$895/month)
  • Example 4 (gaming leaderboard, 100 GB, 500K RPS with SortedSets): $8.37/hr storage + $12.30/hr ECPUs = $20.67/hr (~$14,882/month)
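The two billing dimensions can be sketched in a few lines. The $0.0837/GB-hour and $0.00227-per-million-ECPU figures are the us-east-1 Valkey rates this guide uses, and the sketch reproduces Example 1:

```python
def serverless_hourly(gb_stored: float, rps: float, avg_kb: float = 1.0) -> float:
    """Hourly Serverless cost: storage (GB-hours) + compute (ECPUs)."""
    storage = gb_stored * 0.0837                    # $/GB-hour, Valkey us-east-1
    ecpus_per_hour = rps * 3600 * avg_kb            # 1 ECPU per KB transferred
    compute = ecpus_per_hour * 0.00227 / 1_000_000  # $ per million ECPUs
    return storage + compute

# Example 1: 10 GB cache serving 50K RPS of 1 KB requests
print(round(serverless_hourly(10, 50_000), 3))  # 1.246
```

This simple model assumes every command bills on the data-transfer dimension; SORT/ZADD-heavy workloads will land higher because the vCPU dimension takes over.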

One important minimum billing rule: Valkey Serverless has a 100 MB/hr minimum versus 1 GB/hr for Redis OSS and Memcached. For dev caches that are nearly empty, Valkey Serverless starts at about $6/month while Redis OSS Serverless starts at around $60/month. That 10x difference matters when you are running a dozen staging environments.

Billing starts the moment the cache enters "Available" state and stops only when you delete it — there is no pause feature. Keep that in mind for dev environments.

Node-Based Cluster Pricing (On-Demand)

Node-based billing is simpler on the surface: you pay per node-hour, partial hours billed as full hours, from the time the node enters "Available" state until you terminate it.

The current-generation node families worth knowing:

  • T4g/T3 (burstable): Dev/test only. Performance throttles when CPU credits are exhausted. Do not use for production.
  • M7g/M6g (Graviton3/2, general purpose): Balanced memory and compute. Good for mixed workloads.
  • R7g/R6g (Graviton3/2, memory-optimized): Best for cache-heavy workloads. Default to these.
  • R6gd (data tiering): DRAM + NVMe SSD. Only for large datasets — covered in the data tiering section.
  • C7gn (network-optimized): Up to 200 Gbps bandwidth. For extremely high-throughput workloads.

Go Graviton (M7g, R7g) by default unless you have a specific reason not to. The price-performance ratio is better than Intel equivalents, and all current engines support them.

The 25% memory reservation. This is the single most common source of "my bill is higher than expected" complaints. ElastiCache reserves approximately 25% of each node's advertised memory for its own overhead — OS, replication buffers, and connection handling. A cache.r6g.xlarge with 26.32 GiB spec gives you roughly 19.7 GiB of usable keyspace.

Size your cluster using this formula:

nodes_needed = dataset_GB / (node_memory_GB x 0.75)

A 100 GB dataset on cache.r6g.xlarge nodes: 100 / (26.32 x 0.75) = 5.06, so you need 6 nodes minimum to fit the dataset without evictions.

The replication multiplier. This is the second compounding factor. A 3-shard cluster with 1 replica per shard runs 6 nodes simultaneously (3 primaries + 3 replicas). With 2 replicas per shard, that is 9 nodes. You are not paying 3x — you are paying 6x or 9x the single-node cost. That is how a $0.350/hr node becomes a $2.10/hr or $3.15/hr cluster billing rate.
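The two rules above (the 25% reservation and the replication multiplier) condense to two small helpers; node memory figures come from the AWS instance specs:

```python
import math

def nodes_needed(dataset_gb: float, node_memory_gb: float) -> int:
    """Minimum nodes to fit the dataset after the ~25% memory reservation."""
    usable = node_memory_gb * 0.75
    return math.ceil(dataset_gb / usable)

def billing_nodes(shards: int, replicas_per_shard: int) -> int:
    """You pay per node, not per shard."""
    return shards * (1 + replicas_per_shard)

print(nodes_needed(100, 26.32))                  # 6 (the r6g.xlarge example above)
print(billing_nodes(3, 1), billing_nodes(3, 2))  # 6 9
```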


Serverless vs. Node-Based: Which Is Cheaper for Your Workload?

Both deployment models can be the right choice — it depends entirely on your traffic pattern. Serverless wins when traffic is spiky or unpredictable. Node-based with Reserved Nodes wins when traffic is steady and predictable. The mistake is assuming one is always cheaper than the other without running the numbers.

The Break-Even Formula

Here is the math to find your crossover point:

Serverless hourly cost = (GB stored x $0.0837) + (requests per hour x avg KB per request x $0.00227 / 1,000,000)

Node-based reserved hourly cost = node count x reserved hourly rate

The crossover depends on two factors: how much data you store continuously and how steady your request rate is. When data is stored at high volume all day and requests are consistent, node-based plus Reserved Nodes almost always wins.

A practical rule of thumb: if your cache holds more than 50 GB continuously and your request rate stays above 100K RPS for 20 or more hours per day, a reserved r6g.xlarge or r7g.xlarge cluster will nearly always beat Serverless on cost. Below that threshold, Serverless deserves a serious look.
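That rule of thumb can be checked directly with the crossover formulas above. The $0.157/hr figure below is the 3-year All Upfront reserved r7g.xlarge rate quoted elsewhere in this guide; treat the comparison as a sketch and plug in your own numbers:

```python
def serverless_hourly(gb: float, req_per_hour: float, avg_kb: float = 1.0) -> float:
    return gb * 0.0837 + req_per_hour * avg_kb * 0.00227 / 1_000_000

def reserved_hourly(nodes: int, rate: float) -> float:
    return nodes * rate

# Steady 10 GB / 50K RPS workload running around the clock
sl = serverless_hourly(10, 50_000 * 3600)  # ~$1.25/hr
nb = reserved_hourly(3, 0.157)             # ~$0.47/hr for a 3-node reserved cluster
print("node-based wins" if nb < sl else "serverless wins")
```

At steady high volume the reserved cluster wins by a wide margin; drop the request rate or data size and the gap closes quickly.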

Three Workload Scenarios With Real Numbers

| Scenario | Serverless (Valkey) | Node-Based (On-Demand) | Winner |
| --- | --- | --- | --- |
| Dev/test (2 GB, 5K RPS) | ~$6/month (minimum) | cache.t4g.medium ~$35/month | Serverless |
| Moderate production (10 GB, 50K RPS, steady) | ~$895/month | 3-node r6g.large reserved (1yr No Upfront) ~$350/month | Node-Based |
| Spiky workload (10 GB baseline → 100 GB for 2 hrs/day, 1M RPS peak) | $2.899/hr | 12x r7g.xlarge on-demand $5.66/hr | Serverless (49% cheaper) |

The spiky workload scenario comes directly from AWS's official pricing page and illustrates the point well. When you provision for peak traffic, that capacity sits idle for 22 hours a day. Serverless eliminates that idle cost.


Valkey vs. Redis OSS: The Cost Comparison That Matters in 2026

Regardless of which deployment model you choose, the engine you run matters — and in 2026, Valkey has a clear pricing advantage over Redis OSS.

Valkey is a Linux Foundation-backed, BSD-licensed fork of Redis OSS 7.2, maintained by AWS, Google Cloud, Oracle, ByteDance, and 50+ other companies. It is fully API-compatible with Redis OSS. You get zero-downtime in-place upgrades for clusters on Redis OSS 7.2.4 or below, and the existing Reserved Nodes carry over automatically. Valkey 8.2 also adds vector search at no additional cost, if you are evaluating ElastiCache for similarity search workloads.

The Price Difference (Node-Based and Serverless)

| Engine | Node-Based Price | Serverless Price | Serverless Minimum |
| --- | --- | --- | --- |
| Valkey | Baseline | Baseline | 100 MB/hr (~$6/month) |
| Redis OSS | +20% vs Valkey | +33% vs Valkey | 1 GB/hr (~$60/month) |
| Memcached | +20% vs Valkey | +33% vs Valkey | 1 GB/hr (~$60/month) |

For a concrete node example: a cache.r7g.xlarge runs at $0.350/hr for Valkey versus approximately $0.437/hr for Redis OSS. That 20% difference compounds across every node in your cluster, every hour of the year.

Memcached carries the same pricing penalty as Redis OSS and adds more limitations: no encryption, no data tiering, no Global Datastore, no compliance certifications (PCI DSS, HIPAA). There is rarely a good reason to choose Memcached for new deployments.

Valkey Migration ROI: A Real Before/After Calculation

Let me walk through the full math for a realistic production cluster: 3 shards x 1 replica per shard = 6 nodes of cache.r7g.xlarge.

Baseline (Redis OSS, on-demand): 6 nodes x $0.437/hr x 720 hours = $1,888/month ($22,656/year)

After Valkey migration (same cluster, on-demand): 6 nodes x $0.350/hr x 720 hours = $1,512/month ($18,144/year)
Savings: $376/month, $4,512/year

With Reserved Nodes (All Upfront, 3-year, ~55% off on-demand): 6 nodes x $0.157/hr x 720 hours = $679/month ($8,148/year)
Total savings vs. Redis OSS on-demand: approximately $14,508/year
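The before/after math condenses to one helper (720-hour billing month, rates as above):

```python
def monthly_cost(nodes: int, hourly_rate: float, hours: int = 720) -> float:
    return nodes * hourly_rate * hours

redis_od  = monthly_cost(6, 0.437)  # Redis OSS on-demand
valkey_od = monthly_cost(6, 0.350)  # Valkey on-demand
valkey_ri = monthly_cost(6, 0.157)  # Valkey 3-yr All Upfront reserved

print(round(redis_od), round(valkey_od), round(valkey_ri))
# 1888 1512 678 -- within a dollar of the figures above
```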

Valkey 8.1 memory efficiency bonus: Valkey 8.1 reduces memory overhead by up to 20% through a new hash table implementation. That 20% memory reduction can allow a 50% downsize of node type (e.g., from r6g.xlarge to r6g.large). The combined savings formula:

total_savings = 1 - (price_factor x node_size_factor) = 1 - (0.8 x 0.5) = 60%

A 60% reduction from the Redis OSS on-demand baseline is what AWS highlighted in their Alight Solutions case study — and the math checks out.

The Reserved Node Coverage Bonus When You Upgrade

Here is something no competitor mentions: if you already hold Redis OSS Reserved Nodes, upgrading to Valkey gives you more coverage for the same cost.

Because Valkey's normalized units are 20% lower than Redis OSS (6.4 units for a Valkey xlarge vs. 8 units for a Redis OSS xlarge), an existing Redis OSS RI covers more Valkey nodes.

A Redis OSS cache.r7g.4xlarge RI has 32 normalized units. After upgrading to Valkey, your running node only uses 25.6 units — leaving 6.4 units that can cover a full cache.r7g.xlarge Valkey node (also 6.4 units) at no additional cost.

From the official docs: if you purchase 5 cache.r7g.2xlarge Redis OSS Reserved Nodes and upgrade to Valkey, you can run a sixth cache.r7g.2xlarge Valkey node without buying another RI. That is 20% more capacity from the same reservation spend.
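The normalized-unit arithmetic is easy to verify with the values quoted above; working in tenths of a unit keeps the math exact:

```python
# Normalized units in tenths to avoid float rounding: a Redis OSS xlarge is
# 8.0 units (80 tenths), a Valkey xlarge 6.4 units (64 tenths); size scales
# linearly within a family.
UNITS_TENTHS = {"redis": 80, "valkey": 64}
SIZE_FACTOR = {"xlarge": 1, "2xlarge": 2, "4xlarge": 4}

def units_tenths(engine: str, size: str) -> int:
    return UNITS_TENTHS[engine] * SIZE_FACTOR[size]

ri = units_tenths("redis", "4xlarge")     # 320 tenths reserved (32 units)
used = units_tenths("valkey", "4xlarge")  # 256 tenths after upgrade (25.6 units)
spare = ri - used                         # 64 tenths = 6.4 units left over
print(spare >= units_tenths("valkey", "xlarge"))  # True: covers a free xlarge
```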


Discounts and Commitments: Reserved Nodes and Savings Plans

Once you understand base costs, the next question is how to lock in a lower rate for workloads you know will run long-term. Two discount models are available, and the right choice depends on your situation.

Reserved Nodes offer up to 55% off and are ElastiCache-specific. Database Savings Plans (launched December 2025) offer up to 35% off with more flexibility across services. If you run Serverless, Savings Plans are your only option — Reserved Nodes are not available for Serverless.

Reserved Nodes (Up to 55% Off)

Reserved Nodes require a 1-year or 3-year term commitment to a specific Region and node family. In return, you get:

| Payment Option | Discount vs. On-Demand | Cash Flow |
| --- | --- | --- |
| No Upfront | Up to 48.2% | Pay discounted hourly rate throughout term |
| Partial Upfront | Up to 52% | Partial payment now + lower hourly rate |
| All Upfront | Up to 55% | Full payment now, no hourly charges |

The All Upfront 3-year option gives the maximum discount but requires certainty about node type and Region for 3 years. The No Upfront 1-year option gives flexibility at a smaller discount — useful when you are not yet sure which node family you need long-term.

Reserved Nodes are size-flexible within an instance family. A cache.r6g.xlarge RI (8 normalized units for Redis OSS; 6.4 for Valkey) applies to any combination of r6g nodes adding up to the same unit count. You do not have to match the exact size you reserved.

Strategy: run on-demand for 30-90 days to establish your actual usage baseline, then commit. Do not commit before you understand your steady-state node type — the non-refundable upfront fee is gone if you need to change course. Maximum 300 Reserved Nodes per account; Region, node class, and term cannot be changed after purchase.

Database Savings Plans (The December 2025 Alternative)

Database Savings Plans launched in December 2025 and represent a genuinely different model. Instead of committing to specific instance types, you commit to a dollar-per-hour spend rate. AWS automatically applies the discount across whatever eligible database services you actually use.

The coverage list is broad: Aurora, RDS, DynamoDB, ElastiCache, DocumentDB, Neptune, Keyspaces, Timestream, and DMS — one commitment covers all of them, regardless of engine, instance family, size, or Region.

For ElastiCache specifically, the Savings Plan rates are roughly:

  • Node-based instances: ~20% off on-demand
  • Serverless: ~30% off on-demand

The maximum discount is lower than Reserved Nodes (35% vs. 55%), but the flexibility is significantly higher. This makes Savings Plans the right choice when you run multiple database services or when you need to be able to shift capacity across Regions without forfeiting your commitment.

Importantly, Savings Plans are the only discount option for Serverless workloads. Check the AWS Database Savings Plans pricing page for current rates by instance type and Region. Not available in China Regions.

Reserved Nodes vs. Savings Plans: Decision Table

| Scenario | Recommendation |
| --- | --- |
| Node-based cluster, stable long-term (3+ years), single Region | Reserved Nodes — All Upfront 3-year for 55% off |
| Node-based cluster, predictable but only committing 1 year | Reserved Nodes No Upfront (48%) or Savings Plans (20%) — compare effective rates |
| Serverless workload with predictable usage | Database Savings Plans — only option for Serverless |
| Running ElastiCache + Aurora + RDS | Database Savings Plans — one commitment covers all three |
| Variable or unpredictable workload | On-demand or Serverless — no commitment |

For cross-service context: if you are also evaluating Amazon RDS pricing or Amazon Aurora pricing, the Database Savings Plans decision applies across all three services simultaneously. A single commitment can cover all your managed database spend.


Data Tiering: 62% Savings for Large Datasets

If your cache holds a terabyte or more of data, data tiering is worth understanding before you commit to a node type.

Data tiering uses NVMe SSD storage on r6gd nodes alongside DRAM. When your DRAM fills up, ElastiCache automatically moves least-recently-used values to SSD. Keys always stay in DRAM — only values migrate. When a value in SSD is accessed, it moves back to DRAM.

R6gd nodes provide 4.8x more total capacity (DRAM + SSD) compared to R6g memory-only nodes of the same size. That capacity difference translates directly to cost savings at scale.

From the AWS pricing page: storing 1 TiB of data using data tiering on a single cache.r6gd.16xlarge costs $9.9816/hr. Storing the same dataset on cache.r6g.16xlarge nodes requires 4 nodes at $26.27/hr combined — a 62% cost difference.

| Approach | Nodes Required | Hourly Cost | Monthly Cost |
| --- | --- | --- | --- |
| Data tiering (r6gd.16xlarge) | 1 | $9.98/hr | ~$7,186/month |
| All-memory (r6g.16xlarge) | 4 | $26.27/hr | ~$18,914/month |
| Savings | | $16.29/hr | ~$11,728/month (62%) |

But data tiering is not a universal win. The trade-offs are real:

  • SSD-resident values incur approximately 300 microseconds of additional latency per access (assuming 500-byte values). For latency-sensitive workloads, this matters.
  • Your application access pattern must concentrate on 20% or less of the total dataset. If you access data randomly across the entire keyspace, SSD items will constantly migrate back to DRAM and the latency overhead affects most requests.
  • Items larger than 128 MiB are not moved to SSD. Valkey 8.1 adds another constraint: items where key + value together are smaller than 40 bytes also stay in DRAM.
  • Requires r6gd node family, Valkey 7.2+ or Redis OSS 6.2+, and a replication group (no standalone clusters).

Reserved Node pricing is available for r6gd nodes, so you can combine data tiering with a 55% reserved discount for the lowest possible cost on large datasets.


Additional Cost Dimensions

Beyond node hours and ECPU/storage charges, four more line items appear on the ElastiCache bill. Most are predictable and controllable with the right architecture decisions.

Backup Storage ($0.085/GiB-Month)

Backup storage is $0.085 per GiB per month, uniform across all AWS Regions. There is no data transfer fee when creating or restoring a backup.

Backups apply to Valkey and Redis OSS only — Memcached does not support them. The cost adds up when retention periods exceed what your business actually needs. A 35-day retention on a 100 GB cache costs $8.50/month more than a 7-day retention over the same period. Set retention to match your recovery point objective, not the maximum allowed.

Data Transfer (The Cross-AZ Charge Most Teams Miss)

This is where the fourth compounding cost multiplier lives. The rule is simple: same-AZ access is free, cross-AZ access is $0.01/GiB — charged on the EC2 side, not the ElastiCache side.

In a typical multi-AZ cluster, 50-66% of your read and write traffic crosses AZ boundaries (depending on node placement and replication configuration). At high throughput, this compounds fast: 293 GB/hr with 50% cross-AZ traffic costs $1.46/hr in cross-AZ data transfer charges alone — entirely separate from node costs.
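The arithmetic behind that example is a one-liner; the $0.01/GiB rate is billed on the EC2 side, so it never shows up under the ElastiCache service in Cost Explorer:

```python
def cross_az_hourly(gb_per_hour: float, cross_az_fraction: float) -> float:
    """Cross-AZ data transfer cost at $0.01/GiB."""
    return gb_per_hour * cross_az_fraction * 0.01

hourly = cross_az_hourly(293, 0.5)  # about $1.46/hr, matching the example
print(round(hourly * 720))          # over $1,000/month at a 720-hour month
```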

For ElastiCache Serverless, accessing from the same AZ as your VPC endpoint is free. Cross-AZ access from a different AZ incurs the standard $0.01/GiB rate.

The mitigation is straightforward: deploy EC2 application instances in the same AZ as the ElastiCache primary node, and create read replicas in the same AZ as your EC2 readers. You do not eliminate cross-AZ data transfer entirely (replication between primaries and replicas still happens), but you eliminate the EC2-side charge for your application traffic.

Global Datastore (Cross-Region Replication Costs)

Global Datastore adds no service fee on top of regular node charges. What you pay is: node charges in every participating Region (primary and secondary clusters combined), plus inter-Region data transfer for replication traffic.

Cross-region replication traffic is charged at standard AWS inter-region data transfer rates — approximately $0.02/GiB for traffic out of US Regions.

A write-heavy session store example from the AWS pricing page: US East primary + US West secondary, 18 nodes of cache.m7g.xlarge on 3-year All Upfront reserved at $0.114/hr each:

  • Node charges: 18 nodes x $0.114/hr = $2.052/hr
  • Cross-AZ + cross-region data transfer: $1.43/hr
  • Total: $3.48/hr (~$2,506/month)

Global Datastore is available for node-based clusters only, not Serverless. It supports Valkey and Redis OSS, not Memcached.

Regional Pricing Variance

Node pricing varies by Region — APAC and EU Regions typically run 10-30% higher than us-east-1 for the same node type. Backup storage is flat at $0.085/GiB-month across all Regions with no variation.

AWS Asia Pacific (Thailand) and Mexico (Central) support only M7g, R7g, T3, and T4g node types — plan accordingly if those Regions are in scope.

GovCloud and China Regions do not include the AWS Free Tier for ElastiCache, whether that is the pre-July 2025 cache.t3.micro allocation or the post-July 2025 credits model.

Use the AWS Pricing Calculator for region-specific on-demand rates if you need exact figures for a non-standard Region.


How to Cut Your ElastiCache Bill

With a clear picture of every cost dimension, here is the prioritized action list. Not all of these apply to every workload — start at the top and work down.

Priority 1 — Fix Extended Support immediately. If any cluster is running Redis OSS v4 or v5, upgrade to Valkey now. Zero-downtime upgrade, stops an 80-160% premium on all affected node hours. Existing RIs carry over automatically.

Priority 2 — Migrate to Valkey. Even if you are not in Extended Support, the 20% lower node cost and 33% lower Serverless cost are straightforward savings. Add Valkey 8.1's memory efficiency for potential combined savings of up to 60% versus Redis OSS on-demand.

Priority 3 — Commit to Reserved Nodes or Savings Plans. Run on-demand for 90+ days to establish your baseline, then commit. All Upfront 3-year Reserved Nodes give 55% off for stable node-based workloads. Database Savings Plans give up to 30% off for Serverless, or if you need cross-service flexibility with RDS or Aurora — check the DynamoDB pricing guide to understand how the same commitment covers that service too.

Priority 4 — Right-size using CloudWatch metrics. Monitor these four metrics for sizing signals:

  • FreeableMemory: approaching 0 means scale up
  • Evictions: non-zero means insufficient memory (unless your eviction policy explicitly allows it)
  • CPUUtilization and EngineUtilization: consistently above 80% means scale up
  • SwapUsage: should be near zero — elevated swap indicates memory pressure

Scale down when metrics show sustained underutilization. Do not size for peak traffic — size for sustained average and let Serverless handle overflow if possible.
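As a sketch, the four signals can be wired into a simple verdict function. The byte thresholds below are placeholder values for illustration, not AWS guidance; the CloudWatch fetch assumes boto3 with configured credentials and uses the real GetMetricStatistics API:

```python
def sizing_signal(freeable_memory_bytes: float, evictions: float,
                  cpu_percent: float, swap_bytes: float) -> str:
    """Map the four CloudWatch metrics to a right-sizing verdict.

    cpu_percent should be CPUUtilization or EngineUtilization, whichever
    is higher. The 100 MB / 50 MB cutoffs are placeholder thresholds.
    """
    if evictions > 0 or freeable_memory_bytes < 100 * 1024**2:
        return "scale-up: memory pressure"
    if cpu_percent > 80:
        return "scale-up: CPU"
    if swap_bytes > 50 * 1024**2:
        return "investigate: swap usage"
    return "ok"

def fetch_metric(cluster_id: str, metric: str, stat: str = "Average"):
    """Pull the last hour of a metric for one cluster (needs boto3)."""
    import boto3
    from datetime import datetime, timedelta, timezone
    cw = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/ElastiCache",
        MetricName=metric,
        Dimensions=[{"Name": "CacheClusterId", "Value": cluster_id}],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        Period=300,
        Statistics=[stat],
    )
    return [point[stat] for point in resp["Datapoints"]]
```

Feed `fetch_metric` results into `sizing_signal` per cluster and scale only on sustained readings, not single datapoints.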

Priority 5 — Reduce cross-AZ data transfer. Co-locate EC2 instances with ElastiCache nodes in the same AZ. Place read replicas in the same AZ as EC2 readers. For Serverless, configure VPC endpoint AZs to match your application instances.

Priority 6 — Evaluate data tiering for large datasets. For caches holding 1 TB or more where your access pattern concentrates on 20% of data, r6gd nodes with data tiering can save 62% versus all-memory nodes. The 300 microsecond SSD latency overhead must be acceptable for your workload.

Priority 7 — Use Serverless for spiky or dev workloads. Eliminates idle capacity cost. Valkey Serverless starts at $6/month minimum — much cheaper than running an on-demand node for occasional dev use.

Use cost allocation tags and the AWS CUDOS dashboard to identify oversized or idle clusters across your account. The CUDOS framework includes ElastiCache-specific views for tracking Valkey adoption and identifying upgrade candidates. For a broader framework, the AWS Well-Architected Cost Optimization Pillar covers right-sizing and commitment strategies that apply directly to ElastiCache.

Right-Size Before You Commit

Committing to Reserved Nodes before you understand your steady-state node type is the most common Reserved Node mistake. You pay the non-refundable upfront fee and cannot change the node family or Region afterward.

The minimum safe observation period is 30 days; 90 days is better for workloads with weekly or monthly traffic cycles. Watch the CloudWatch metrics listed above. For data-tiered clusters, check CurrItems with the Tier=Memory dimension — if in-memory items drop below 5% of total items, the SSD tier is saturating and you need to scale.

Do not scale nodes for peak capacity. Size for your sustained average and use auto-scaling or Serverless for peaks.

Use the CloudBurn ElastiCache Pricing Calculator

Running these calculations manually for every workload variation takes time. The CloudBurn ElastiCache Pricing Calculator lets you select engine, deployment model, node type, region, and commitment term to get a monthly cost estimate — and compare Serverless versus node-based at your actual data size and request rate. Free to use, no account required.

CloudBurn

Shift-Left Your AWS Cost Optimization

CloudBurn runs deterministic cost rules against your IaC in CI and your live AWS account in production. Catch expensive ElastiCache patterns before they ship. Open source, install with brew or npm.


Key Takeaways

  • Four compounding cost multipliers explain most ElastiCache bill shock: 25% memory reservation, replication node count, cross-AZ data transfer, and Extended Support charges.
  • Extended Support is live since February 2026 for Redis OSS v4 and v5 — an 80% premium in years 1-2, 160% in year 3. Check your engine versions now.
  • Valkey is 20% cheaper on node-based clusters and 33% cheaper on Serverless. In-place zero-downtime upgrade. Existing Redis OSS Reserved Nodes carry over.
  • Serverless wins for spiky workloads; node-based with Reserved Nodes wins for steady, high-volume workloads. Run the numbers — do not assume.
  • Database Savings Plans (December 2025) are the only discount option for Serverless, and offer cross-service flexibility if you run other managed databases.
  • Data tiering delivers 62% savings at 1 TiB scale if your access pattern concentrates on 20% of the dataset and 300 microsecond SSD latency is acceptable.

The highest-impact moves in order: fix Extended Support, migrate to Valkey, commit to Reserved Nodes or Savings Plans once you have 90 days of baseline data.

What has your experience been with ElastiCache costs? If you are seeing bill patterns not covered here — or if the Valkey migration math looks different in practice for your workload — drop a comment below.

If you are evaluating Database Savings Plans across multiple services, check out the Amazon RDS pricing and Amazon Aurora pricing guides next — the same commitment covers all three.

Frequently Asked Questions

Is ElastiCache free on AWS?
AWS accounts created before July 15, 2025 received 750 hours of cache.t3.micro per month for 12 months. Accounts created after July 15, 2025 receive $100 in credits (up to $200 when activating foundational AWS services) that apply to all ElastiCache features including Serverless. GovCloud and China Regions are excluded from the Free Tier entirely. Once credits are used or expired, standard rates apply.
What is an ECPU in ElastiCache Serverless?
ECPU stands for ElastiCache Processing Unit — the compute billing unit for Serverless. Each 1 KB of data transferred in a request (GET, SET) consumes 1 ECPU. Complex commands like SORT, ZADD, and ZRANK consume proportionally more ECPUs based on the vCPU time they require. ElastiCache charges whichever is higher per command: the data-transfer dimension or the vCPU dimension.
Can I pause an ElastiCache Serverless cache to stop billing?
No. ElastiCache has no pause feature. Billing runs from the time the cache enters Available state until it is deleted. For Valkey Serverless, the minimum charge is 100 MB per hour (about $6/month) even when completely idle. If you use Terraform, set cache_usage_limits to cap maximum storage and ECPUs per second and prevent unexpected cost spikes.
Are Database Savings Plans better than Reserved Nodes for ElastiCache?
It depends on your situation. Reserved Nodes offer up to 55% off but require committing to a specific instance family and Region. Database Savings Plans cap out at 35% for instances and 30% for Serverless, but apply automatically across Aurora, RDS, DynamoDB, and other managed databases without any instance-type lock-in. If you run Serverless, Savings Plans are your only discount option — Reserved Nodes are not available for Serverless.
How much does a production ElastiCache cluster cost per month?
A minimal HA cluster (1 primary + 1 replica, cache.r6g.large Valkey, on-demand) runs about $175/month. A standard production cluster (3 shards x 1 replica per shard = 6 nodes, cache.r7g.xlarge Valkey, on-demand) runs about $1,512/month. The same cluster with All Upfront 3-year Reserved Nodes comes down to approximately $680/month. Add cross-AZ data transfer, backup storage, and Global Datastore charges on top of node costs.
What is the cheapest way to run a highly available ElastiCache cluster?
Start with Valkey engine for the 20% lower baseline price. Use Graviton nodes (M7g or R7g series) for the best price-performance ratio. Commit with All Upfront 3-year Reserved Nodes for up to 55% off on-demand. For datasets over 1 TB, use r6gd nodes with data tiering for an additional 62% savings versus all-memory nodes. Keep EC2 and ElastiCache in the same AZ to eliminate cross-AZ transfer charges.
What is ElastiCache Extended Support and how much does it cost?
Extended Support is a paid program that keeps ElastiCache running Redis OSS versions past their end-of-standard-support date. For Redis OSS v4 and v5, standard support ended January 31, 2026. From February 1, 2026, these clusters pay an 80% premium on top of the normal on-demand rate in years 1 and 2, rising to 160% in year 3. The fix is an in-place, zero-downtime upgrade to Valkey or Redis OSS v6+.
Is Valkey cheaper than Redis OSS on ElastiCache?
Yes, by a significant margin. Valkey is 20% cheaper for node-based clusters and 33% cheaper for Serverless deployments. Valkey Serverless also has a 100 MB minimum versus 1 GB for Redis OSS, making it 10x cheaper for near-empty dev caches. Valkey 8.1 adds up to 20% memory efficiency, which can enable a 50% node downsize for a combined 60% savings versus Redis OSS on-demand.
