Amazon EKS Pricing: Complete 2026 Guide + Calculator

The EKS pricing guide that covers Auto Mode surcharges, Capabilities fees, and the extended support cost trap. Free calculator to estimate your cluster bill.

March 13th, 2026

Amazon EKS charges $73/month per cluster before a single pod runs. Most AWS services charge for what you use — EKS charges whether you're using it or not.

That's the first number you need to internalize. Teams running dev, staging, and production clusters are paying $219/month in control plane fees before any workloads exist. And that's the cheap version. If you let your Kubernetes version fall behind schedule, the same three clusters cost $1,314/month just in cluster fees — nothing else changed except the version number.

This post covers every billing dimension in EKS including the newer ones (Auto Mode, Capabilities) that most guides haven't caught up to yet. You'll get three real-world cost scenarios, a framework for deciding between EKS and ECS, and CDK examples that bake cost decisions into your infrastructure code. For a quick estimate of your specific setup, use the EKS pricing calculator to model it directly.

All pricing reflects AWS US East (N. Virginia) rates from the official EKS pricing page as of March 2026. Rates vary by Region.

What EKS Actually Costs: The Short Answer

Before getting into the mechanics, here are the ranges:

| Cluster Type | Monthly Estimate | Key Cost Drivers |
|---|---|---|
| Dev/startup (3x t3.medium, single-AZ) | ~$150-200/month | Control plane + small EC2 instances |
| Production HA (10x m5.xlarge, multi-AZ) | ~$1,200-1,600/month | Compute + NAT Gateways + ALB + CloudWatch |
| Enterprise (50 nodes + extended support) | ~$12,000-18,000+/month | Extended support + Provisioned Control Plane + large compute |

Four billing layers make up any EKS bill:

  1. Control plane: a fixed per-cluster fee.
  2. Compute: the dominant variable cost.
  3. Networking: often underestimated by teams new to EKS; cross-AZ traffic and NAT Gateway charges can add 20-40% on top of compute.
  4. Add-ons: optional, but they carry real price tags once enabled.

Let's work through each layer from the ground up.

EKS Control Plane Pricing (The $73/Month Floor)

Every EKS cluster pays a per-cluster-per-hour fee for the managed Kubernetes control plane. There is no free tier, and there is no minimum duration — even a cluster running for one hour costs $0.10.

What you get for that fee: AWS runs the Kubernetes API server, etcd cluster, and scheduler across multiple Availability Zones with automated patching and high availability. It's not nothing. Running this yourself on EC2 would cost more and require significant operational expertise to do properly.

The per-cluster economics change as your cluster count grows, though. Three clusters — dev, staging, production — cost $219/month in control plane fees before you've placed a single pod.

Standard vs Extended Support

The control plane fee depends on which Kubernetes version support tier your cluster is on:

| Support Tier | Price | Equivalent Monthly Cost |
|---|---|---|
| Standard Kubernetes version support | $0.10 per cluster per hour | $73/month |
| Extended Kubernetes version support | $0.60 per cluster per hour | $438/month |

Standard support is available for the first 14 months after a Kubernetes version is released in EKS. Extended support covers the next 12 months (26 months total from release).

AWS Outposts local clusters pay the same standard support rate but don't support extended support — if you're on Outposts, version upgrade timelines matter even more.

The Extended Support Cost Trap

This is the most expensive mistake I see teams make with EKS. The $0.10-to-$0.60 jump isn't a minor pricing adjustment — it's a 6x increase that's easy to miss until it shows up on your bill.

Warning: If your cluster falls behind on Kubernetes version upgrades and enters extended support, the control plane fee jumps from $0.10/hr to $0.60/hr — from $73/month to $438/month per cluster. Three clusters in extended support = $1,314/month in control plane fees alone.

The version lifecycle, per the Kubernetes version support documentation, is 14 months at the standard rate followed by 12 months at the extended rate. If a cluster runs the full 26-month lifecycle without upgrading, the average control plane cost is $0.33/hr — more than 3x the standard rate.

The fix is simple: treat Kubernetes version upgrades as routine quarterly operations, not emergencies. Fall behind by one version cycle and you're paying $365/month extra per cluster for nothing but running old software.
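To make the trap concrete, here's a minimal sketch (the function name and structure are mine, not an AWS API) that totals monthly control plane fees for a fleet, split by support tier:

```typescript
const HOURS_PER_MONTH = 730;
const STANDARD_RATE = 0.10; // $/cluster-hour, standard support
const EXTENDED_RATE = 0.60; // $/cluster-hour, extended support (6x)

// Monthly control plane fees for a fleet of clusters, rounded to cents.
function controlPlaneMonthly(standardClusters: number, extendedClusters: number): number {
  const hourly = standardClusters * STANDARD_RATE + extendedClusters * EXTENDED_RATE;
  return Math.round(hourly * HOURS_PER_MONTH * 100) / 100;
}

console.log(controlPlaneMonthly(3, 0)); // 219  (dev/staging/prod, all on current versions)
console.log(controlPlaneMonthly(0, 3)); // 1314 (same three clusters, one version cycle behind)
```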

Worker Node Compute Costs (Where the Real Money Goes)

The control plane fee is a fixed overhead. Compute is where EKS costs actually scale — and it's the dominant line item for almost every cluster.

EKS gives you three main compute models, and each has different pricing mechanics. You can mix them in a single cluster, which is often the optimal cost strategy.

EC2 Managed Node Groups

With managed node groups, you pay standard EC2 prices. There is no EKS-specific surcharge on top of EC2. On-Demand, Spot, Reserved Instances, and Savings Plans all apply exactly as they do for regular EC2 usage.

This matters because EC2 is the largest cost driver in most EKS clusters. The instance pricing you already know from EC2 pricing is exactly what you'll pay here. If you're running m5.xlarge On-Demand at $0.192/hr, that's your node cost. Ten of them running 730 hours a month = $1,401.60 in compute alone.

Use our EC2 pricing calculator to compare instance types and find the right size for your workload before committing to a configuration.

EKS Auto Mode Pricing (The ~12% Surcharge Explained)

EKS Auto Mode launched in November 2024. It manages node provisioning automatically using a Karpenter-based system — you define your workload requirements and AWS handles everything else: selecting instance types, scaling up and down, and replacing nodes every 21 days.

The cost structure is three layers: standard EC2 price + EKS cluster fee + Auto Mode management fee per managed instance.

Here's the part that surprises people: the Auto Mode management fee adds approximately 12% on top of your EC2 On-Demand instance cost. Here's the worked example from AWS's pricing page for a multi-pod application in us-west-2:

| EC2 Instance | EC2 Cost/hr | Auto Mode Fee/hr |
|---|---|---|
| c6a.2xlarge | $0.306 | $0.03672 |
| c6a.4xlarge | $0.612 | $0.07344 |
| m5a.2xlarge | $0.344 | $0.04128 |
| m5a.xlarge | $0.172 | $0.02064 |
| **Total/month** | **$1,046.82** | **$125.62** |

The 12% overhead is real money at scale — $1,172.44/month total vs $1,046.82 for the same instances managed yourself.

There's another nuance worth knowing: Savings Plans and Reserved Instances don't discount the Auto Mode management fee. You can reduce your EC2 instance costs with Compute Savings Plans, but the Auto Mode fee stays at the On-Demand rate regardless of your purchase commitment. That partially offsets the "buy Savings Plans" strategy if you're running Auto Mode.

Auto Mode nodes also have a 21-day maximum runtime and are automatically replaced — useful for preventing configuration drift but worth planning for if you're relying on node-local persistent storage.

If you have more than 150 nodes across your organization, contact the AWS account team for additional pricing.

Auto Mode vs self-managed Karpenter: Auto Mode adds the ~12% surcharge but requires zero operational maintenance. Self-managed Karpenter has no management fee but you own upgrades, configuration, and troubleshooting. At smaller scale the surcharge is easier to justify; at larger scale the Karpenter savings add up.
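One way to see the Savings Plans interaction is to price a single Auto Mode instance both ways. This is a rough sketch using the flat ~12% fee rate implied by the table above (the real fee is set per instance type), with illustrative function names:

```typescript
const HOURS_PER_MONTH = 730;
const AUTO_MODE_FEE_RATE = 0.12; // ~12% of the On-Demand rate, per the worked example above

// Monthly cost of one Auto Mode-managed instance. A Savings Plans discount
// reduces only the EC2 portion; the management fee is charged against the
// On-Demand rate regardless of commitment.
function autoModeMonthly(onDemandHourly: number, savingsPlanDiscount = 0): number {
  const ec2 = onDemandHourly * (1 - savingsPlanDiscount) * HOURS_PER_MONTH;
  const fee = onDemandHourly * AUTO_MODE_FEE_RATE * HOURS_PER_MONTH;
  return Math.round((ec2 + fee) * 100) / 100;
}

// c6a.2xlarge at $0.306/hr On-Demand:
console.log(autoModeMonthly(0.306));      // 250.19 (EC2 $223.38 + fee $26.81)
console.log(autoModeMonthly(0.306, 0.3)); // 183.17 (30% SP on EC2 only; fee unchanged)
```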

Fargate on EKS

Fargate charges per vCPU-second and GB-second, measured from when your container image starts downloading until pod termination. The 1-minute billing minimum means very short-lived pods are expensive per unit of work.

Linux x86 pricing in us-east-1:

| Resource | Per Second | Per Hour |
|---|---|---|
| vCPU | $0.000011244 | $0.04048 |
| Memory (per GB) | $0.000001235 | $0.00445 |

ARM/Graviton2 Fargate is approximately 20% cheaper than x86. If your containers support ARM (multi-arch images), this is a free 20% discount by switching the Fargate compute configuration.

Supported Fargate pod configurations:

| CPU | Memory Range |
|---|---|
| 0.25 vCPU | 0.5–2 GB |
| 0.5 vCPU | 1–4 GB |
| 1 vCPU | 2–8 GB |
| 2 vCPU | 4–16 GB |
| 4 vCPU | 8–30 GB |
| 8 vCPU | 16–60 GB |
| 16 vCPU | 32–120 GB |

20 GB of ephemeral storage is included per pod. Additional storage is billed separately.

One important clarification that trips up teams migrating from ECS: Fargate Spot is available for ECS but is NOT available for EKS pods. If you're planning on Fargate Spot to reduce costs on EKS workloads, that's not an option. Compute Savings Plans do apply to Fargate (up to 50% off), which helps for predictable Fargate workloads.

For a pricing comparison showing when Fargate on EKS makes sense versus EC2 node groups, see the ECS vs Fargate launch type guide — the same trade-off analysis applies to EKS.
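The per-second mechanics above can be sketched as a small pricing helper. This assumes the us-east-1 x86 rates from the table and treats the ARM discount as a flat 20%, both simplifications; the function name is mine:

```typescript
// us-east-1 Linux/x86 Fargate rates per second (from the table above).
const VCPU_PER_SECOND = 0.000011244;
const GB_PER_SECOND = 0.000001235;

// Cost of a single Fargate pod run. Billing runs from image download to
// termination with a 1-minute minimum, so very short pods pay for 60s anyway.
function fargatePodCost(vcpu: number, memoryGb: number, seconds: number, arm = false): number {
  const billedSeconds = Math.max(seconds, 60); // 1-minute billing minimum
  const archMultiplier = arm ? 0.8 : 1.0;      // Graviton: roughly 20% cheaper
  return (vcpu * VCPU_PER_SECOND + memoryGb * GB_PER_SECOND) * billedSeconds * archMultiplier;
}

// A 0.25 vCPU / 0.5 GB pod that lives for 10 seconds still bills 60 seconds:
console.log(fargatePodCost(0.25, 0.5, 10) === fargatePodCost(0.25, 0.5, 60)); // true
// The same pod running the full month (~2,628,000 seconds) costs about $9:
console.log(fargatePodCost(0.25, 0.5, 730 * 3600));
```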

EKS Provisioned Control Plane (For High-Scale Clusters)

For clusters with high API server request volumes or very large node counts, AWS offers pre-provisioned control plane capacity. This is an additional fee on top of the standard $0.10/hr cluster fee, not a replacement.

| Tier | Additional Fee per Hour | Additional Monthly Cost |
|---|---|---|
| XL | +$1.65/hr | +$1,205/month |
| 2XL | +$3.40/hr | +$2,482/month |
| 4XL | +$6.90/hr | +$5,037/month |

At the XL tier running all month: $0.10 standard + $1.65 XL = $1.75/hr = $1,277/month for the control plane alone, before a single node runs. AWS's example shows mixing: 15 days at standard ($36) + 15 days at XL ($594) = $630 total.

This is relevant for enterprise clusters with 1,000+ nodes or workloads generating high Kubernetes API server traffic (frequent deployments, heavy autoscaling, large Helm releases). Most clusters will never need it. For needs beyond 4XL, contact your AWS account team.

EKS Capabilities Pricing: Argo CD, ACK, and KRO

This is the section no competitor has covered. EKS Capabilities are managed integrations for GitOps tooling and Kubernetes-native AWS service management — all launched in 2025. If you've enabled any of them, you're paying a billing dimension that most EKS pricing guides don't even mention.

Each Capability has two billing components: a base hourly rate per enabled Capability, plus a per-resource-hour usage charge.

Argo CD Capability

| Component | Rate |
|---|---|
| Base rate | $0.03 per Capability-hour |
| Usage | $0.0015 per Argo CD Application-hour |

100 Argo CD Applications running for a month (730 hours):

  • Base: $0.03 × 730 = $21.90/month
  • Usage: 100 applications × $0.0015 × 730 = $109.50/month
  • Argo CD total: $131.40/month

ACK (AWS Controllers for Kubernetes) Capability

| Component | Rate |
|---|---|
| Base rate | $0.005 per Capability-hour |
| Usage | $0.00005 per ACK resource-hour |

1,000 ACK-managed resources for a month:

  • Base: $0.005 × 730 = $3.65/month
  • Usage: 1,000 × $0.00005 × 730 = $36.50/month
  • ACK total: $40.15/month

KRO (Kubernetes Resource Orchestrator) Capability

Same pricing structure as ACK:

  • Base: $3.65/month
  • Usage (1,000 RGD instances): $36.50/month
  • KRO total: $40.15/month

Running all three Capabilities with 100 Argo CD Applications and 1,000 resources each costs $211.70/month. That's on top of your control plane fee, compute, and networking costs. Worth knowing before you enable them for a large cluster.

The alternative is self-managing open-source Argo CD on your cluster nodes. No direct software cost, but you own upgrades, high availability configuration, and incident response. The managed Capability makes sense if your team's time is more valuable than $131/month.
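Since all three Capabilities share the same base-plus-usage shape, the math collapses to one helper. A sketch with illustrative names, reproducing the figures above:

```typescript
const HOURS_PER_MONTH = 730;

// Every Capability bills a flat base rate plus a per-resource-hour charge.
function capabilityMonthly(baseHourly: number, perResourceHourly: number, resources: number): number {
  return Math.round((baseHourly + perResourceHourly * resources) * HOURS_PER_MONTH * 100) / 100;
}

const argoCd = capabilityMonthly(0.03, 0.0015, 100);  // 100 Argo CD Applications
const ack = capabilityMonthly(0.005, 0.00005, 1000);  // 1,000 ACK resources
const kro = capabilityMonthly(0.005, 0.00005, 1000);  // 1,000 KRO RGD instances

console.log(argoCd, ack, kro); // 131.4 40.15 40.15
console.log(Math.round((argoCd + ack + kro) * 100) / 100); // 211.7
```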

EKS Hybrid Nodes Pricing (On-Premises and Edge)

If you're connecting on-premises servers to EKS clusters via Hybrid Nodes, billing is based on vCPU-hours of the connected nodes. The pricing is tiered, which rewards scale:

| Monthly vCPU-Hours | Price per vCPU-Hour |
|---|---|
| First 576,000 | $0.020 |
| Next 576,000 | $0.014 |
| Next 4,608,000 | $0.010 |
| Next 5,760,000 | $0.008 |
| Over 11,520,000 | $0.006 |

Billing starts when nodes join the cluster and stops when they're removed. For bare metal with hyperthreading, each physical CPU core reports two vCPUs — billing uses the total reported count.

AWS Organizations consolidated billing lets multiple business units pool their vCPU-hours across accounts to reach lower pricing tiers faster.
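The tier table above is a classic marginal-rate calculation: each tier's allotment fills at its rate before the next, cheaper one applies. A sketch (structure and names are mine) of how a monthly vCPU-hour total rolls through the tiers:

```typescript
// Hybrid Nodes tiers: [tier size in vCPU-hours, $ per vCPU-hour]
const TIERS: Array<[number, number]> = [
  [576_000, 0.020],
  [576_000, 0.014],
  [4_608_000, 0.010],
  [5_760_000, 0.008],
  [Infinity, 0.006], // everything over 11,520,000
];

// Marginal pricing: fill each tier at its rate before moving to the next.
function hybridNodesMonthly(vcpuHours: number): number {
  let remaining = vcpuHours;
  let cost = 0;
  for (const [size, rate] of TIERS) {
    const inTier = Math.min(remaining, size);
    cost += inTier * rate;
    remaining -= inTier;
    if (remaining <= 0) break;
  }
  return Math.round(cost * 100) / 100;
}

// 100 nodes x 8 vCPU x 730 hours = 584,000 vCPU-hours:
// 576,000 bill at $0.020 ($11,520), the remaining 8,000 at $0.014 ($112).
console.log(hybridNodesMonthly(584_000)); // 11632
```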

Three-business-unit example (all standard support):

| Business Unit | Clusters | Nodes | vCPU/Node | Monthly Cluster Cost | Monthly Node Cost | Total |
|---|---|---|---|---|---|---|
| BU 1 | 1 | 10 | 8 | $73.00 | $1,168.00 | $1,241.00 |
| BU 2 | 1 | 5 | 4 | $73.00 | $292.00 | $365.00 |
| BU 3 | 1 | 3 | 16 | $73.00 | $700.80 | $773.80 |
| **Total** | | | | **$219.00** | **$2,160.80** | **$2,379.80** |

For nodes larger than 32 vCPU per machine, contact the AWS account team.

Hidden Costs That Inflate Your EKS Bill

The control plane fee and compute are the obvious charges. These are the ones that show up on your bill and make you say "wait, where did that come from?"

Cross-AZ Data Transfer

Pod-to-pod traffic crossing Availability Zones is billed at EC2 data transfer rates — $0.01/GB in each direction. This sounds small until you have microservices in different AZs talking to each other at scale.

A service processing 10 TB/month of cross-AZ traffic pays $100/month in data transfer alone. At 100 TB/month, that's $1,000/month — real money that doesn't show up anywhere in basic EKS pricing documentation. For high-throughput inter-service traffic, cross-AZ transfer can represent 20-30% of total cluster cost.

The mitigation: use topology-aware routing to keep pod-to-pod traffic within the same AZ when possible. Karpenter and the Kubernetes topologySpreadConstraints spec make this configurable.

NAT Gateway and IPv4 Address Fees

Pods in private subnets route internet traffic through NAT Gateways. AWS charges $0.045/hr per NAT Gateway and $0.045/GB of data processed. For a production setup with two NAT Gateways processing 500 GB/month of outbound traffic: $65.70 (hourly) + $22.50 (data) = $88.20/month just in NAT costs.

Each public IPv4 address costs $0.005/hr ($3.65/month at 730 hours). A cluster with 10 nodes on public IPs, one NAT Gateway EIP, and one load balancer EIP has 12 public IPs — about $43.80/month in IP address charges.

See Amazon VPC pricing for the complete breakdown of NAT Gateway and IPv4 costs, and NAT Gateway pricing specifics if you want to dig into reduction strategies.

Load Balancer Costs

The AWS Load Balancer Controller creates Application Load Balancers or Network Load Balancers for Kubernetes Service and Ingress resources. Each ALB costs $0.0225/hr ($16.43/month) plus Load Balancer Capacity Unit (LCU) charges of $0.008 per LCU-hour based on traffic.

Teams with many small services often create one ALB per service — 10 services = 10 ALBs = roughly $164/month in ALB fixed fees alone. Using a single ALB with path-based routing for multiple services is significantly cheaper.

Logging and Monitoring (CloudWatch)

CloudWatch Container Insights provides pod-level metrics and logs — useful, but not free. You pay for:

  • Custom metrics ingested (the Container Insights metrics)
  • Log data ingested via CloudWatch Logs
  • CloudWatch Logs Insights queries

For a medium-sized cluster with detailed logging enabled, CloudWatch costs can easily reach $100-300/month depending on log volume and query frequency. Consider shipping logs to S3 via Kinesis Data Firehose for long-term retention instead of keeping everything in CloudWatch Logs.

Real-World EKS Cost Scenarios (3 Cluster Configurations)

Abstract pricing tables are useful, but what does a real EKS bill look like? Here are three complete monthly breakdowns.

Scenario 1: Startup Dev Cluster

A single developer team running a dev environment. Single AZ (no cross-AZ traffic), no NAT Gateway (pods access internet via public subnets), minimal monitoring.

| Line Item | Configuration | Monthly Cost |
|---|---|---|
| EKS control plane | Standard support, 730 hrs | $73.00 |
| EC2 compute | 3x t3.medium On-Demand ($0.0416/hr each) | $91.10 |
| EBS storage | 3x 20 GB gp3 volumes | $7.20 |
| CloudWatch basic | Minimal logs | ~$5.00 |
| Data transfer | Minimal | ~$2.00 |
| **Total** | | **~$178/month** |

This is close to the theoretical minimum for a functional EKS cluster. No HA, no redundancy, not suitable for production. But it's a real dev cluster budget.

Scenario 2: Production HA Cluster

A production cluster with proper HA, multi-AZ distribution, load balancing, and monitoring.

| Line Item | Configuration | Monthly Cost |
|---|---|---|
| EKS control plane | Standard support, 730 hrs | $73.00 |
| EC2 compute | 8x m5.xlarge On-Demand + 2x Spot (~30% Spot mix) | ~$940.00 |
| NAT Gateways | 2x NAT Gateways + ~200 GB/month processed | ~$100.80 |
| Application Load Balancer | 1x ALB + moderate traffic | ~$30.00 |
| EBS storage | 10x 30 GB gp3 persistent volumes | $36.00 |
| Cross-AZ data transfer | ~200 GB/month cross-AZ | $20.00 |
| CloudWatch Container Insights | Medium cluster | ~$80.00 |
| Public IPv4 addresses | ~10 IPs | ~$36.50 |
| **Total** | | **~$1,316/month** |

This is a realistic production baseline. Spot instances for stateless workloads bring compute down from ~$1,400 pure On-Demand to ~$940. The control plane fee is less than 6% of total cost here — proof that the $73/month floor matters more at smaller scale.

Scenario 3: Enterprise Cluster with Extended Support

A large cluster with extended support (version not upgraded on time), Provisioned Control Plane, and significant workload footprint.

| Line Item | Configuration | Monthly Cost |
|---|---|---|
| EKS control plane | Extended support, 730 hrs | $438.00 |
| Provisioned Control Plane | XL tier, 730 hrs | $1,205.00 |
| EC2 compute | 50x mixed c5.2xlarge/m5.2xlarge (Savings Plans applied) | ~$7,200.00 |
| NAT Gateways | 3x NAT Gateways + ~1 TB/month | ~$180.00 |
| Load Balancers | 3x ALBs + high traffic | ~$200.00 |
| EBS storage | 50x 100 GB gp3 volumes | $600.00 |
| Cross-AZ data transfer | ~2 TB/month | $200.00 |
| CloudWatch + third-party logging | Large cluster | ~$500.00 |
| EKS Capabilities (Argo CD) | 100 applications | $131.40 |
| **Total** | | **~$10,654/month** |

The extended support trap alone costs $365/month extra vs standard support; upgrading Kubernetes versions on schedule cuts that line from $438 back to $73. The XL Provisioned Control Plane fee is independent of the Kubernetes version, so dropping it (if the cluster doesn't genuinely need pre-provisioned capacity) would save a further $1,205/month.

Use the EKS pricing calculator to model your own configuration before committing to a cluster design.

EKS vs ECS: Which Is Cheaper for Your Workload?

This is the question no competitor blog seems willing to answer directly. Here's the honest version.

The fundamental cost difference: ECS has no control plane fee. The $73/month EKS floor doesn't exist in ECS. For small teams running 1-5 nodes, this gap matters. For teams running 20+ nodes, the control plane fee is noise compared to compute costs.

Compute costs are identical: EC2 On-Demand, Spot, and Fargate pricing is the same whether you're running on ECS or EKS. The underlying EC2 instances are the same instances, billed at the same rates. ECS doesn't give you cheaper compute.

The cost crossover: EKS starts to become cost-neutral vs ECS at roughly 3-5 worker nodes — the point where the operational benefits of managed Kubernetes (less time managing infrastructure, better ecosystem tooling) justify the $73/month overhead.

The honest decision framework:

| Choose ECS when... | Choose EKS when... |
|---|---|
| Running fewer than 5 nodes | Team already knows Kubernetes |
| You don't need Helm, Karpenter, or Kubernetes-native tooling | You need Kubernetes-native tooling (Argo CD, Helm, custom operators) |
| Startup minimizing fixed overhead | Running 5+ nodes where the ecosystem ROI exceeds $73/month |
| Migrating from Kubernetes and want less complexity | You have multi-team clusters where namespace isolation is needed |
| Simple web services with standard load balancing | Workloads with complex scheduling requirements |

I've seen teams go the other direction too. The NetworkLessons case study is a real example of migrating from Kubernetes to ECS Fargate to reduce complexity and cost. When Kubernetes complexity isn't delivering proportional value, ECS is a legitimate downgrade that saves money.

Check out Amazon ECS pricing for the complete ECS cost breakdown if you're making this comparison.

EKS vs GKE vs AKS: Provider Cost Comparison

If you're evaluating managed Kubernetes across cloud providers, here's how the control plane pricing compares:

ProviderControl Plane CostFree TierSavings Program
Amazon EKS$0.10/hr ($73/month)NoneCompute Savings Plans (up to 66% off)
Google GKE (Zonal)FreeYes — one zonal cluster freeCommitted Use Discounts (up to 57% off)
Google GKE (Regional)$0.10/hr ($73/month)NoneCommitted Use Discounts (up to 57% off)
Azure AKS (Standard)FreeYes — Standard tier is freeAzure Reservations (up to 72% off)
Azure AKS (Premium)$0.10/hr ($73/month)None — adds long-term support featuresAzure Reservations (up to 72% off)

Provider pricing per their respective pricing pages as of March 2026.

For small clusters, EKS is the most expensive on control plane cost alone. AKS Standard is free. GKE zonal clusters are free. For a team running one small cluster, this is a real $73-438/month difference depending on support tier.

At scale, this gap largely disappears. When you're running 20+ nodes, compute pricing dominates and the control plane fee is a rounding error. For equivalent general-purpose instances, EC2 (m5 family), GCE (n2 family), and Azure Dv5 series are within 5-15% of each other depending on region. The committed-use discount programs differ in structure: AWS uses Savings Plans (flexible spend commitments), Google uses Committed Use Discounts (resource commitments), and Azure uses Reservations (capacity reservations). All three offer roughly 40-60% off On-Demand for 1-year commitments.

The real differentiators at scale are ecosystem integration, not cost:

  • EKS wins for deep AWS integration (IAM, VPC, EBS, EFS, ECR all work natively)
  • GKE wins if you want the most hands-off managed Kubernetes experience (Autopilot mode is genuinely impressive)
  • AKS wins for organizations already deep in Microsoft/Azure

How to Reduce EKS Costs (Strategies That Actually Move the Needle)

Not all optimization strategies are created equal. Here's what delivers real savings versus what sounds good but requires disproportionate effort.

Right-sizing pod requests delivers the highest ROI because overprovisioned requests block efficient bin-packing. If every pod requests 4x more CPU than it uses, your nodes can hold 4x fewer pods, requiring 4x more nodes. That multiplies across every compute dollar you spend.

Spot Instances + Karpenter

EC2 Spot offers up to 90% discount vs On-Demand with a 2-minute interruption notice. For stateless, fault-tolerant workloads — batch jobs, CI runners, stateless microservices — this is the single largest cost reduction available.

Karpenter is the preferred autoscaler for Spot. Unlike Cluster Autoscaler, Karpenter:

  • Selects from a wide pool of instance types, reducing interruption risk
  • Supports the price-capacity-optimized allocation strategy (favoring Spot pools with both low price and available capacity)
  • Automatically consolidates pods onto fewer nodes and terminates underused instances
  • Handles Spot-to-On-Demand failover transparently

The strategy: use Karpenter with diverse instance type configurations. Requesting 20+ compatible instance types means AWS is unlikely to interrupt capacity across all of them simultaneously.

For interruption-sensitive workloads that can't use Spot, Cluster Autoscaler with the scale-down-utilization-threshold parameter still delivers meaningful savings by terminating underused On-Demand nodes.

Savings Plans and Reserved Instances

For your predictable baseline compute — the minimum node count running 24/7 — Compute Savings Plans or EC2 Instance Savings Plans dramatically reduce On-Demand costs.

| Purchase Option | Max Discount | Commitment |
|---|---|---|
| Compute Savings Plans | Up to 66% | 1 or 3 year spend commitment (flexible instance types) |
| EC2 Instance Savings Plans | Up to 72% | 1 or 3 year spend commitment (specific family + region) |
| Reserved Instances | Up to 72% | 1 or 3 year (specific type, optional capacity guarantee) |
| Fargate + Compute Savings Plans | Up to 50% | Compute SP covers Fargate automatically |

The standard strategy: Compute Savings Plans for your predictable "floor" capacity + Spot instances for burst above that floor. This combination typically delivers 50-70% compute savings vs pure On-Demand.

Important nuance: Compute Savings Plans apply to the underlying EC2 instance cost, but the EKS Auto Mode management fee is NOT discounted. If you're running Auto Mode, your Savings Plans discount applies to the EC2 portion but not the ~12% management surcharge.

Graviton (ARM64) Instances

AWS Graviton processors offer better price-performance than equivalent x86 instances for cloud-native workloads. Fargate ARM pricing is about 20% cheaper than x86 in us-east-1. For EC2 node groups, Graviton instances generally offer 10-20% better price-performance for comparable workloads.

The requirement: your container images need to be built for linux/arm64 or use multi-arch manifests. Most common container images now publish multi-arch versions. If you're building images yourself, adding --platform linux/amd64,linux/arm64 to your docker buildx build command is usually sufficient.

Mixed architecture clusters work well: route CPU-bound stateless services to Graviton nodes, keep x86-only dependencies on x86 nodes. Karpenter and node selectors/affinity rules handle the routing.

Right-Sizing Pod Resource Requests

This is the highest-leverage optimization because it compounds through your entire compute cost. A pod with a 500m CPU request prevents other pods from scheduling into that CPU slot even if the actual usage is 50m. At cluster scale, this means running 2-5x more nodes than you need.
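The bin-packing effect is easy to quantify: the scheduler places pods by requests, so node count scales with requested CPU, not used CPU. A sketch with illustrative numbers (3.5 allocatable vCPU assumes a 4-vCPU node after system reserves):

```typescript
// Nodes needed to honor CPU requests (CPU-only, single-dimension simplification).
function nodesNeeded(pods: number, cpuRequestPerPod: number, allocatableCpuPerNode: number): number {
  return Math.ceil((pods * cpuRequestPerPod) / allocatableCpuPerNode);
}

const NODE_CPU = 3.5; // allocatable vCPU on a 4-vCPU node after system reserves

// 100 pods that each use ~100m but request 500m:
console.log(nodesNeeded(100, 0.5, NODE_CPU));   // 15 nodes to honor the requests
console.log(nodesNeeded(100, 0.125, NODE_CPU)); // 4 nodes after right-sizing to 125m
```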

Tools:

  • Goldilocks: Runs VPA in recommendation mode and provides a dashboard showing right-sized CPU/memory requests per workload
  • KRR (Kubernetes Resource Recommender): CLI tool that analyzes Prometheus metrics and outputs right-sizing recommendations
  • Kubecost: Provides right-sizing recommendations with cost impact estimates
  • VPA (Vertical Pod Autoscaler): Run in "Off" mode to generate recommendations without automatic restarts

The workflow: install Goldilocks or VPA in audit mode → review recommendations → update Deployment resource requests → observe node count reduction. Most teams find request overprovisioning of 2-5x is common, especially on services that were sized by intuition rather than measurement.

Multi-Tenancy and Non-Production Scale-Down

Multi-cluster proliferation is a common EKS cost mistake. Each team gets their own cluster = each team pays $73/month per cluster in control plane fees. Five teams × two clusters each (dev and prod) = ten clusters = $730/month in control plane fees alone.

Multi-tenancy via namespaces is the alternative: one shared cluster with RBAC and NetworkPolicies per team. You pay $73/month once instead of $730/month. The trade-off is reduced blast-radius isolation — a misconfigured workload in one namespace can affect other tenants. For teams with mature RBAC practices, the cost savings justify shared clusters.

For non-production clusters (dev, staging, test), schedule nodes to scale to zero overnight and on weekends. Pods aren't running, nodes shouldn't be either. Kubecost's cluster-turndown controller automates this scheduling. Dev clusters scaled down to zero for 14 hours/day + weekends run at roughly 35% of their always-on cost.
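The "roughly 35%" figure is straightforward to derive: node cost scales with the hours nodes actually run, while the control plane fee keeps accruing 24/7. A sketch under the assumption of a weekday-only schedule:

```typescript
// Fraction of always-on node cost a scheduled dev cluster actually pays.
function scaledDownFraction(hoursPerWeekday: number): number {
  const weeklyOnHours = hoursPerWeekday * 5; // weekdays only; off on weekends
  return weeklyOnHours / (24 * 7);
}

// Up 10 hours/day on weekdays (down 14 hours/day plus weekends):
console.log(scaledDownFraction(10)); // ~0.2976, i.e. nodes cost ~30% of always-on
// Add the control plane fee (which never scales down) and a small dev
// cluster lands around 35% of its always-on monthly bill.
```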

EKS Cost Visibility: Seeing Costs at the Pod Level

Knowing that your cluster costs $1,400/month is useful. Knowing which namespace, deployment, or team is responsible for which portion is how you actually drive reduction.

AWS offers native split cost allocation — a feature that allocates EC2 compute costs down to individual pods and namespaces in Cost and Usage Reports (CUR). Available Kubernetes cost dimensions:

  • Cluster name
  • Deployment
  • Namespace
  • Node
  • Workload name and type
  • Up to 50 user-defined Kubernetes labels per pod (imported as cost allocation tags)

The Kubernetes label import is the part most teams don't know exists. You can tag pods with team, product, environment, cost-center labels and those labels become cost allocation dimensions in Cost Explorer. Chargebacks to engineering teams, product lines, or environments become possible without third-party tooling.

Two allocation modes:

  1. Resource requests: Allocates costs based on pod CPU and memory requests (simpler, less accurate if requests are overprovisioned)
  2. Amazon Managed Service for Prometheus: Allocates based on the higher of requests and actual utilization (more accurate, requires AMP setup)

AWS uses a 9:1 CPU-to-memory cost ratio based on Fargate prices to allocate instance costs across pods. Split cost data appears in CUR within 24 hours and is visible in Cost Explorer with Cost Categories and Anomaly Detection support.
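To make the 9:1 weighting concrete, here's a simplified sketch of how one instance's monthly cost could be split across its pods by request weights. This mirrors the idea, not the exact CUR computation (which also handles unused capacity and, with AMP, actual utilization); names and figures are illustrative:

```typescript
interface PodRequests {
  name: string;
  vcpu: number;     // CPU request in vCPU
  memoryGb: number; // memory request in GB
}

// Split an instance's cost across pods using the 9:1 vCPU-to-GB weighting
// (the approximate ratio of Fargate vCPU to memory prices).
function splitInstanceCost(instanceMonthly: number, pods: PodRequests[]): Map<string, number> {
  const weight = (p: PodRequests) => p.vcpu * 9 + p.memoryGb * 1;
  const totalWeight = pods.reduce((sum, p) => sum + weight(p), 0);
  return new Map(
    pods.map(p => [p.name, Math.round((instanceMonthly * weight(p)) / totalWeight * 100) / 100]),
  );
}

// An m5.xlarge (~$140/month On-Demand) running two pods:
const shares = splitInstanceCost(140, [
  { name: 'api', vcpu: 2, memoryGb: 4 },    // weight 22
  { name: 'worker', vcpu: 1, memoryGb: 2 }, // weight 11
]);
console.log(shares.get('api'), shares.get('worker')); // 93.33 46.67
```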

Kubecost is the alternative for teams who want dashboards without CUR setup. A free Kubecost subscription is available on AWS Marketplace. It provides granular cost breakdowns by namespace, deployment, pod, and node, plus right-sizing recommendations and the cluster-turndown controller for non-production scale-down.

CloudWatch Container Insights (billed as standard CloudWatch usage) provides CPU, memory, disk, and network metrics at node and pod level — useful for identifying underutilized resources without third-party tooling, though it adds to your CloudWatch costs.

Deploying EKS Cost-Consciously with CDK

If you're provisioning EKS through code, your cost decisions live in your CDK stack. The aws-eks-v2 module (currently alpha) is the recommended module for new projects — it uses native CloudFormation L1 resources and defaults to Auto Mode.

Worth knowing before you use the defaults: Auto Mode is the default capacity type in aws-eks-v2, which means a minimal cluster definition will use Auto Mode and add the ~12% management surcharge discussed earlier. If you don't want that, set defaultCapacityType explicitly.

The AWS CDK best practices guide covers project structure and patterns worth following before scaling out your EKS CDK definitions.

Minimal cluster using Auto Mode (v2 default — includes the ~12% surcharge):

```typescript
import * as eks from '@aws-cdk/aws-eks-v2-alpha';

const cluster = new eks.Cluster(this, 'HelloEKS', {
  version: eks.KubernetesVersion.V1_34,
  // Auto Mode is the default in aws-eks-v2
  // Note: adds ~12% management fee on top of EC2 instance costs
});
```

Explicit Auto Mode with node pools:

```typescript
const cluster = new eks.Cluster(this, 'EksAutoCluster', {
  version: eks.KubernetesVersion.V1_34,
  defaultCapacityType: eks.DefaultCapacityType.AUTOMODE,
  compute: {
    nodePools: ['system', 'general-purpose'],
  },
});
```

Managed node groups with Spot (cost-controlled, no Auto Mode surcharge):

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const cluster = new eks.Cluster(this, 'EksCluster', {
  version: eks.KubernetesVersion.V1_34,
  defaultCapacityType: eks.DefaultCapacityType.NODEGROUP,
  defaultCapacity: 0, // Define node groups explicitly for cost control
});

// Spot node group for non-critical workloads — up to 90% savings vs On-Demand
cluster.addNodegroupCapacity('spot-nodes', {
  minSize: 1,
  maxSize: 10,
  // Multiple instance types reduce Spot interruption risk
  instanceTypes: [
    ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.XLARGE),
    ec2.InstanceType.of(ec2.InstanceClass.M5A, ec2.InstanceSize.XLARGE),
    ec2.InstanceType.of(ec2.InstanceClass.M4, ec2.InstanceSize.XLARGE),
  ],
  capacityType: eks.CapacityType.SPOT,
});
```

Fargate cluster (no EC2 management, pay per pod vCPU/memory):

```typescript
const cluster = new eks.FargateCluster(this, 'FargateCluster', {
  version: eks.KubernetesVersion.V1_34,
});
```

Older aws-eks v1 module with multiple instance types (widely used):

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';
import { KubectlV35Layer } from '@aws-cdk/lambda-layer-kubectl-v35';

const vpc = new ec2.Vpc(this, 'EKSVpc');

const cluster = new eks.Cluster(this, 'EKSCluster', {
  vpc,
  defaultCapacity: 0,
  version: eks.KubernetesVersion.V1_32,
  kubectlLayer: new KubectlV35Layer(this, 'kubectl'),
  ipFamily: eks.IpFamily.IP_V4,
});

// Cost-optimized: multiple instance types for Spot availability,
// autoscaling from 1 to 10, AL2023 AMI
cluster.addNodegroupCapacity('mixed-nodes', {
  amiType: eks.NodegroupAmiType.AL2023_X86_64_STANDARD,
  instanceTypes: [
    new ec2.InstanceType('m5.large'),
    new ec2.InstanceType('m5a.large'),
    new ec2.InstanceType('m4.large'),
  ],
  desiredSize: 2,
  minSize: 1,
  maxSize: 10,
  diskSize: 20,
});
```

For teams choosing between CDK and Terraform for EKS provisioning, the AWS CDK vs Terraform comparison covers the trade-offs in detail.

Key Takeaways

EKS pricing has more dimensions than most guides cover. Here's what actually matters:

  1. The floor is $73/month per cluster — before nodes, before pods. For multi-cluster environments, this multiplies fast.
  2. The extended support trap is real — $0.10/hr to $0.60/hr is a 6x jump. Upgrade Kubernetes versions quarterly to avoid it.
  3. Auto Mode adds ~12% on your EC2 costs — and Savings Plans don't discount that surcharge, only the underlying instance cost.
  4. EKS Capabilities (Argo CD, ACK, KRO) cost real money — $131-211/month for a moderate deployment, on top of everything else.
  5. Compute overprovisioning is the largest hidden cost — right-size pod requests first before optimizing anything else.
  6. For small teams running 1-5 nodes, ECS is frequently cheaper — ECS has no control plane fee and the same compute economics.

The most effective optimization sequence: right-size pod requests (fixes bin-packing), add Spot via Karpenter (up to 90% compute savings), buy Compute Savings Plans for your baseline (up to 66%), and stay current on Kubernetes versions (avoids the extended support cliff).
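The sequence above is easy to sanity-check with back-of-the-envelope math. Here is a minimal TypeScript cost model, assuming an m5.xlarge On-Demand rate of $0.192/hr (us-east-1), a ~70% Spot discount, and a ~30% Compute Savings Plan discount — all illustrative figures, not quotes:

```typescript
// Illustrative model of the optimization sequence: right-size, Spot, Savings Plans.
// All rates are assumptions for us-east-1; verify against current AWS pricing.
const HOURS_PER_MONTH = 730;
const ON_DEMAND_RATE = 0.192; // m5.xlarge On-Demand, $/hr (assumed)

function monthlyCompute(nodes: number, hourlyRate: number): number {
  return nodes * hourlyRate * HOURS_PER_MONTH;
}

// Before: 10 On-Demand nodes, loosely packed
const baseline = monthlyCompute(10, ON_DEMAND_RATE); // ~$1,402/month

// After: right-sizing frees 2 nodes; 5 baseline nodes on a Compute Savings
// Plan (~30% off) and 3 burst nodes on Spot (~70% off)
const optimized =
  monthlyCompute(5, ON_DEMAND_RATE * 0.7) +
  monthlyCompute(3, ON_DEMAND_RATE * 0.3); // ~$617/month

console.log(`Estimated savings: $${(baseline - optimized).toFixed(0)}/month`);
```

Avoiding the extended support fee is worth another $365/month per cluster on top of this, which is why version currency belongs in the same checklist as compute optimization.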

Shift-Left Your FinOps Practice

Move cost awareness from monthly bill reviews to code review. CloudBurn shows AWS cost impact in every PR, empowering developers to make informed infrastructure decisions.

Frequently Asked Questions

How much does Amazon EKS cost per month?
EKS costs at minimum $73/month per cluster (the control plane fee at $0.10/hr). A realistic dev cluster with 3 small EC2 nodes runs around $150-200/month. A production HA cluster with 10 m5.xlarge nodes, NAT Gateways, and a load balancer typically runs $1,200-1,600/month. Enterprise clusters with 50+ nodes can exceed $10,000/month.
Is Amazon EKS free?
No. Unlike ECS, EKS charges $0.10/hr ($73/month) per cluster for the managed control plane, regardless of whether any workloads are running. There is no free tier. The EC2 instances and Fargate pods running your workloads are billed separately on top of this base fee.
What happens to my EKS bill when my cluster enters extended support?
The control plane fee jumps from $0.10/hr to $0.60/hr — a 6x increase. That means $438/month per cluster instead of $73/month. Extended support kicks in automatically 14 months after a Kubernetes version is released in EKS. Staying current on version upgrades is the only way to avoid this surcharge.
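The arithmetic behind those figures is straightforward (AWS bills roughly 730 hours per month):

```typescript
// Control plane fee: standard vs extended support, per the EKS pricing page
const HOURS_PER_MONTH = 730;

const standard = 0.10 * HOURS_PER_MONTH; // $73/month
const extended = 0.60 * HOURS_PER_MONTH; // $438/month

console.log(`Standard: $${standard}/month, Extended: $${extended}/month`);
console.log(`Multiplier: ${extended / standard}x`); // 6x
```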
What does EKS Auto Mode cost and is it worth it?
EKS Auto Mode adds a management fee per EC2 instance managed, approximately 12% of the On-Demand instance cost. Savings Plans and Reserved Instances do not discount this management fee. Auto Mode is worth the surcharge if your team lacks Kubernetes node management expertise or wants to eliminate operational overhead — the time saved on cluster operations can easily outweigh $125/month in management fees for a typical production cluster.
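A rough sketch of the surcharge math, assuming the ~12% figure and an m5.xlarge On-Demand rate of $0.192/hr (us-east-1) — actual Auto Mode fees vary by instance type, so treat this as an estimate:

```typescript
// Auto Mode management fee estimate: ~12% of On-Demand instance cost.
// Savings Plans discount the instance cost, not this fee.
const HOURS_PER_MONTH = 730;
const M5_XLARGE_RATE = 0.192; // us-east-1 On-Demand, $/hr (assumed)

function autoModeFee(nodes: number, hourlyRate: number, pct = 0.12): number {
  return nodes * hourlyRate * HOURS_PER_MONTH * pct;
}

// 10x m5.xlarge: ~$1,402/month compute plus ~$168/month management fee
console.log(`$${autoModeFee(10, M5_XLARGE_RATE).toFixed(0)}/month`);
```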
Is EKS more expensive than ECS?
For small clusters (1-5 nodes), ECS is typically cheaper because ECS has no control plane fee. EKS charges $73/month per cluster before a single node runs. For larger clusters (5+ nodes), the $73/month gap is less significant and EKS starts delivering ROI through the Kubernetes ecosystem (Karpenter, Argo CD, Helm operators, etc.). Compute pricing is identical between ECS and EKS for EC2 and Fargate workloads.
Do AWS Savings Plans apply to EKS?
Yes, with one important exception. Compute Savings Plans (up to 66% off) and EC2 Instance Savings Plans (up to 72% off) apply to EC2 worker nodes. Compute Savings Plans also apply to Fargate pods on EKS (up to 50% off). The exception: the EKS Auto Mode management fee is NOT discounted by Savings Plans — only the underlying EC2 instance cost is reduced.
What are EKS Capabilities and how much do they cost?
EKS Capabilities are managed integrations for Argo CD, ACK (AWS Controllers for Kubernetes), and kro (Kube Resource Orchestrator). Each has a base hourly rate plus a per-resource-hour charge. Running Argo CD with 100 applications costs about $131/month. ACK and kro each cost about $40/month at 1,000 managed resources. Running all three simultaneously with moderate usage costs approximately $211/month on top of your regular cluster costs.
