Most ECR cost guides tell you it's $0.10 per GB for storage. That's true. What they skip is the NAT Gateway data processing charges that can easily exceed your storage costs, the Amazon Inspector line items that appear nowhere near "ECR" on your bill, and the storage accumulation that silently grows every time your CI/CD pipeline runs.
Amazon ECR pricing has six cost dimensions, and three of them don't appear under the ECR line item on your AWS bill.
By the end of this guide, you'll know exactly what your team is paying for ECR, where to find each charge, and which optimizations give the best cost-per-effort return. I've audited ECR costs for teams ranging from five-person startups to platform orgs running 200 microservices, and the pattern is consistent: teams that think they're paying $20/month are often paying $60-80 once you count NAT Gateway processing and Inspector scanning. For a quick estimate of your specific situation, use the ECR pricing calculator.
All pricing below reflects US East (N. Virginia) rates per the Amazon ECR pricing page as of March 2026 unless noted otherwise.
ECR Pricing at a Glance (Quick Reference)
Amazon ECR pricing has three primary billing dimensions: storage, data transfer out, and optional features. A few things that don't cost anything from ECR itself: pushing images into ECR, API calls (DescribeImages, CreateRepository, GetAuthorizationToken), and same-region pulls from AWS compute to your repository.
That last point needs one qualifier: same-region ECR transfer is free, but same-region pulls can still incur NAT Gateway processing charges if the traffic crosses a NAT Gateway. For the dominant production pattern (ECS or EKS pulling from ECR in the same region without NAT in the path), your only recurring cost is storage.
Private Repository Pricing
Private repositories use IAM-based access control and are what most teams work with for production workloads.
| Charge | Rate |
|---|---|
| Storage | $0.10 per GB per month (compressed) |
| Same-region transfer | $0.00 (ECS, Fargate, EKS, Lambda, App Runner) |
| Cross-region transfer | $0.09 per GB |
| Internet (first 1 GB/month) | $0.00 |
| Internet (up to 10 TB/month) | $0.09 per GB |
| Internet (next 40 TB/month) | $0.085 per GB |
| Internet (next 100 TB/month) | $0.07 per GB |
| Internet (over 150 TB/month) | $0.05 per GB |
| Push operations (data in) | $0.00 |
| API calls | $0.00 |
Free tier: 500 MB/month for the first 12 months (new accounts only). This expires after the first year. From July 15, 2025, new AWS accounts also receive up to $200 in free tier credits valid for 12 months.
Public Repository Pricing and Free Tier
Public repositories (ECR Public) use the same $0.10/GB storage rate, but the data transfer model is different and more generous.
| Scenario | Rate |
|---|---|
| Storage | $0.10 per GB per month |
| Anonymous transfer to internet (first 500 GB/month) | $0.00 |
| Authenticated transfer to internet (first 5 TB/month) | $0.00 |
| Transfer to AWS compute in any region | $0.00 (unlimited) |
| Beyond free limits | Billed to the downloading account |
The "downloading account pays" model for public repositories matters if you're distributing images broadly. For open-source projects, the 500 GB/month anonymous free tier and unlimited AWS compute transfer cover most scenarios. Public repository storage free tier is 50 GB/month and never expires.
How ECR Calculates Your Storage Bill
Here's what trips up almost every team when they first look at their ECR bill: ECR doesn't bill per image. It bills for unique compressed image layers. Understanding this difference explains why your bill looks lower (or higher) than you'd expect from a simple (number of images) x (image size) calculation.
A Docker image is a stack of layers. When you build app-v2 from the same base image as app-v1, the shared base layers aren't stored twice. ECR recognizes that the layer digest already exists and bills for the layer once, regardless of how many images reference it.
Layer Deduplication: Why 100 Images May Not Cost What You Think
Here's a concrete example. Suppose you have two images:
- `app-v1`: 2 GB total (1.8 GB `ubuntu:22.04` base + 200 MB application code)
- `app-v2`: 2.1 GB total (same 1.8 GB base + 300 MB updated application code)
Naive calculation: 2 GB + 2.1 GB = 4.1 GB stored, $0.41/month.
Actual ECR bill: 1.8 GB (shared base, billed once) + 200 MB (v1 app layer) + 300 MB (v2 app layer) = 2.3 GB, $0.23/month.
Now scale that to 50 microservices all built from the same Python 3.12 base image. You might expect 50 x 1.5 GB = 75 GB of storage. The actual billed storage could be 1.5 GB (base, once) + 50 x 50 MB (app code) = 4 GB. That's a significant difference.
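The deduplication arithmetic is easy to sanity-check in a few lines. This sketch (the function names are illustrative, not any AWS API) reproduces the 50-microservice example:

```python
STORAGE_RATE = 0.10  # USD per GB-month, us-east-1

def naive_storage_gb(base_gb, app_layers_gb):
    """(number of images) x (image size): counts the base layer once per image."""
    return sum(base_gb + app_gb for app_gb in app_layers_gb)

def billed_storage_gb(base_gb, app_layers_gb):
    """With layer deduplication: the shared base is stored (and billed) once."""
    return base_gb + sum(app_layers_gb)

# 50 microservices on a shared 1.5 GB base image, ~50 MB of app code each
apps = [0.05] * 50
print(f"naive:  {naive_storage_gb(1.5, apps):.1f} GB")
print(f"billed: {billed_storage_gb(1.5, apps):.1f} GB "
      f"(${billed_storage_gb(1.5, apps) * STORAGE_RATE:.2f}/month)")
```

Plug in your own base image size and per-service layer sizes to see how far your bill should sit below the naive estimate.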
Two things to know about compressed layer storage:
- ECR bills on the compressed size (what's actually stored), not the uncompressed image size shown by `docker images` on your local machine. A 2 GB uncompressed image might be 800 MB compressed.
- To see per-image compressed sizes for a repository, use `aws ecr describe-images --repository-name <name> --query 'imageDetails[*].imageSizeInBytes'`. Note that each image reports its full size including shared layers, so summing across images approximates billed storage but can overstate it where deduplication applies.
Cross-Repository Layer Sharing (January 2026)
In January 2026, ECR introduced cross-repository layer sharing (also called blob mounting). Previously, layer deduplication worked within a single repository. Now, shared layers are stored once across multiple repositories in the same registry.
For teams running many microservices with a shared base image, this is significant. If you have 50 repositories each containing images built from the same Python or Node.js base, that base layer is stored once rather than per-repository. AWS confirmed via the January 2026 announcement that common layers are stored once across repositories, reducing storage costs for teams with shared base images. Verify the billing mechanics for shared layers in your account configuration.
A secondary benefit: pushes are faster. When you push an image to a new repository that shares layers with existing ones, ECR skips re-uploading those layers.
Data Transfer Costs: What's Free and What Isn't
The most important thing to understand about ECR data transfer: the dominant production pattern (same-region pulls to AWS compute) costs nothing. ECR data transfer pricing only becomes a consideration when images cross region boundaries or leave the AWS network entirely.
Here's what "same region" covers. ECS on EC2, ECS on Fargate, EKS node groups, Lambda, and App Runner - any of these services pulling from an ECR repository in the same region incur $0.00 in data transfer fees. Push operations are also always free.
Same-Region Pulls: Always Free
This covers the vast majority of production workloads. If your CI/CD pipeline builds in us-east-1 and your ECS cluster runs in us-east-1, the only cost you're paying is storage.
The important caveat: "free" applies to direct pulls from ECR. If your compute resources sit in private subnets and use a NAT Gateway as their internet route, those pulls aren't going directly to ECR - they're going through the NAT Gateway, which adds its own data processing charges. More on this in the hidden costs section below.
Cross-Region and Internet Transfer Rates
Cross-region pulls are charged at $0.09 per GB from most US regions. Unlike internet transfer, which has tiered pricing at higher volumes, this is a flat rate regardless of volume.
Internet transfer (pulling from outside AWS, like a developer's laptop) uses the tiered rates shown in the quick reference table: first 1 GB/month free, then $0.09/GB up to 10 TB, stepping down to $0.05/GB over 150 TB.
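If you do serve images over the internet, the tiered rates can be estimated with a small calculator. A sketch, approximating the tier boundaries with 1 TB = 1,000 GB (AWS meters the exact boundaries on your bill):

```python
# ECR internet transfer-out tiers (us-east-1), per the quick-reference table.
# Widths are approximate: the free gigabyte is applied first, and the $0.09
# tier is treated as the next 10 TB of traffic.
TIERS = [
    (1, 0.00),             # first 1 GB/month free
    (10 * 1000, 0.09),     # up to 10 TB
    (40 * 1000, 0.085),    # next 40 TB
    (100 * 1000, 0.07),    # next 100 TB
    (float("inf"), 0.05),  # over 150 TB
]

def internet_transfer_cost(gb):
    """Charge each slice of the month's traffic at its tier's rate."""
    cost, remaining = 0.0, gb
    for width_gb, rate in TIERS:
        slice_gb = min(remaining, width_gb)
        cost += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return cost

print(round(internet_transfer_cost(101), 2))    # 1 GB free + 100 GB @ $0.09
print(round(internet_transfer_cost(20000), 2))  # spans the $0.09 and $0.085 tiers
```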
The practical advice here: don't rely on cross-region pulls for production workloads. At any real pull frequency, the per-GB charges add up fast.
Cross-Region Replication Cost Mechanics
ECR Cross-Region Replication solves the cross-region pull cost problem by copying images to destination regions upfront. You pay for replication once, then pulls in each region are free.
Replication cost math: data transfer out charges apply at the source region's rate ($0.09/GB) when images are copied. The destination account is then charged for storage in each destination region ($0.10/GB/month).
A worked example: 50 GB of images replicated to two additional regions:
- Data transfer out: 2 regions x 50 GB x $0.09 = $9.00
- Storage in 3 regions: 3 x 50 GB x $0.10 = $15.00/month
- Total: $24/month
Compare this to cross-region pulls instead: 50 GB x $0.09 = $4.50 each time a region pulls the full image set. If your workloads pull the set about five or more times per month (which any active deployment does), replication is cheaper. And that's before accounting for latency improvements from local pulls.
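The break-even is easy to check numerically. A first-month sketch mirroring the example above, assuming each cross-region "pull event" transfers the full 50 GB image set:

```python
STORAGE_RATE = 0.10   # USD per GB-month
TRANSFER_RATE = 0.09  # USD per GB, cross-region

def replication_monthly(gb, extra_regions):
    """Replicate the image set once, then store it in every region."""
    transfer = extra_regions * gb * TRANSFER_RATE
    storage = (1 + extra_regions) * gb * STORAGE_RATE
    return transfer + storage

def cross_region_pull_monthly(gb, pull_events):
    """Store in the source region only; each pull event moves the full set."""
    return gb * STORAGE_RATE + pull_events * gb * TRANSFER_RATE

print(round(replication_monthly(50, 2), 2))  # 24.0
for pulls in range(3, 7):
    print(pulls, round(cross_region_pull_monthly(50, pulls), 2))
```

With these inputs the crossover sits between four and five monthly pull events, after which replication is strictly cheaper.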
For cross-account replication, the destination account must configure a registry permissions policy to allow the source to write to it. Full setup details are in the ECR replication documentation.
The Hidden ECR Costs That Don't Show Up Under ECR
This is the section that most ECR cost guides skip. Three cost categories related to ECR activity appear under completely different AWS services on your bill.
If you look at your AWS Cost Explorer, filter by "Amazon Elastic Container Registry," and the number doesn't match your storage math, the delta is almost certainly in one of these categories: NAT Gateway data processing (under Amazon Virtual Private Cloud), enhanced image scanning (under Amazon Inspector), or encryption operations (under AWS Key Management Service).
NAT Gateway Data Processing ($0.045/GB) - The Biggest Surprise
Here's the architecture that creates this cost. Your ECS or EKS workloads run in private subnets. Private subnets don't have direct internet access. To reach ECR (which is accessed via public endpoints by default), traffic routes through a NAT Gateway. NAT Gateway charges $0.045 per GB of data processed - in both directions.
So every image pull from ECR adds $0.045/GB of NAT processing on top of the storage cost you've already paid. For a team pulling 500 GB of images per month, that's $22.50/month in NAT charges that appear under "Amazon Virtual Private Cloud" in Cost Explorer, not under ECR.
To diagnose this: open Cost Explorer, filter by Service = "Amazon Virtual Private Cloud," group by "Usage Type," and look for "DataProcessing-Bytes" charges. If you see consistent charges that correlate with your deployment frequency, you have this problem.
The EKS cost optimization best practices guide explicitly calls this out and recommends VPC endpoints as the solution. I'll cover implementation in the optimization section. You can estimate your specific exposure with the NAT Gateway pricing calculator, and the Amazon VPC pricing guide covers the full NAT Gateway cost model.
Amazon Inspector Enhanced Scanning
ECR has two scanning modes. Basic scanning is free: it uses AWS native technology to scan for OS-level CVEs on push or on demand, and it costs nothing.
Enhanced scanning is different. It integrates with Amazon Inspector and provides deeper vulnerability analysis including programming language package vulnerabilities. It's genuinely more useful. But the charges appear under Amazon Inspector in Cost Explorer, not ECR, and they can be significant.
Amazon Inspector pricing for ECR scanning (us-east-1):
| Scan Type | Rate |
|---|---|
| Initial scan when image is pushed | $0.09 per image |
| Rescan (continual mode, triggered by CVE updates) | $0.01 per rescan |
| On-demand scan in CI/CD tools | $0.03 per image |
The rescan charge is where costs compound. Continual scanning mode rescans all retained images every time the vulnerability database is updated. For a registry that retains 1,500 images, receives 1,000 new image pushes per month, and sees 15 database-triggered rescans per month:
- 1,000 new images scanned: 1,000 x $0.09 = $90.00
- 1,500 images x 15 rescans: 1,500 x 15 x $0.01 = $225.00
- Total Amazon Inspector bill: $315.00/month
Switch to on-push scanning (no rescans), same 1,000 new images: $90.00/month. Same vulnerability coverage at initial push, no rescan charges.
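The comparison is worth encoding so you can plug in your own registry size. A sketch using the Inspector rates above:

```python
INITIAL_SCAN = 0.09  # USD per image pushed
RESCAN = 0.01        # USD per rescan in continual mode

def continual_monthly(new_images, retained_images, rescans_per_month):
    """On-push scans for new images, plus database-triggered rescans
    of everything retained."""
    return new_images * INITIAL_SCAN + retained_images * rescans_per_month * RESCAN

def on_push_monthly(new_images):
    """Initial scan only; nothing is ever rescanned."""
    return new_images * INITIAL_SCAN

print(round(continual_monthly(1000, 1500, 15), 2))  # 315.0
print(round(on_push_monthly(1000), 2))              # 90.0
```

Note how the rescan term scales with everything you retain, which is another reason lifecycle policies matter: fewer retained images means fewer billable rescans.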
All new Amazon Inspector accounts get a 15-day free trial covering all ECR scans. For on-demand CI/CD scanning, 25 image assessments per account are provided free (one-time).
Managed Signing (AWS Signer)
ECR introduced managed container image signing in November 2025. When enabled, ECR automatically generates cryptographic signatures (using the Notation format) each time an image is pushed. AWS Signer handles key material and certificate lifecycle management.
Managed signing charges appear under AWS Signer, not ECR. The AWS Signer pricing page has the current per-signature rates - verify the exact amount at the ECR pricing page before enabling this for high-push-frequency pipelines. This is worth doing for teams with supply chain security requirements (SLSA attestations, Sigstore-equivalent verification), but the signing cost scales with push frequency.
All signing operations are logged in CloudTrail, which adds CloudTrail cost if you're not already paying for events.
KMS Encryption Charges
ECR defaults to AES-256 encryption using Amazon S3-managed keys. This costs nothing extra.
Enabling KMS encryption (using an AWS managed key or customer-managed key) adds charges: $0.03 per 10,000 KMS API calls. Each image pull triggers a decrypt call. For repositories pulled frequently, this adds up: 100,000 pulls per month = $0.30/month in KMS charges per repository. Not large on its own, but notable for high-traffic repositories.
The honest trade-off: KMS encryption adds key audit trails and rotation control. For most teams, default S3-managed encryption is sufficient. Enable KMS when you have specific compliance requirements for key management, and factor in the per-call cost before enabling it broadly.
Real-World Cost Examples
Abstract pricing tables are useful for reference. But the question I hear most often is "what will this actually cost my team?" Here are three scenarios with concrete numbers.
Small Team: Single-Region CI/CD (ECS Fargate)
Setup: CodeBuild builds images, pushes to ECR, ECS Fargate pulls in the same region. Classic single-region deployment.
Cost breakdown:
- Storage: 10 repositories x 10 images retained x 500 MB average = 50 GB x $0.10 = $5.00/month
- Data transfer: $0.00 (same-region Fargate pulls)
- Total: $5.00/month
This assumes lifecycle policies are in place keeping the last 10 images per repository. Without them: 10 repos x 10 builds/day x 500 MB = 50 GB of new storage per day. After 30 days, that's 1.5 TB stored at $150/month. The lifecycle policy is doing $145/month of work here.
This also assumes no NAT Gateway in the traffic path. If Fargate is in private subnets behind a NAT Gateway (common), add $0.045/GB for every image pull.
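The lifecycle-policy effect above reduces to simple arithmetic. A sketch, deliberately assuming the worst case of no layer sharing between builds:

```python
STORAGE_RATE = 0.10  # USD per GB-month

def stored_gb(repos, builds_per_day, image_gb, days, keep_last=None):
    """Billed storage after `days` of CI pushes. Every build is assumed to
    push a full image with no shared layers - a deliberate worst case."""
    if keep_last is None:
        return repos * builds_per_day * days * image_gb  # nothing ever expires
    return repos * keep_last * image_gb                  # retention is capped

# 10 repos, 10 builds/day, 500 MB images, 30 days
no_policy = stored_gb(10, 10, 0.5, 30)                  # 1500 GB
with_policy = stored_gb(10, 10, 0.5, 30, keep_last=10)  # 50 GB
print(round(no_policy * STORAGE_RATE, 2), round(with_policy * STORAGE_RATE, 2))
```

Layer deduplication will soften the no-policy number in practice, but the shape of the curve (linear growth forever versus a flat cap) is the point.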
Growing Team: Multi-Region Active-Active Deployment
Setup: Source repository in us-east-1, Cross-Region Replication to eu-west-1 and ap-southeast-1. Workloads in each region pull locally.
Cost breakdown:
- Replication transfer: 2 destination regions x 50 GB x $0.09 = $9.00 (one-time per image set)
- Storage in us-east-1: 50 GB x $0.10 = $5.00/month
- Storage in eu-west-1: 50 GB x $0.10 = $5.00/month
- Storage in ap-southeast-1: 50 GB x $0.10 = $5.00/month
- Total: $24/month (after initial replication)
For context: if you skipped replication and did cross-region pulls instead, you'd pay $0.09/GB per pull event. For any active workload pulling images more than a few times per month, replication is cheaper within weeks.
If you need help comparing Amazon ECS pricing for the compute side of this architecture, there's a separate guide covering Fargate and EC2 launch type costs.
Large Organization: The Accumulation Risk
This is the slow-burn problem I see most often in mature accounts.
Setup: Dozens of repositories, active CI/CD pipelines, no lifecycle policies.
The math: 20 repositories x 10 image pushes per month x 500 MB average image size = 100 GB of new storage added per month. At $0.10/GB:
- Month 1: $10/month in new storage
- Month 6: $60/month
- Month 12: $120/month (just for the images pushed that year, not counting earlier accumulation)
A team that's been building for two years without lifecycle policies can easily have 1-2 TB in ECR that nobody's looked at. At $100-200/month, that's a meaningful line item for something that's mostly forgotten images from deprecated services.
The fix is straightforward once you know about it. The problem is that it's invisible until someone looks at Cost Explorer and notices the ECR line item has been climbing for 18 months.
How to Reduce Your ECR Costs
Here's my ranking of ECR cost optimizations by impact-to-effort ratio. Lifecycle policies are the clear winner: the highest impact, and you can implement them in an afternoon.
Lifecycle Policies (Highest Impact)
Without lifecycle policies, ECR storage grows with every image push - forever. With them, you define rules that automatically expire old images.
A representative example: a team storing 200 GB with no cleanup configured drops to 50 GB after implementing a policy keeping the last 10 images per repository. That's $15/month in storage savings, and the savings compound as the team keeps building.
The Well-Architected Container Build Lens explicitly recommends lifecycle policies as the primary cost control for ECR. Here's the two-rule pattern I recommend for most repositories:
{
"rules": [
{
"rulePriority": 1,
"description": "Keep last 10 images",
"selection": {
"tagStatus": "tagged",
"tagPatternList": ["*"],
"countType": "imageCountMoreThan",
"countNumber": 10
},
"action": { "type": "expire" }
},
{
"rulePriority": 2,
"description": "Delete untagged images after 1 day",
"selection": {
"tagStatus": "untagged",
"countType": "sinceImagePushed",
"countUnit": "days",
"countNumber": 1
},
"action": { "type": "expire" }
}
]
}
Rule priority matters: lower numbers run first. The untagged cleanup runs second here because untagged images might be intermediate build artifacts - cleaning them up after one day is aggressive but appropriate for most teams.
Before applying lifecycle policies in production: use the console's dry-run preview feature to see which images would be expired before committing. The ECR console shows a preview of what each policy would delete.
One limit to be aware of: lifecycle policies support a maximum of 50 rules per repository.
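Because a malformed policy only fails when you apply it, a cheap local check can catch mistakes in CI first. A sketch (`validate_lifecycle_policy` is a hypothetical helper, not an AWS SDK call) using the two-rule policy above:

```python
import json

# The two-rule policy from above, embedded for a local check before
# running `aws ecr put-lifecycle-policy`.
POLICY_TEXT = """
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep last 10 images",
      "selection": {
        "tagStatus": "tagged",
        "tagPatternList": ["*"],
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": { "type": "expire" }
    },
    {
      "rulePriority": 2,
      "description": "Delete untagged images after 1 day",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 1
      },
      "action": { "type": "expire" }
    }
  ]
}
"""

def validate_lifecycle_policy(text):
    """Cheap local checks: valid JSON, at most 50 rules, unique priorities,
    and an expire action on every rule. ECR validates server-side too;
    this just fails faster in CI."""
    rules = json.loads(text)["rules"]
    problems = []
    if len(rules) > 50:
        problems.append("exceeds the 50-rule limit")
    priorities = [r["rulePriority"] for r in rules]
    if len(set(priorities)) != len(priorities):
        problems.append("duplicate rulePriority values")
    for r in rules:
        if r.get("action", {}).get("type") != "expire":
            problems.append(f"rule {r['rulePriority']}: action.type must be 'expire'")
    return problems

print(validate_lifecycle_policy(POLICY_TEXT))  # [] means it passes
```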
VPC Endpoints to Eliminate NAT Gateway Tax
If your ECS, EKS, or Lambda workloads run in private subnets behind a NAT Gateway, every image pull costs $0.045/GB in NAT data processing charges on top of any ECR transfer costs.
VPC endpoints (AWS PrivateLink) create a private route from your VPC to ECR, bypassing the NAT Gateway entirely. You need three endpoints:
- `com.amazonaws.[region].ecr.dkr` - Docker Registry API (push/pull operations)
- `com.amazonaws.[region].ecr.api` - ECR management API (DescribeImages, CreateRepository, etc.)
- S3 Gateway endpoint - ECR stores image layers in S3, and the S3 gateway endpoint is free
The interface endpoints (ecr.dkr and ecr.api) cost $0.01/hour per AZ plus $0.01/GB processed. For two endpoints across two AZs, that's roughly $29.20/month in hourly charges before any data flows. Against NAT processing at $0.045/GB, the break-even on data processing alone sits around 830 GB/month of pulls - and if moving ECR traffic off the NAT lets you downsize or retire the gateway itself (roughly $32.85/month per gateway in hourly charges), the endpoints pay for themselves much sooner.
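The NAT-versus-endpoint break-even is worth computing for your own pull volume. A sketch assuming two interface endpoints across two AZs, at $0.01/hour per endpoint per AZ plus $0.01/GB processed, against NAT processing at $0.045/GB:

```python
HOURS_PER_MONTH = 730
NAT_PROCESSING = 0.045      # USD per GB through the NAT Gateway
ENDPOINT_HOURLY = 0.01      # USD per hour, per interface endpoint, per AZ
ENDPOINT_PROCESSING = 0.01  # USD per GB through the interface endpoints

def nat_monthly(pull_gb):
    """NAT data processing on ECR pulls only. The NAT's own hourly charge
    is excluded, since other traffic may still need the gateway."""
    return pull_gb * NAT_PROCESSING

def endpoints_monthly(pull_gb, azs=2):
    """Two interface endpoints (ecr.api, ecr.dkr) across `azs` zones;
    the S3 gateway endpoint adds nothing."""
    return 2 * azs * ENDPOINT_HOURLY * HOURS_PER_MONTH + pull_gb * ENDPOINT_PROCESSING

# Break-even on processing alone: 0.045g = 29.20 + 0.01g  =>  g ~ 834 GB/month
for gb in (500, 1000, 2000):
    print(gb, round(nat_monthly(gb), 2), round(endpoints_monthly(gb), 2))
```

If retiring the NAT Gateway entirely is on the table, add its hourly charge to the NAT side of the comparison and the break-even drops well below these figures.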
Security group requirement: the endpoint security group must allow inbound HTTPS (port 443) from the private subnet CIDR blocks where your workloads run.
One caveat: pull-through cache rules still require internet access for the first pull from an upstream registry. If you're caching Docker Hub images via pull-through cache, the NAT Gateway is still needed for that initial population. It won't be needed for subsequent pulls of the same image, but don't remove your NAT Gateway if you rely on pull-through cache without understanding this. Full configuration steps are in the ECR VPC endpoint documentation.
For the NAT Gateway pricing context and how to calculate your current NAT Gateway spend, there's a dedicated guide covering the full NAT Gateway cost model.
ECR Archive for Compliance-Heavy Repositories
ECR introduced an archive storage class in November 2025. Archived images can't be pulled until restored (restoration takes under 20 minutes), but they don't count toward the per-repository image limit and are stored at a reduced rate compared to standard storage.
Check the current archive storage rate at the ECR pricing page - the rate may have been updated since this guide was published.
This is specifically useful for teams with compliance requirements to retain images for 1-3 years but no operational need to pull them. Instead of paying full storage rates for hundreds of old images, archive them.
Important caveat: archived images have a 90-day minimum storage duration. If you restore and delete an image before 90 days, you're charged for the full 90 days. Plan your archival strategy accordingly - don't archive images you might need to roll back to within the next few months.
Lifecycle policies can automate archival based on last pull time (sinceImagePulled), which is useful for gradually moving your backlog of untouched images to archive storage.
Pull-Through Cache for External Images
Pull-through cache rules let ECR cache images from upstream registries: Docker Hub, ECR Public, the Kubernetes registry, Quay, Azure Container Registry, GitHub Container Registry, and GitLab Container Registry.
The primary reason to use this is Docker Hub rate limits: Docker Hub throttles unauthenticated pulls (100 per 6 hours per IP for public images), and CI/CD systems hitting those limits during peak build periods cause failed builds. By routing Docker Hub pulls through ECR pull-through cache, you get unlimited pulls from your cached copy.
In March 2025, AWS added ECR-to-ECR pull-through cache, letting you automatically sync images between ECR private registries across accounts and regions. This replaces the manual pattern of maintaining copies in every region.
Cached images count toward ECR storage ($0.10/GB/month), but you control lifecycle policies on them. If you're caching large upstream images, apply lifecycle policies to prevent the cache from growing indefinitely.
Combine pull-through cache with VPC endpoints for pull-through cache access from private subnets. The first pull of any image from an upstream registry still requires internet access (the NAT Gateway), but subsequent pulls are served locally.
Minimize Image Size
Smaller images mean lower storage costs and faster pulls. At $0.10/GB/month, a 200 MB image stored for a year costs $0.24. A 2 GB image costs $2.40. Multiply by the number of images retained and the difference adds up.
The AWS Container Build Lens and ECR performance documentation recommend:
- Alpine and Distroless base images: Alpine is typically under 10 MB compressed. Ubuntu is 30-75 MB. Debian full images are larger. For many applications, you don't need a full OS in your container.
- Multi-stage builds: Use a build stage with compilers and package managers, copy only the compiled artifacts to a minimal runtime stage. The final image doesn't include build tooling.
- Chain RUN instructions: Each `RUN` command creates a new layer. `apt-get update && apt-get install -y package && apt-get clean` in one `RUN` creates one layer. Splitting it into three `RUN` commands creates three layers and retains the intermediate package cache in layers you can't clean.
- Dependency layer ordering: Place dependencies (package installs) before application code in the Dockerfile. Docker's layer cache reuses unchanged layers from previous builds. If you put `COPY . .` before `RUN npm install`, every code change invalidates the npm install layer.
Windows images carry particular overhead: mcr.microsoft.com/windows/servercore is approximately 1.7 GB compressed in ECR. If you're running Windows containers, image size optimization is even more impactful than for Linux workloads.
Implementing ECR Cost Controls with AWS CDK
The most durable way to apply these optimizations is to codify them in your IaC so every new repository gets lifecycle policies by default. The alternative - relying on manual enforcement - breaks down as teams grow and repositories proliferate.
The aws-cdk-lib/aws-ecr module gives you the Repository construct with lifecycle rules built in. Here's what a production-ready repository definition looks like.
Repository with Lifecycle Policy (TypeScript)
import * as ecr from 'aws-cdk-lib/aws-ecr';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Duration, RemovalPolicy } from 'aws-cdk-lib';
const repository = new ecr.Repository(this, 'AppRepository', {
repositoryName: 'my-app',
// Basic scanning on push - free, catches OS-level CVEs
imageScanOnPush: true,
// Prevents accidental tag overwriting and reduces untagged accumulation
imageTagMutability: ecr.TagMutability.IMMUTABLE,
// Note: enabling KMS encryption adds $0.03 per 10,000 API calls
// encryption: ecr.RepositoryEncryption.KMS,
// Clean up on stack deletion to avoid orphaned images
removalPolicy: RemovalPolicy.DESTROY,
emptyOnDelete: true,
});
// Keep only last 10 images - this is the highest-impact cost control
repository.addLifecycleRule({
maxImageCount: 10,
description: 'Keep last 10 images',
});
// Delete untagged images after 1 day
repository.addLifecycleRule({
tagStatus: ecr.TagStatus.UNTAGGED,
maxImageAge: Duration.days(1),
description: 'Remove untagged images',
});
// Grant pull-only access to ECS task execution role
const taskExecutionRole = new iam.Role(this, 'TaskExecutionRole', {
assumedBy: new iam.ServicePrincipal('ecs-tasks.amazonaws.com'),
});
repository.grantPull(taskExecutionRole);
The imageTagMutability: ecr.TagMutability.IMMUTABLE flag prevents overwriting existing tags. This is both a cost control (prevents a latest push from creating an untagged orphan of the previous latest) and a security control (prevents tag substitution attacks).
If you need flexibility for dev and test tags while keeping production tags immutable, CDK v2.239.0 added IMMUTABLE_WITH_EXCLUSION:
new ecr.Repository(this, 'AppRepository', {
imageTagMutability: ecr.TagMutability.IMMUTABLE_WITH_EXCLUSION,
imageTagMutabilityExclusionFilters: [
ecr.ImageTagMutabilityExclusionFilter.wildcard('dev-*'),
ecr.ImageTagMutabilityExclusionFilter.wildcard('test-*'),
],
});
For the difference between ECS task role and task execution role when granting ECR access, this ECS task role vs execution role guide covers the distinction in detail.
Repository with Lifecycle Policy (Python)
import aws_cdk.aws_ecr as ecr
import aws_cdk.aws_iam as iam
from aws_cdk import Duration, RemovalPolicy
repository = ecr.Repository(self, "AppRepository",
repository_name="my-app",
# Basic scanning on push - free
image_scan_on_push=True,
# Immutable tags prevent overwriting and reduce orphaned images
image_tag_mutability=ecr.TagMutability.IMMUTABLE,
removal_policy=RemovalPolicy.DESTROY,
empty_on_delete=True,
)
# Keep only last 10 images
repository.add_lifecycle_rule(
max_image_count=10,
description="Keep last 10 images",
)
# Delete untagged images after 1 day
repository.add_lifecycle_rule(
tag_status=ecr.TagStatus.UNTAGGED,
max_image_age=Duration.days(1),
description="Remove untagged images",
)
# Grant pull access to ECS task execution role
task_execution_role = iam.Role(self, "TaskExecutionRole",
assumed_by=iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
)
repository.grant_pull(task_execution_role)
One thing I pay attention to when reviewing CDK stacks: teams often enable KMS encryption (encryption: ecr.RepositoryEncryption.KMS) as a blanket "security good practice" without considering the per-call cost for high-pull-frequency repositories. The CloudBurn PR review workflow catches this - it flags when a CDK diff enables enhanced scanning or KMS encryption so you can make an informed decision before deploying, rather than discovering the cost on your next bill. For the broader workflow of catching infrastructure cost decisions in code review, that guide covers how to build cost review into your PR process. If you want to estimate your CDK infrastructure costs before deployment, there are four practical methods that work with this kind of infrastructure change.
Shift-Left Your FinOps Practice
Move cost awareness from monthly bill reviews to code review. CloudBurn shows AWS cost impact in every PR, empowering developers to make informed infrastructure decisions.
ECR vs. Docker Hub vs. GHCR: When ECR Makes Economic Sense
The right registry depends on your stack. For AWS-native workloads, ECR usually wins on economics. But there are real cases where it doesn't.
ECR advantages for AWS teams:
- No per-seat pricing - you pay for storage and transfer, not user count
- Same-region pulls to ECS, EKS, Fargate, Lambda are free (zero data transfer cost for the dominant production pattern)
- Native IAM access control without additional auth configuration
- No pull rate limits for in-region compute
Docker Hub makes sense for teams distributing public images or building on non-AWS infrastructure. The per-seat pricing model ($7-9/month per user for private repositories) can be cheaper than ECR for very small teams with large images and infrequent pulls. The pull rate limits (100 pulls/6 hours for unauthenticated, 200/6 hours for free authenticated accounts) are a real problem for CI/CD-heavy teams, though paid plans raise these limits. For AWS-native teams, Docker Hub is mostly useful as an upstream for ECR pull-through cache.
GitHub Container Registry (GHCR) is compelling for GitHub-native teams. Storage is included in GitHub plans (packages storage allowance), and GitHub Actions integration works without additional auth configuration. For teams already paying for GitHub Team or Enterprise, GHCR storage up to the plan's included amount is effectively free. The trade-off: no native IAM integration, and you're adding a GitHub dependency to your AWS deployment pipeline.
Decision framework:
| Scenario | Recommendation |
|---|---|
| AWS-native workloads (ECS, EKS, Fargate, Lambda) | ECR (free same-region transfer wins) |
| Public open-source images | ECR Public or GHCR |
| GitHub Actions primary CI/CD, small team | GHCR (plan storage included) |
| Cross-platform (AWS + GCP + on-prem) | Evaluate GHCR or Artifactory |
| Docker Hub dependency (upstream images) | ECR pull-through cache to avoid rate limits |
For teams choosing between ECS launch types, a choice that affects how ECR pulls work, this Amazon ECS vs Fargate guide covers the tradeoffs.
Key Takeaways
Amazon ECR pricing is simple on paper ($0.10/GB storage, same-region transfer free), but the real cost picture includes charges from three other AWS services that don't appear under the ECR line item on your bill.
Here's what to do from here:
- Implement lifecycle policies on every repository - this is the highest-ROI optimization and it's a one-time configuration. Use the CDK examples above to codify it so every new repository gets lifecycle rules by default.
- Check your Cost Explorer for Amazon Virtual Private Cloud data processing charges - if you see consistent DataProcessing-Bytes charges that correlate with deployment frequency, you're paying NAT Gateway tax on ECR pulls. VPC endpoints are the fix.
- Audit your Amazon Inspector enhanced scanning mode - if you have continual scanning enabled, calculate the rescan cost: (total images retained) x (average rescans per month) x $0.01. Compare to on-push mode at $0.09 per new image. For most teams, on-push is sufficient and significantly cheaper.
- For compliance teams: evaluate the ECR Archive storage class (November 2025) for images you must retain but never pull. The 90-day minimum storage requirement is the key constraint to plan around.
- For multi-region deployments: set up Cross-Region Replication rather than relying on cross-region pulls. Replicate once, pull locally for free.
If you're seeing ECR costs that don't match your storage + transfer math, drop a comment below. The hidden dimensions are almost always NAT Gateway processing or Inspector scanning, and once you know where to look it's straightforward to diagnose. For a deeper look at the compute costs that accompany ECR workloads, the Amazon ECS pricing guide covers Fargate and EC2 pricing in detail. To estimate your specific NAT Gateway spend if you're routing ECR pulls through one, the NAT Gateway pricing guide has the full cost model.