Amazon S3 Pricing Explained: All Costs + Calculator

Amazon S3 costs more than just storage. Learn all 8 storage classes with real prices, egress costs, bill shock scenarios, and use the free S3 pricing calculator.

March 13th, 2026

If you've ever opened your AWS bill and wondered why the S3 line is higher than expected, you're not alone. S3 has a reputation for being cheap and simple. That reputation breaks down the moment you realize there are six separate billing dimensions, not just storage - and most guides recite the pricing page without explaining why real bills diverge from the math you'd do in your head.

S3 Glacier Deep Archive costs ~$0.00099/GB/month. S3 Standard costs $0.023/GB/month. That's a 23x difference - and most teams aren't storing data in the right class for their actual access patterns.

I've analyzed a lot of S3 configurations through CloudBurn and the pattern is consistent: the gap between the right and wrong storage class choice is enormous, and most teams don't realize it until the bill arrives.

This guide covers every S3 pricing dimension with current numbers, walks through real cost scenarios for common workloads, explains the bill shock situations that send people to Stack Overflow at 11pm, and shows you how to implement cost controls in AWS CDK. If you want to model your specific usage, the S3 pricing calculator lets you compare storage classes and estimate monthly costs without reading a wall of rate tables.

All prices are for US East (N. Virginia) as of March 2026. Prices vary by region.

What S3 Actually Costs (The Short Answer)

S3 is pay-as-you-go with no minimum fees and no upfront commitment. There's no idle cost for an unused bucket, and data ingress (uploads) is always free. But that simplicity ends there.

Most pricing articles focus exclusively on storage costs. The other five billing dimensions are where surprise bills come from. Here's the full picture before we go deep on each component.

The 6 Cost Components

Amazon S3 has six distinct pricing dimensions, each triggered by different usage patterns:

  1. Storage - what you store, charged per GB/month by storage class. The most visible cost, but not always the largest.
  2. Requests and data retrievals - every API call costs money. PUT/COPY/POST/LIST requests, GET requests, and archive retrieval requests are all billed. Even browsing the S3 console generates charges.
  3. Data transfer - moving data out of S3 to the internet or to another AWS region. Ingress is free; egress isn't (beyond the first 100 GB/month).
  4. Management and analytics - S3 Storage Lens, S3 Inventory, Storage Class Analysis, S3 Metadata, and Object Tagging.
  5. Replication - Cross-Region Replication (CRR) and Same-Region Replication (SRR) double your storage and add transfer costs for CRR.
  6. Transform and query - S3 Object Lambda and S3 Select, which let you process data in-place.

A few things that are genuinely free: DELETE and CANCEL requests, data transfer between S3 buckets in the same region, and data transfer from S3 to CloudFront.

How the Free Tier Changed in July 2025

If you've used AWS for a while, you probably remember the legacy S3 free tier: 5 GB of S3 Standard storage, 20,000 GET requests, and 2,000 PUT requests per month for 12 months. That model ended on July 15, 2025 for new accounts.

New AWS accounts now receive up to $200 in AWS Free Tier credits applicable to eligible services including S3. These credits are available on a free plan for 6 months after account creation, and all credits must be used within 12 months. If you upgrade to a paid plan, remaining credits automatically apply to your bill.

This matters for two reasons. First, most competitor articles still describe the old model or are silent on this change - so if you've read about "5 GB free forever" recently, that information is outdated for accounts created after July 15, 2025. Second, the $200 credit model is more generous for new accounts doing heavy initial testing, since it's a flat dollar amount that applies across all AWS services rather than being locked to specific S3 usage limits.

Accounts created before July 15, 2025 retained the original free tier terms.

Storage class selection is where the pricing decisions actually happen - that single choice can reduce your monthly bill by over 95%.

S3 Storage Classes and What They Cost

There are 8 storage classes for general purpose buckets. Choosing correctly is the highest-ROI decision you can make for S3 costs. Each class makes a different trade-off between storage cost, retrieval speed, minimum storage duration, and retrieval fees.

A few things worth stating upfront: storage classes are set per object, not per bucket. You can have objects in S3 Standard and S3 Glacier Deep Archive within the same bucket. And a single bucket can hold objects across all classes except S3 Express One Zone (which requires a directory bucket). The AWS storage class overview covers the full class specifications if you need the reference-level detail.

Here's the full comparison table before we go through each class:

| Storage Class | Storage Price/GB/month | Min Duration | Min Object Size | Retrieval Latency | AZs |
|---|---|---|---|---|---|
| S3 Standard | $0.023 (first 50 TB) | None | None | Milliseconds | >= 3 |
| S3 Intelligent-Tiering | Varies by tier | None | None (< 128 KB not auto-tiered) | Milliseconds (FA/IA/AIA) | >= 3 |
| S3 Express One Zone | Lower than Standard | None | None | Single-digit ms | 1 |
| S3 Standard-IA | $0.0125 | 30 days | 128 KB | Milliseconds | >= 3 |
| S3 One Zone-IA | $0.01 | 30 days | 128 KB | Milliseconds | 1 |
| S3 Glacier Instant Retrieval | $0.004 | 90 days | 128 KB | Milliseconds | >= 3 |
| S3 Glacier Flexible Retrieval | $0.0036 | 90 days | None* | Minutes to 12 hours | >= 3 |
| S3 Glacier Deep Archive | ~$0.00099 | 180 days | None* | 9-48 hours | >= 3 |

*Glacier Flexible Retrieval and Deep Archive add 40 KB of metadata overhead per object (8 KB at S3 Standard rates + 32 KB at the respective archive class rate). This matters for workloads with millions of small archived objects.

S3 Standard

S3 Standard is the default and baseline class for frequently accessed data - anything you expect to read more than once per month. No minimum storage duration, no minimum object size, no retrieval fees.

Pricing in us-east-1:

  • Storage: $0.023/GB/month for first 50 TB, $0.022/GB (next 450 TB), $0.021/GB (over 500 TB)
  • PUT/COPY/POST/LIST requests: $0.005 per 1,000 requests
  • GET/SELECT and other requests: $0.0004 per 1,000 requests

The tiered storage pricing kicks in at real scale - 50 TB is roughly $1,150/month - so for most teams, you're paying the flat $0.023 rate.
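As a sketch, the tiered rate math looks like this in TypeScript (illustrative helper, not an official calculator; `standardStorageCost` is a made-up name, and it uses decimal units where 1 TB = 1,000 GB):

```typescript
// Monthly S3 Standard storage cost in us-east-1, applying the tiered
// per-GB rates (illustrative sketch; decimal units: 1 TB = 1,000 GB).
function standardStorageCost(gb: number): number {
  const tiers: Array<[number, number]> = [
    [50_000, 0.023],    // first 50 TB
    [450_000, 0.022],   // next 450 TB
    [Infinity, 0.021],  // over 500 TB
  ];
  let remaining = gb;
  let cost = 0;
  for (const [size, rate] of tiers) {
    const billed = Math.min(remaining, size);
    cost += billed * rate;
    remaining -= billed;
    if (remaining <= 0) break;
  }
  return cost;
}

console.log(standardStorageCost(50_000).toFixed(0));  // 1150 -> the ~$1,150/month figure
console.log(standardStorageCost(100_000).toFixed(0)); // 2250 -> second tier kicks in
```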

Right for: active application data, frequently read files, content delivery, anything where you need consistent millisecond access without worrying about retrieval fees.

S3 Intelligent-Tiering (and When It's Actually Worth It)

S3 Intelligent-Tiering is the "set and forget" option for data with unpredictable or changing access patterns. It automatically moves objects between five access tiers based on how often they're actually accessed.

The five tiers:

  • Frequent Access: S3 Standard rates (objects that were accessed recently)
  • Infrequent Access: ~40% cheaper than Standard (objects not accessed for 30 days)
  • Archive Instant Access: ~68% cheaper than Standard (objects not accessed for 90 days)
  • Archive Access (optional, must be activated): asynchronous retrieval in minutes to hours (90+ days without access)
  • Deep Archive Access (optional, must be activated): up to 48-hour retrieval (180+ days without access)

The monitoring fee: $0.0025 per 1,000 objects/month. Objects under 128 KB are exempt from both monitoring and auto-tiering - they always stay in the Frequent Access tier at Standard rates with no monitoring charge.

What makes it genuinely different from Standard-IA: no retrieval fees across any tier. That's the key advantage. With Standard-IA you pay per-GB when you read data. With Intelligent-Tiering, you don't.

When it's worth it: objects >= 128 KB where you're not certain of the access pattern. If objects sit untouched for 30 days, Intelligent-Tiering moves them to IA automatically - 40% savings with no action on your part. For 90-day quiet periods, you're at 68% savings.

When it's not worth it: objects you know are accessed frequently (monitoring fee with no benefit), objects under 128 KB (no auto-tiering, monitoring is free, but no savings either), or workloads where you know the access pattern precisely and can use a cheaper explicit class like Standard-IA.
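A back-of-envelope way to see the size threshold: assume an object settles in the Infrequent Access tier (priced here at Standard-IA's $0.0125/GB rate, which matches us-east-1) and ask when the monitoring fee is repaid. This is a rough sketch, not an official formula:

```typescript
// When does Intelligent-Tiering's monitoring fee pay for itself?
// Rough sketch: assumes the object sits in the IA tier (~$0.0125/GB).
const monitoringPerObject = 0.0025 / 1000;  // $/object/month
const savingsPerGb = 0.023 - 0.0125;        // $/GB/month saved in the IA tier
const breakEvenGb = monitoringPerObject / savingsPerGb;
console.log(`${Math.round(breakEvenGb * 1_000_000)} KB`); // ~238 KB
```

By this rough math, an object needs to be a couple hundred KB before the IA tier alone repays the monitoring fee - objects just over the 128 KB threshold rely on the deeper Archive Instant Access tier, or longer dwell times, for their payoff.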

S3 Standard-IA and One Zone-IA

S3 Standard-IA costs $0.0125/GB/month - roughly 46% cheaper than S3 Standard. It's designed for long-lived data accessed once per month or less. The trade-offs:

  • 30-day minimum storage duration (delete at day 15, you still pay for 30 days)
  • 128 KB minimum billable object size (a 50 KB file is billed as 128 KB)
  • Per-GB retrieval fees apply on GET requests

The 128 KB trap catches people constantly. If you store thousands of small files (configs, thumbnails, short logs) in Standard-IA, each one is billed as 128 KB regardless of actual size. For workloads with many small objects, Standard can be cheaper than Standard-IA.
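To put hypothetical numbers on the trap (decimal units, rates from above): a million 20 KB objects billed at the 128 KB floor cost more in Standard-IA than in Standard, despite IA's lower per-GB rate.

```typescript
// The 128 KB minimum billable size in action (illustrative math).
const objectCount = 1_000_000;
const actualKb = 20;                          // real object size
const billedKbIA = Math.max(actualKb, 128);   // Standard-IA bills 128 KB minimum

const standardCost = (objectCount * actualKb / 1e6) * 0.023;  // 20 GB at Standard
const iaCost = (objectCount * billedKbIA / 1e6) * 0.0125;     // 128 GB-equivalent at IA

console.log(standardCost.toFixed(2)); // 0.46
console.log(iaCost.toFixed(2));       // 1.60 -> ~3.5x more than Standard
```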

S3 One Zone-IA costs $0.01/GB/month - about 57% cheaper than S3 Standard. Same 30-day minimum and 128 KB minimum as Standard-IA. The key difference: objects live in a single Availability Zone. If that AZ is destroyed, data is lost. Use this only for data that's recreatable - cache dumps, derived thumbnails, secondary backups that can be rebuilt from a primary copy.

S3 Glacier - Three Archive Tiers Compared

All three Glacier classes store data across 3+ AZs with eleven nines (99.999999999%) of durability. The differences are retrieval speed and cost.

S3 Glacier Instant Retrieval at $0.004/GB/month: millisecond retrieval (same latency as Standard-IA), 90-day minimum duration, 128 KB minimum billable size. Per-GB retrieval fees apply. Best for quarterly-access data where you might need it suddenly - compliance reports, ML datasets you pull a few times a year.

S3 Glacier Flexible Retrieval at $0.0036/GB/month: restore time ranges from 1-5 minutes (Expedited, charged), 3-5 hours (Standard, charged), or 5-12 hours (Bulk, free). 90-day minimum duration. No minimum object size, but 40 KB of metadata overhead per object (8 KB at Standard rates + 32 KB at Glacier Flexible rates). For truly asynchronous access where you can plan retrievals in advance, the free Bulk tier makes this class very cost-effective. Provisioned Capacity Units ($100/unit/month) guarantee Expedited retrieval availability if you need it reliably.

S3 Glacier Deep Archive at approximately $0.00099/GB/month: the cheapest storage class in S3. Retrieval takes 9-48 hours (standard default is 12 hours). 180-day minimum storage duration. Same 40 KB metadata overhead per object. Right for compliance archives, legal records, media masters - anything you're legally required to retain but genuinely expect to access less than once per year.

The math on Deep Archive is striking: 10 TB stored for a year costs roughly $120. The same 10 TB in S3 Standard costs $2,830. That gap is why getting archive classification right has such high ROI.

S3 Express One Zone

S3 Express One Zone is the performance tier: single-digit millisecond latency, up to 10x faster than S3 Standard, and 50% lower request costs. It's purpose-built for latency-sensitive workloads where speed matters more than geographic redundancy.

The catch: it's single-AZ only, stored in a directory bucket (not a general purpose bucket), and available only in specific Availability Zones within select regions. Data is lost if the AZ is destroyed.

It also supports the RenameObject API, which no other S3 storage class does - a useful feature for workflows that need atomic rename operations.

Right for: ML training data being fed to compute in the same AZ, real-time analytics, gaming leaderboards, applications where latency is the primary constraint and you can tolerate the AZ-level durability trade-off.

Note: exact per-GB storage pricing for S3 Express One Zone wasn't captured in our research. Verify current figures directly from the S3 pricing page before making storage class decisions.

Storage class selection drives the biggest cost impact, but request charges are the line item that makes real bills diverge from a back-of-envelope storage calculation.

Request Costs - The Charge Most People Forget

Every S3 API call has a price. This includes SDK calls, CLI operations, and browsing the S3 console - which generates GET and LIST requests at the exact same rates as programmatic access.

The key rates for S3 Standard (us-east-1):

| Request Type | Price |
|---|---|
| PUT, COPY, POST, LIST | $0.005 per 1,000 requests |
| GET, SELECT, and all other requests | $0.0004 per 1,000 requests |
| DELETE, CANCEL | Free |
| Lifecycle Transition to Standard-IA | $0.01 per 1,000 requests |
| Lifecycle Transition to Glacier Flexible Retrieval | $0.05 per 1,000 requests |
| Lifecycle Transition to Glacier Deep Archive | $0.05 per 1,000 requests |

LIST requests are always charged at S3 Standard PUT/COPY/POST rates regardless of which storage class is being listed. It's a detail that catches people who assume listing a Glacier bucket is cheaper than listing Standard.

PUT vs GET Rates (and Why the Difference Matters)

Writes cost 12.5x more than reads: $0.005 per 1,000 PUT requests vs $0.0004 per 1,000 GET requests. For most workloads this is fine - you read more than you write, so your GET total dominates but at a lower rate.

The workloads where this reversal hurts: IoT telemetry, application logging, real-time event pipelines. An IoT platform writing 50M sensor readings per month pays $250 in PUT charges alone. If that same system had 50M GET requests reading back that data, it would only pay $20. When you're building write-heavy systems, request costs deserve explicit attention in your cost model.

At 10 million PUT requests: $50. At 100 million PUT requests: $500. These numbers aren't alarming for large-scale production systems, but they're invisible until they show up on a bill.
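The asymmetry is easy to sanity-check with the same IoT example as above (us-east-1 Standard rates):

```typescript
// PUT vs GET request charges for a write-heavy workload.
const putRate = 0.005 / 1000;   // $ per PUT/COPY/POST/LIST request
const getRate = 0.0004 / 1000;  // $ per GET request

const monthlyRequests = 50_000_000;  // 50M sensor writes/month
console.log((monthlyRequests * putRate).toFixed(0)); // 250 -> $250 in PUT charges
console.log((monthlyRequests * getRate).toFixed(0)); // 20  -> $20 if these were reads
```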

Lifecycle Transition Request Charges

Moving objects between storage classes via lifecycle rules isn't free from a request standpoint. Each transition generates a per-request charge:

  • Transition to Standard-IA: $0.01 per 1,000 requests (2x the Standard PUT rate)
  • Transition to Glacier Flexible Retrieval or Deep Archive: $0.05 per 1,000 requests (10x the Standard PUT rate)

For buckets with millions of small objects, lifecycle transition request costs can exceed the storage savings in the first month. AWS changed the default behavior in September 2024: objects under 128 KB are no longer transitioned by lifecycle rules by default. This prevents the common trap where transitioning large numbers of small objects generates more in transition charges than the storage cost difference ever justifies.

The transition charges don't include data retrieval fees - lifecycle transitions happen server-side without pulling data. But the per-request ingestion charges at the destination class rates still apply.
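A hedged payback sketch (hypothetical numbers, decimal units, the 40 KB/object metadata overhead ignored): 5 million 1 MB objects moving from Standard to Glacier Flexible Retrieval.

```typescript
// Does a lifecycle transition pay for itself, and how fast?
const objectsMoved = 5_000_000;
const totalGb = objectsMoved * 1 / 1000;              // 1 MB each -> 5,000 GB
const transitionCost = (objectsMoved / 1000) * 0.05;  // one-time transition requests
const monthlySavings = totalGb * (0.023 - 0.0036);    // Standard -> Glacier Flexible

console.log(transitionCost.toFixed(0));                    // 250
console.log((transitionCost / monthlySavings).toFixed(1)); // ~2.6 months to pay back
```

With large objects the payback is quick. Shrink the objects to 100 KB each and the same 5 million transitions still cost $250 up front while the monthly savings drop to under $10 - which is exactly why the September 2024 small-object default exists.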

Requests are significant, but data transfer costs are what truly blindside most teams, especially when serving content to users outside AWS.

Data Transfer - Where S3 Costs Really Add Up

The most common source of unexpected S3 bills. The rules are simple in principle, confusing in practice.

What is genuinely free:

  • Data transferred IN from the internet (uploads)
  • Data transferred OUT to the internet - first 100 GB/month free (aggregated across all AWS services and regions, except China and GovCloud)
  • Data transferred between S3 buckets in the same AWS region
  • Data transferred from S3 to any AWS service within the same region (including different accounts)
  • Data transferred from S3 to Amazon CloudFront - always free, no volume cap

What costs money:

| Transfer Type | Rate |
|---|---|
| Internet egress (next 10 TB/month after free tier) | $0.09/GB |
| Internet egress (next 40 TB/month) | $0.085/GB |
| Internet egress (next 100 TB/month) | $0.07/GB |
| Internet egress (over 150 TB/month) | Contact AWS |
| Between US regions (e.g., us-east-1 to us-east-2) | $0.01/GB out |
| From US to most other regions | $0.02/GB out |
| S3 Multi-Region Access Points routing | $0.0033/GB routed |

What's Free and What Isn't

A quick reference table for the scenarios that come up most often:

| Transfer Scenario | Cost |
|---|---|
| Upload from internet to S3 | Free |
| Download from S3 to internet (first 100 GB/month) | Free |
| Download from S3 to internet (beyond 100 GB/month) | $0.09/GB |
| S3 to CloudFront | Free |
| Between S3 buckets (same region) | Free |
| EC2 to S3 (same region) | Free |
| EC2 to S3 via NAT Gateway | Free for S3 transfer, but NAT charges $0.045/GB processing |
| S3 to EC2 (different region) | $0.01-0.02/GB depending on regions |
| S3 Transfer Acceleration | Additional cost on top of standard rates |

Serving Public Content - CloudFront vs Direct S3

S3-to-CloudFront transfer is always free. This is the architectural escape hatch for egress costs. If you serve any public content from S3 that exceeds 100 GB/month, putting CloudFront in front is almost always the right call.

The math for 1 TB/month of public content:

  • Direct S3: first 100 GB free, remaining 924 GB x $0.09 = ~$83 in S3 egress
  • Via CloudFront (90% cache hit rate): only ~100 GB is fetched from S3 on cache misses, and S3-to-CloudFront transfer is free, so S3 egress = $0. CloudFront serves the full 1 TB to viewers at ~$0.085/GB = ~$87, plus CloudFront request charges.

That's roughly cost-neutral at this scale, but the cache hit rate matters enormously. At 95%+ cache hit rate (common for static assets), CloudFront becomes meaningfully cheaper than direct S3. For reference, the CloudFront pricing guide covers CloudFront's own egress rates and how to model the full CDN cost.
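One way to model the break-even (a simplification: single-rate approximations, request charges and CloudFront's own free tier ignored; this variant charges CloudFront egress on every byte served, including cache misses):

```typescript
// Direct S3 egress vs CloudFront-fronted egress for public content (sketch).
function directS3Egress(gb: number): number {
  return Math.max(0, gb - 100) * 0.09;  // first 100 GB/month free
}
function cloudFrontEgress(gb: number): number {
  return gb * 0.085;  // origin fetches from S3 are free; CDN serves all bytes
}

console.log(directS3Egress(1024).toFixed(0));     // 83
console.log(cloudFrontEgress(1024).toFixed(0));   // 87
console.log(directS3Egress(10_000).toFixed(0));   // 891
console.log(cloudFrontEgress(10_000).toFixed(0)); // 850 -> CDN pulls ahead at volume
```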

CloudFront also reduces S3 GET requests by serving cached objects. A video hosting platform documented cutting S3 GET requests by ~50% through CloudFront cache tuning - which translates directly to lower S3 request charges.

The CloudFront pricing calculator lets you model the full CDN cost and compare it against direct S3 egress for your specific volume.

Inter-Region Transfer Costs

Cross-region replication doubles your storage cost and adds $0.01-0.02/GB in inter-region transfer fees, depending on the destination. The math on 1 TB replicated to 2 additional US regions:

  • 3x storage cost (3 copies)
  • ~$20/month in inter-region transfer fees (1,000 GB x $0.01/GB x 2 destinations)

This adds up quickly for petabyte-scale data. Before enabling CRR, verify whether Same-Region Replication (SRR) meets your requirements - SRR doesn't incur inter-region transfer charges, though you're still paying for duplicated storage.

Understanding why bills are high is useful. But what most people actually want are the specific scenarios that send S3 bills to unexpected places.

The Hidden Costs That Cause Surprise Bills

S3 has a reputation for being cheap and simple. That reputation breaks down in five specific situations.

The Empty Bucket Problem (Unauthorized Requests)

A publicly discoverable S3 bucket can receive GET and LIST requests from bots, crawlers, and automated security scanners - none of which you authorized. Successful requests against a publicly readable bucket incur request charges at the standard rates. At $0.0004 per 1,000 GET requests, it takes a lot of automated traffic to rack up a meaningful bill, but at millions of requests per day, it happens. (Since May 2024, AWS no longer bills for unauthorized requests that return HTTP 403 from outside your account - the real exposure is buckets that actually permit public access.)

The fix is straightforward: enable Block Public Access at the account level, not just the bucket level. This prevents your bucket from being accessible to unauthenticated requests even if bucket policy configuration is imperfect. For organizations with multiple accounts, enforcing Block Public Access via AWS Organizations SCPs ensures it can't be disabled at the bucket level.

Enable S3 access logging to identify unexpected request sources. If your logs show heavy traffic from IP ranges you don't recognize, you're dealing with this scenario.

NAT Gateway - The Invisible S3 Cost Multiplier

This is the single most impactful hidden cost for applications running workloads in private subnets that read and write S3 frequently.

When EC2 instances in a private subnet access S3 through a NAT Gateway, every byte processed incurs NAT Gateway data processing charges ($0.045/GB) before S3 even enters the picture. At 1 TB/month of S3 traffic routed through NAT Gateway, that's $45/month in NAT charges on top of any S3 transfer costs.

The fix: S3 Gateway VPC Endpoints are free. They let EC2 instances access S3 directly without routing through a NAT Gateway, eliminating the $0.045/GB processing fee entirely. Configuration is a one-time change: create a Gateway Endpoint for S3 in your VPC and update route tables for private subnets.
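In CDK, the fix is a few lines (a sketch assuming a fresh stack; `S3EndpointStack`, `AppVpc`, and the other construct IDs are placeholder names):

```typescript
import { App, Stack } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const app = new App();
const stack = new Stack(app, 'S3EndpointStack');

// In practice you'd likely look up an existing VPC instead of creating one.
const vpc = new ec2.Vpc(stack, 'AppVpc', { maxAzs: 2 });

// Free Gateway Endpoint: S3 traffic from private subnets bypasses the NAT
// Gateway, eliminating the $0.045/GB processing fee. CDK updates the route
// tables of the selected subnets automatically.
vpc.addGatewayEndpoint('S3Endpoint', {
  service: ec2.GatewayVpcEndpointAwsService.S3,
});
```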

ML training jobs, data pipeline workers, and backup scripts running in private subnets get the most benefit from this change. See the NAT Gateway pricing guide for 6 strategies to reduce NAT costs across the board, not just for S3 access.

For VPC architecture and full NAT Gateway cost analysis, the VPC pricing guide covers the complete picture.

Glacier Retrieval Spikes

Glacier Flexible Retrieval and Glacier Deep Archive charge per-GB retrieval fees when you restore objects. Bulk retrievals from Glacier Flexible are free - but Standard and Expedited retrievals are not.

A single unplanned restore of a large archive can add significant cost to a bill:

  • Expedited retrieval from Glacier Flexible: charged per-GB plus per-request
  • Standard retrieval from Glacier Flexible: charged per-GB plus per-request
  • Deep Archive Standard retrieval: per-GB retrieval fee for 10+ TB of video masters adds up fast

Provisioned Capacity Units ($100/unit/month) guarantee Expedited retrieval availability from Glacier Flexible - worth considering only if you have unpredictable high-volume urgent restore requirements.

The mitigation: use Glacier Instant Retrieval for data you might need suddenly (millisecond retrieval, no surprise restore fees). Reserve Glacier Flexible and Deep Archive for data where you can plan retrievals in advance and deliberately select the Bulk tier.

Orphaned Multipart Uploads

Large file uploads use S3's multipart upload API, which splits files into chunks uploaded in parallel. If an upload fails mid-way or an application abandons it, the incomplete parts stay in S3 and are charged as stored bytes - indefinitely, unless you explicitly clean them up.

In accounts with applications that upload large files regularly, orphaned multipart uploads accumulate silently. S3 Storage Lens can identify buckets where this is happening. The fix is a lifecycle rule:

abortIncompleteMultipartUploadAfter: 7 days

Seven days gives in-progress uploads time to complete while cleaning up genuinely abandoned uploads before they compound. I'll show the CDK implementation in the IaC section below.

Minimum Storage Duration Charges

Three storage classes have minimum storage durations that apply even when you delete objects early:

  • Standard-IA and One Zone-IA: 30-day minimum. Delete at day 1, you're billed for 30 days.
  • Glacier Instant Retrieval and Glacier Flexible Retrieval: 90-day minimum
  • Glacier Deep Archive: 180-day minimum

The common trap: automated cleanup scripts that delete objects from Standard-IA before 30 days have elapsed. The delete operation itself is free, but you're paying for the remaining duration anyway, and the storage savings you expected never materialize.

The rule of thumb: only use Standard-IA or Glacier classes for data you're confident will remain stored for at least the minimum duration. Short-lived objects should stay in S3 Standard.

Understanding why bills are high is the foundation. Now let's put real numbers to common workloads.

Real-World S3 Cost Scenarios

These scenarios answer the questions that actually drive people to search for S3 pricing: what will my specific workload actually cost?

Static Website (30,000 Pageviews/Month)

A small static site with 5 GB of HTML, CSS, JavaScript, and images:

| Cost Component | Calculation | Monthly Cost |
|---|---|---|
| Storage (5 GB, S3 Standard) | 5 GB x $0.023 | $0.12 |
| GET requests (~120,000/month) | 120,000 / 1,000 x $0.0004 | $0.05 |
| Egress (~10 GB to internet) | Within 100 GB free tier | $0.00 |
| Total without CloudFront | | ~$0.17/month |

With CloudFront in front: CloudFront costs approximately $0.85-1.50/month at this scale, but you get better latency, HTTPS handling, caching, and global edge distribution. The S3 egress cost drops to zero (free S3-to-CloudFront transfer), and you add CloudFront's own charges.

At this scale, the direct S3 approach is cheaper by pure dollar math. Once you cross 100 GB/month in egress, the calculation flips. The S3 pricing calculator can model the break-even point for your specific traffic volume.

SaaS App with User File Uploads (10,000 Users, Avg 50 MB Each)

500 GB of user files, assuming active access for recent uploads and aging access for older ones:

| Cost Component | Calculation | Monthly Cost |
|---|---|---|
| Storage (500 GB, S3 Standard) | 500 GB x $0.023 | $11.50 |
| PUT requests (~50,000/month) | 50,000 / 1,000 x $0.005 | $0.25 |
| GET requests (~200,000/month) | 200,000 / 1,000 x $0.0004 | $0.08 |
| Egress (~100 GB, at/near free tier) | ~0 GB over free tier | ~$0.00 |
| Total | | ~$12-13/month |

At 10x scale (100,000 users, 5 TB of files), storage becomes dominant. At that point, adding a lifecycle rule to transition files not accessed in 90 days to Standard-IA saves roughly 46% on aging data. For files with genuinely unpredictable access patterns, S3 Intelligent-Tiering becomes worth evaluating as the default storage class. Run your own numbers through the S3 pricing calculator to model the break-even between Standard and Intelligent-Tiering at your scale.

Application Log Retention (90-Day Policy)

Pattern: logs arrive daily, retained 90 days for audit compliance, then deleted.

Without lifecycle policy: 100 GB of daily new logs x 90 days accumulation = 9 TB stored at $0.023/GB = $207/month.

With lifecycle policy (Standard for 30 days, Standard-IA for days 30-90, delete at day 90). At steady state, the lifecycle means you hold approximately 9 TB total across both tiers simultaneously:

| Storage Tier | Calculation | Cost |
|---|---|---|
| Days 0-30 in Standard (3 TB average) | 3 TB x $0.023 | $69 |
| Days 30-90 in Standard-IA (6 TB) | 6 TB x $0.0125 | $75 |
| Lifecycle transition charges | ~180 transition requests | ~$0 |
| Total with lifecycle | | ~$144/month |

That's a 30% reduction with a single lifecycle rule. If you can tolerate the Glacier retrieval time (audit requests come rarely), moving logs to Glacier Flexible at day 90 instead of deleting them cuts the 30-90 day storage cost from $75 to ~$22, with free Bulk retrieval when auditors actually need something.
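The steady-state arithmetic, as a quick script (decimal GB, transition request charges ignored):

```typescript
// Steady-state monthly cost of the 90-day log lifecycle.
const dailyGb = 100;
const standardGb = dailyGb * 30;  // days 0-30 live in Standard: 3,000 GB
const iaGb = dailyGb * 60;        // days 30-90 live in Standard-IA: 6,000 GB

const withoutLifecycle = dailyGb * 90 * 0.023;             // everything in Standard
const withLifecycle = standardGb * 0.023 + iaGb * 0.0125;  // two-tier waterfall

console.log(withoutLifecycle.toFixed(0)); // 207
console.log(withLifecycle.toFixed(0));    // 144
```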

ML Training Dataset Archive (10 TB, Accessed Quarterly)

Storage costs at 10 TB across relevant classes:

| Storage Class | Monthly Storage Cost | Notes |
|---|---|---|
| S3 Standard | $235.52/month | 10 TB x $0.023/GB |
| S3 Glacier Instant Retrieval | $40.96/month | 10 TB x $0.004/GB - millisecond retrieval |
| S3 Glacier Flexible Retrieval (Bulk) | $36.86/month | 10 TB x $0.0036/GB - free Bulk retrieval |
| S3 Glacier Deep Archive | ~$10.13/month | 10 TB x ~$0.00099/GB - 9-48hr restore |

If your ML training runs are scheduled (you know 48-72 hours in advance when you need the data), Glacier Flexible Retrieval with Bulk tier retrieval costs $36.86/month vs $235.52/month for Standard. That's an 84% savings for 10 TB that you access four times per year, at the cost of planning your restores in advance.

If you need spontaneous access with no pre-warming, Glacier Instant Retrieval at $40.96/month is still an 83% saving over Standard. Use the S3 pricing calculator to compare archive class costs for your specific dataset size and retrieval frequency.
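The comparison in the table is just rate x volume; here it is as a loop (binary TB to match the table; the Deep Archive figure prints as $10.14 rather than the table's $10.13 because the ~$0.00099 rate is itself rounded):

```typescript
// Monthly storage cost for a 10 TB dataset across the relevant classes.
const datasetGb = 10 * 1024;  // 10 TiB = 10,240 GB, matching the table
const classRates: Array<[string, number]> = [
  ['S3 Standard', 0.023],
  ['Glacier Instant Retrieval', 0.004],
  ['Glacier Flexible Retrieval', 0.0036],
  ['Glacier Deep Archive', 0.00099],  // approximate published rate
];
for (const [cls, rate] of classRates) {
  console.log(`${cls}: $${(datasetGb * rate).toFixed(2)}/month`);
}
```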

With real numbers on the table, the next step is systematically reducing your S3 bill.

How to Reduce Your S3 Bill

The right approach to S3 cost optimization is to get visibility first, then act on what you find. Blindly applying optimizations without understanding your actual access patterns is how you end up paying more (wrong storage class, early deletion charges, lifecycle transition costs exceeding savings).

A video hosting platform with S3 as 40% of their infrastructure cost achieved a 70% reduction in their six-figure annual S3 bill - but they started with S3 access logging and Amazon Athena queries to understand where costs were actually coming from. They found that 88% of their S3 costs came from a small fraction of files. You can't optimize what you can't see. AWS also maintains an S3 cost optimization guide with additional best practices.

Step 1 - Get Visibility First (Before You Optimize Anything)

Three tools that give you what you need:

S3 Storage Lens (free tier available): organization-wide dashboard for storage usage, activity trends, and cost optimization recommendations. The free tier gives you 28 metrics with 14-day retention. Advanced tier adds 15-month retention, CloudWatch publishing, and prefix-level aggregation - worth it at multi-TB scale where prefix-level attribution matters.

S3 Storage Class Analysis: monitors access patterns on specific buckets over a configurable time window and tells you when objects are candidates for transition to Standard-IA. Run it for 30+ days before making transition decisions - it needs time to establish baselines.

Cost Allocation Tags: tag S3 buckets with Environment, Team, Project, and CostCentre. Activate tags in the AWS Billing Console. Use Cost Explorer to filter S3 spend by business unit. Without tags, your S3 line in Cost Explorer is an undifferentiated blob. For multi-account setups, the AWS cloud foundation guide covers how to structure S3 cost attribution across organizational units.

AWS Cost and Usage Report (CUR) provides per-request, per-storage-class breakdowns if you need granular attribution beyond Cost Explorer.

Step 2 - Set Lifecycle Policies

Lifecycle policies automate storage class transitions as data ages. For most teams, this is the highest-impact configuration change with the least ongoing effort.

The standard waterfall: data can only move from hot to cold, never reverse. A typical log retention lifecycle:

  • Days 0-30: S3 Standard
  • Days 30-90: S3 Standard-IA
  • Day 90+: S3 Glacier Flexible Retrieval or Glacier Deep Archive
  • Day 365+: Delete (expiration action)

Remember the September 2024 change: objects under 128 KB are skipped by lifecycle transitions by default. This is intentional - it prevents transition costs from exceeding the storage savings for small files. For buckets with many small objects, this default behavior saves you from a cost trap.

Lifecycle transitions don't incur data retrieval fees, but per-request ingestion charges apply at the destination class rates.

Step 3 - Route Public Traffic Through CloudFront

S3-to-CloudFront transfer is free. For any bucket serving public content where egress exceeds 100 GB/month, CloudFront is almost always the correct architectural choice. It eliminates per-GB S3 egress charges for cached content and reduces S3 GET requests through caching.

The video platform case study achieved ~50% reduction in S3 GET and retrieval requests through CloudFront cache tuning alone - which cut both request charges and Glacier retrieval fees.

Step 4 - Use VPC Endpoints for Internal S3 Access

S3 Gateway VPC Endpoints are free. Creating one in your VPC eliminates the NAT Gateway processing fee ($0.045/GB) for all EC2-to-S3 traffic from private subnets.

If your application processes 1 TB/month through NAT Gateway to reach S3, that's $45/month saved with a one-time route table configuration. For ML training jobs, data pipelines, and backup processes running in private subnets, this is usually the fastest payback optimization in the VPC cost stack.

Step 5 - Clean Up Unnecessary Storage

Four cleanup actions with immediate impact:

  1. Abort incomplete multipart uploads - lifecycle rule with 7-day abort window. Silently accumulated parts are charged as stored bytes.
  2. Expire noncurrent object versions - versioned buckets accumulate old versions indefinitely unless you set noncurrentVersionExpiration. Set it to 90 days as a starting point.
  3. Delete markers expiration - versioned buckets also accumulate delete markers. These are small but add up at scale.
  4. Storage Class Analysis before acting - run it for 30+ days to establish access baselines before committing to any storage class transition strategy.
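To see why item 2 matters, here's a hypothetical accumulation model for noncurrent versions (assuming, for illustration, that overwritten data sits in S3 Standard with no expiration rule - the function and its inputs are made up for this sketch):

```typescript
// Sketch: cumulative cost of noncurrent versions in a versioned bucket
// when no noncurrentVersionExpiration rule is set.
const STANDARD_PER_GB = 0.023; // S3 Standard, us-east-1, $/GB/month

function noncurrentVersionCost(
  overwrittenGbPerMonth: number, // GB of objects overwritten each month
  months: number                 // how long old versions accumulate
): number {
  let totalGbMonths = 0;
  for (let m = 1; m <= months; m++) {
    // Versions created in month m are billed for (months - m + 1) months
    totalGbMonths += overwrittenGbPerMonth * (months - m + 1);
  }
  return totalGbMonths * STANDARD_PER_GB;
}

// 50 GB overwritten monthly, versions kept for a year:
console.log(noncurrentVersionCost(50, 12).toFixed(2));
```

The cost grows quadratically with time because every month adds a new layer of versions that then bills every month thereafter - which is why a `noncurrentVersionExpiration` rule pays off.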

These optimizations work best when codified in Infrastructure as Code from the start, rather than applied reactively after your bill arrives.

Implementing Lifecycle Policies with AWS CDK

CDK's Bucket L2 construct supports lifecycle rules and Intelligent-Tiering configuration natively. Defining these in code ensures they're applied at bucket creation, not added as a manual afterthought when a bill arrives.

For teams new to S3 buckets in CDK, the how to create an Amazon S3 bucket with AWS CDK guide covers the foundational setup before adding lifecycle rules.

The lifecycle waterfall described above maps directly onto declarative CDK lifecycle rules.

Cost-Optimized Bucket (Multi-Tier Lifecycle)

This pattern handles a versioned bucket with progressive storage class transitions, noncurrent version management, and multipart upload cleanup:

import * as s3 from 'aws-cdk-lib/aws-s3';
import { Duration } from 'aws-cdk-lib';

// Cost-optimized S3 bucket with lifecycle transitions
// (runs inside a Stack or Construct scope, hence `this`)
const bucket = new s3.Bucket(this, 'CostOptimizedBucket', {
  versioned: true,
  encryption: s3.BucketEncryption.S3_MANAGED,
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
  enforceSSL: true,

  lifecycleRules: [
    {
      // Transition current version objects through storage tiers
      enabled: true,
      transitions: [
        {
          storageClass: s3.StorageClass.INFREQUENT_ACCESS,
          transitionAfter: Duration.days(30), // ~46% storage savings vs Standard
        },
        {
          storageClass: s3.StorageClass.GLACIER_INSTANT_RETRIEVAL,
          transitionAfter: Duration.days(90), // ~83% savings, millisecond retrieval
        },
        {
          storageClass: s3.StorageClass.DEEP_ARCHIVE,
          transitionAfter: Duration.days(365), // ~96% savings, 9-48hr restore
        },
      ],
    },
    {
      // Clean up noncurrent versions to avoid unbounded versioning costs
      enabled: true,
      noncurrentVersionExpiration: Duration.days(90),
      noncurrentVersionTransitions: [
        {
          storageClass: s3.StorageClass.INFREQUENT_ACCESS,
          transitionAfter: Duration.days(30),
        },
      ],
    },
    {
      // Delete incomplete multipart uploads (common hidden cost)
      enabled: true,
      abortIncompleteMultipartUploadAfter: Duration.days(7),
    },
  ],
});

The three lifecycle rules each target a different cost driver: the first manages active data transitions, the second prevents noncurrent version accumulation (a silent storage cost with versioned buckets), and the third cleans up orphaned multipart uploads.

All available StorageClass constants in CDK:

import { aws_s3 as s3 } from 'aws-cdk-lib';

s3.StorageClass.INFREQUENT_ACCESS          // S3 Standard-IA
s3.StorageClass.ONE_ZONE_INFREQUENT_ACCESS // S3 One Zone-IA
s3.StorageClass.INTELLIGENT_TIERING        // S3 Intelligent-Tiering
s3.StorageClass.GLACIER                    // S3 Glacier Flexible Retrieval
s3.StorageClass.GLACIER_INSTANT_RETRIEVAL  // S3 Glacier Instant Retrieval
s3.StorageClass.DEEP_ARCHIVE               // S3 Glacier Deep Archive

Intelligent-Tiering Bucket Configuration

For data with unpredictable access patterns, Intelligent-Tiering as the default class removes the guesswork entirely:

// Bucket with S3 Intelligent-Tiering for unpredictable access patterns
const intelligentBucket = new s3.Bucket(this, 'IntelligentTieringBucket', {
  versioned: false,
  encryption: s3.BucketEncryption.S3_MANAGED,
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,

  intelligentTieringConfigurations: [
    {
      name: 'ArchiveConfig',
      // Activate asynchronous archive tiers (optional - only if acceptable)
      archiveAccessTierTime: Duration.days(90),    // Archive Access tier
      deepArchiveAccessTierTime: Duration.days(180), // Deep Archive Access tier
    },
  ],

  lifecycleRules: [
    {
      // Move objects to Intelligent-Tiering immediately on upload
      enabled: true,
      transitions: [
        {
          storageClass: s3.StorageClass.INTELLIGENT_TIERING,
          transitionAfter: Duration.days(0), // Immediate transition
        },
      ],
    },
    {
      // Remove incomplete multipart uploads
      enabled: true,
      abortIncompleteMultipartUploadAfter: Duration.days(7),
    },
  ],
});

One thing to keep in mind with this configuration: objects under 128 KB stay in the Frequent Access tier regardless of access patterns. That's by design, not a misconfiguration - the monitoring overhead doesn't make sense for small objects.
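For objects that do qualify, a rough break-even sketch for the monitoring fee, using the us-east-1 rates quoted in this guide (the idle fraction is an assumption you'd estimate from Storage Class Analysis; the function is illustrative):

```typescript
// Sketch: Intelligent-Tiering monitoring fee vs. auto-tiering savings.
// Assumes objects >= 128 KB (smaller objects are never monitored).
const MONITORING_PER_1K_OBJECTS = 0.0025; // $/1,000 objects/month
const FREQUENT_PER_GB = 0.023;            // same rate as S3 Standard
const INFREQUENT_PER_GB = 0.0125;         // IT Infrequent Access tier

function intelligentTieringNet(
  objectCount: number,
  totalGb: number,
  idleFraction: number // share of data idle 30+ days (your estimate)
): number {
  const monitoring = (objectCount / 1000) * MONITORING_PER_1K_OBJECTS;
  const savings =
    totalGb * idleFraction * (FREQUENT_PER_GB - INFREQUENT_PER_GB);
  return savings - monitoring; // positive => IT pays for itself
}

// 1M objects, 1 TB total, 60% of data idle:
console.log(intelligentTieringNet(1_000_000, 1000, 0.6).toFixed(2));
```

The pattern to notice: many tiny objects with little idle data can make the monitoring fee exceed the savings, while fewer, larger objects almost always come out ahead.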

For more patterns beyond S3, see the CDK best practices guide which covers project structure, testing, and deployment strategies.

If you're deploying CDK infrastructure, CloudBurn analyzes your lifecycle configurations and flags cost optimization opportunities directly in pull requests - before the bucket ever gets deployed.

Catch S3 Cost Issues in Code Review, Not on Your Bill

CloudBurn analyzes your AWS CDK and Terraform changes, showing S3 storage class and lifecycle cost estimates directly in pull requests. Fix expensive configurations when they take seconds to change, not weeks later.

New S3 Bucket Types and Pricing (2025-2026)

Beyond general purpose buckets, S3 added new purpose-built bucket types in 2024-2025. Two of them - S3 Tables and S3 Vectors - carry their own pricing models and are relevant if you're working with analytics or ML workloads.

General purpose buckets remain the default for most use cases. The new types optimize for specific performance or workload characteristics at different price points.

S3 Tables (Apache Iceberg Analytics)

S3 Tables is purpose-built for analytics workloads using Apache Iceberg tables. AWS claims 3x faster query performance and 10x higher transactions per second compared to general purpose buckets for Iceberg workloads.

Pricing in us-east-1:

| Dimension | Rate |
|---|---|
| Storage | $0.0265/GB/month (Standard, first 50 TB) |
| PUT requests | $0.005 per 1,000 |
| GET requests | $0.0004 per 1,000 |
| Object monitoring | $0.025 per 1,000 objects/month |
| Compaction (binpack) | $0.002 per 1,000 objects + $0.005/GB processed |

Storage is 15% more expensive than S3 Standard ($0.0265 vs $0.023). The object monitoring and compaction charges are the new additions. Table buckets themselves are free to create.
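To put the premium in dollars, a quick sketch using only the two storage rates above (monitoring and compaction fees excluded; `tablesStoragePremium` is a name invented for this example):

```typescript
// Sketch: monthly storage premium of S3 Tables over S3 Standard
// at the us-east-1 rates quoted in this guide.
const TABLES_PER_GB = 0.0265;   // S3 Tables Standard, $/GB/month
const STANDARD_PER_GB = 0.023;  // S3 Standard, $/GB/month

function tablesStoragePremium(gb: number): number {
  return gb * (TABLES_PER_GB - STANDARD_PER_GB);
}

// 10 TB (decimal) of Iceberg data:
console.log(tablesStoragePremium(10_000).toFixed(2));
```

Whether that premium is worth it comes down to the claimed query performance and transaction throughput gains for your Iceberg workload, plus the monitoring and compaction charges this sketch leaves out.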

Right for: data lakehouse workloads on Iceberg, analytics teams using Apache Spark or Trino querying structured data in S3, workloads where the performance improvement from the native Iceberg integration justifies the storage premium.

S3 Vectors (Semantic Search / RAG Workloads)

S3 Vectors is purpose-built for vector storage and similarity search - the storage layer for RAG (Retrieval-Augmented Generation) applications and semantic search. AWS claims up to 90% lower cost compared to alternative vector storage solutions.

Pricing dimensions: PUT (per logical GB uploaded), storage (per GB/month), and query (per API call plus per TB data processed). Vector bucket creation is free.

Right for: ML teams building RAG pipelines, semantic search applications, teams storing and querying embeddings for LLM applications.

Note: S3 Vectors is a newer service and full pricing tables weren't available in our research. Verify current pricing directly from aws.amazon.com/s3/pricing/ before planning any workload.

Regional Pricing Differences

Everything in this guide is priced for us-east-1 (N. Virginia) - the lowest-cost AWS region for S3. Other regions cost more, sometimes materially more.

General patterns:

| Region | S3 Standard ($/GB/month) | Egress to internet ($/GB, first 10 TB) | Relative to us-east-1 |
|---|---|---|---|
| us-east-1 (N. Virginia) | $0.023 | $0.09 | Baseline |
| us-west-2 (Oregon) | $0.023 | $0.09 | Same |
| eu-west-1 (Ireland) | $0.024 | $0.09 | ~4% higher storage |
| eu-central-1 (Frankfurt) | $0.0245 | $0.09 | ~7% higher storage |
| ap-southeast-1 (Singapore) | $0.025 | $0.12 | ~9% higher storage, 33% higher egress |
| ap-northeast-1 (Tokyo) | $0.025 | $0.114 | ~9% higher storage, 27% higher egress |

Verify current figures from aws.amazon.com/s3/pricing/ - regional rates change periodically.

For teams with data sovereignty requirements (for example, GDPR-driven policies that keep EU personal data in EU regions), the regional premium isn't optional. But for data without residency constraints - ML training datasets, cold archives, backup copies - us-east-1 is the cheapest home. At 10 TB and up, the Asia Pacific premium over us-east-1 compounds into a visible monthly line item.
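A quick sketch of that delta using the Standard rates from the table above (decimal GB for simplicity; the lookup table and function are illustrative):

```typescript
// Sketch: monthly Standard-storage cost difference between regions,
// using the rates from the regional table above.
const STANDARD_RATES: Record<string, number> = {
  'us-east-1': 0.023,
  'eu-central-1': 0.0245,
  'ap-southeast-1': 0.025,
};

function monthlyStorage(region: string, gb: number): number {
  return STANDARD_RATES[region] * gb;
}

const gb = 10_000; // 10 TB (decimal)
const delta =
  monthlyStorage('ap-southeast-1', gb) - monthlyStorage('us-east-1', gb);
console.log(delta.toFixed(2)); // Singapore premium per month
```

The same shape applies to Glacier-tier rates, which also vary by region; check the pricing page for the class you actually use.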

Management, Analytics, and Replication Costs

S3 charges for more than storage and transfer. Several management services add per-object or per-operation fees that compound at scale.

S3 Storage Lens: free tier covers 28 metrics with 14-day retention. Advanced tier unlocks 15-month retention, CloudWatch publishing, and prefix-level aggregation. Worth enabling the free tier immediately - it identifies multipart upload accumulation and storage class distribution at a glance.

S3 Inventory: generates CSV, ORC, or Parquet reports listing objects and their metadata. Charged per million objects listed. Useful for large buckets where you need a full inventory without paying for repeated LIST API calls.

S3 Batch Operations: automates operations across millions of objects at once. Charged at $0.25 per job plus $1 per million objects processed. A cost-effective way to run a one-time storage class migration or update object tags across an entire bucket.

S3 Metadata: charged at $0.30 per million metadata updates. Relevant for analytics workloads querying S3 metadata at scale.

Cross-Region Replication (CRR): doubles your storage costs (you pay for each copy) and adds $0.02/GB in transfer fees for every GB replicated from US regions. For 1 TB replicated to one additional region, that's $20/month in transfer fees on top of doubled storage. Same-Region Replication (SRR) incurs duplicated storage cost but no transfer fee. Before enabling CRR, verify whether SRR meets your requirements.
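A rough model of CRR's total monthly cost using the figures above (assuming both regions price Standard at the us-east-1 rate, which holds for us-east-1 to us-west-2; `crrMonthlyCost` is a name invented for this sketch):

```typescript
// Sketch: total monthly cost with Cross-Region Replication from a US
// region - storage billed in both regions plus per-GB transfer.
const STANDARD_PER_GB = 0.023;    // assumed rate in both regions
const CRR_TRANSFER_PER_GB = 0.02; // inter-region transfer, US regions

function crrMonthlyCost(
  storedGb: number,            // data kept in the bucket
  replicatedGbPerMonth: number // new data replicated this month
): number {
  const storage = storedGb * STANDARD_PER_GB * 2; // source + replica
  const transfer = replicatedGbPerMonth * CRR_TRANSFER_PER_GB;
  return storage + transfer;
}

// 1 TB stored, 1 TB newly replicated this month:
console.log(crrMonthlyCost(1000, 1000).toFixed(2));
```

Dropping the transfer term models Same-Region Replication: you still pay storage twice, but the $0.02/GB line disappears.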

These costs are individually small but aggregate meaningfully in accounts with thousands of buckets or automated workflows running Batch Operations regularly.

The question many teams are actively asking: is S3 even the right choice, or does a provider like Cloudflare R2 change the math?

S3 vs Cloudflare R2 vs Backblaze B2 (Honest Comparison)

This section exists because "cloudflare r2 vs s3" is a high-volume search query that none of the top 10 S3 pricing articles answer. I'll be direct about where S3 wins and where it doesn't.

The fundamental difference: R2 and Backblaze B2 charge no egress fees (or near-zero). S3 charges $0.09/GB. For egress-heavy workloads serving content directly to end users, this changes the total cost calculation significantly.

Side-by-Side Pricing Comparison

| Feature | Amazon S3 | Cloudflare R2 | Backblaze B2 |
|---|---|---|---|
| Storage cost/GB/month | $0.023 (Standard, us-east-1) | $0.015 | $0.006 |
| Egress to internet | $0.09/GB (after 100 GB free) | Free | Free (10 GB/day to non-CDN) |
| Egress to Cloudflare CDN | N/A | Free | Free |
| PUT requests | $0.005/1,000 | $4.50/million | $10/million ($0.01/1k) |
| GET requests | $0.0004/1,000 | $0.36/million | $4/million |
| Class A operations | - | $4.50/million | - |
| Class B operations | - | $0.36/million | - |
| Minimum object size | None | None | None |
| Minimum storage duration | None (Standard) | None | None |
| S3 API compatibility | Native | Yes (S3-compatible) | Yes (S3-compatible) |
| Free tier | $200 credits (new accounts) | 10 GB storage, 1M requests | 10 GB free forever |
| AWS ecosystem integration | Native | Limited | Limited |
| Lifecycle policies | Yes (full) | Limited | Basic |
| Intelligent-Tiering | Yes | No | No |
| Storage classes | 8 | 1 | 1 |
| Global CDN integration | Via CloudFront | Built-in (Cloudflare CDN) | Via Cloudflare and others |
| Compliance certifications | Extensive (SOC 2, ISO, HIPAA, FedRAMP) | Growing | Limited |

Prices sourced from official provider pages as of March 2026. Verify before making decisions - these change.

When to Choose Each Provider

Choose S3 when:

  • Your application already runs on AWS and needs native service integration (Lambda, CloudFront, Athena, SageMaker, EMR, Glue all work with S3 natively and at no transfer cost within the same region)
  • You need the full storage class ecosystem - lifecycle policies, Intelligent-Tiering, Glacier archive tiers
  • You require enterprise compliance certifications (SOC 2, HIPAA, FedRAMP, ISO 27001)
  • Your dominant cost is storage, not egress (data you store more than you serve)
  • You read/write data primarily within AWS (no egress charges for same-region access)
  • You need S3-specific features: Storage Lens, Batch Operations, Object Lambda, replication

Consider Cloudflare R2 when:

  • You serve large volumes of public content directly to end users and egress is your dominant cost
  • You're already using Cloudflare for CDN and DNS (R2 integrates natively)
  • You don't have strong AWS ecosystem dependencies
  • Your workload is simple object storage without the need for lifecycle tiers or Intelligent-Tiering
  • Storage pricing at $0.015/GB vs $0.023/GB matters (35% cheaper on storage)

Consider Backblaze B2 when:

  • Pure backup or archive with minimal infrastructure requirements
  • Budget is the primary constraint and your team can operate outside AWS
  • Storage at $0.006/GB is the primary driver (roughly a quarter of S3 Standard's $0.023)
  • Your read patterns are minimal (B2's GET request pricing is higher than R2)

The honest nuance: most teams already running on AWS will find that the S3 ecosystem value (native AWS service integration, no transfer costs within AWS, Glacier tiers, managed compliance) outweighs the egress savings from R2 or B2 for their specific workload. But for teams building egress-heavy content platforms - video streaming, image serving, large file distribution - the math should be run explicitly.

A 10 TB/month egress workload on S3 costs ~$900/month in egress alone. On R2, that's $0 in egress. At that scale, the egress savings alone could justify the migration cost.
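Here's that comparison as a sketch (decimal GB, and including S3's 100 GB/month free egress tier; the two functions are invented for this example):

```typescript
// Sketch: monthly internet egress cost, S3 vs Cloudflare R2,
// at the rates quoted in the comparison table above.
const S3_EGRESS_PER_GB = 0.09;
const S3_FREE_EGRESS_GB = 100; // first 100 GB/month are free

function s3EgressCost(gbPerMonth: number): number {
  return Math.max(0, gbPerMonth - S3_FREE_EGRESS_GB) * S3_EGRESS_PER_GB;
}

// R2 charges nothing for egress to the internet
const r2EgressCost = (_gbPerMonth: number): number => 0;

console.log(s3EgressCost(10_000).toFixed(2)); // 10 TB/month on S3
console.log(r2EgressCost(10_000));            // same traffic on R2
```

This only models egress; a full migration comparison would also weigh R2's higher per-request rates, the loss of AWS-native integrations, and the one-time transfer cost of moving the data out.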

Key Takeaways

Amazon S3 pricing is genuinely pay-as-you-go with no minimum fees. But "pay as you go" across 6 billing dimensions means the math is more complex than storage-only thinking suggests.

The highest-impact decisions:

  1. Storage class selection is the primary lever. The difference between S3 Standard and Glacier Deep Archive is 23x on storage cost. Even moving infrequently accessed data from Standard to Standard-IA cuts that component by 46%. Get the class right for each workload.

  2. Get visibility before optimizing. S3 Storage Lens, Cost Allocation Tags, and access logging tell you where costs are actually coming from. The video platform that saved 70% started with logging and Athena, not guesswork.

  3. The free category is larger than most teams realize. Ingress, same-region transfers, S3-to-CloudFront transfer, and the first 100 GB of monthly egress are all free. Structure your architecture around these free paths.

  4. Lifecycle policies eliminate passive cost accumulation. Without them, data sits in Standard indefinitely while lifecycle transitions could be cutting your storage cost by 40-80% automatically.

  5. Bill shock has specific causes. Unauthorized requests to public buckets, NAT Gateway processing fees, Glacier retrieval spikes, orphaned multipart uploads, and minimum duration charges - all avoidable with specific configuration changes.

Run your specific workload through the S3 pricing calculator to see the actual dollar impact of a storage class change before implementing it. Input your storage volume, request patterns, and egress estimate and compare across all eight storage classes.

If you're working with the broader AWS cost picture, the AWS cost optimization framework applies the same discipline across your full infrastructure spend.

Frequently Asked Questions

How much does Amazon S3 cost per month?
It depends on what you store and how you access it. A simple estimate: 100 GB of S3 Standard storage costs $2.30/month in storage, plus request charges. A realistic SaaS application with 100 GB of user files, moderate request volumes, and minimal egress typically pays $5-15/month. Use the S3 pricing calculator to model your specific usage across all storage classes.
What is the cheapest S3 storage class?
S3 Glacier Deep Archive at approximately $0.00099/GB/month. But cheapest only makes sense if the trade-offs fit your use case: 180-day minimum storage duration, 9-48 hour retrieval times, and per-GB retrieval fees. For data you genuinely won't touch for years, it's the right choice. For data accessed occasionally, Glacier Flexible Retrieval or Glacier Instant Retrieval may be a better fit.
Is data transfer into Amazon S3 free?
Yes. AWS does not charge for data uploaded to S3 from the internet or from other AWS services. The first 100 GB/month transferred out to the internet is also free (aggregated across all AWS services and regions, except China and GovCloud). Data transferred from S3 to CloudFront is always free with no volume cap.
Is S3 Intelligent-Tiering worth it?
For objects >= 128 KB with unpredictable or changing access patterns, yes. The monitoring fee ($0.0025 per 1,000 objects/month) is offset by automatic storage savings when objects go 30+ days without access - 40% savings at the Infrequent Access tier, 68% at Archive Instant Access. For objects under 128 KB, the monitoring fee provides no benefit since they never auto-tier. Also worth noting: unlike Standard-IA, Intelligent-Tiering has no retrieval fees.
How can I tell which S3 storage class my objects are in?
S3 Storage Lens provides organization-wide visibility including per-bucket storage class breakdown. The S3 console shows storage class per-object in the Objects list view. S3 Storage Class Analysis monitors access patterns and identifies when objects are candidates for transition to Standard-IA. For granular detail, the AWS Cost and Usage Report breaks down costs per storage class.
Does S3 charge for the AWS Management Console?
Yes. Every GET, LIST, and HEAD request generated by browsing the S3 console incurs the same charges as SDK and CLI requests. At standard rates this is negligible for small buckets. For buckets with millions of objects, a paginated LIST operation from the console is a real cost. S3 Storage Lens is a better tool than console browsing for understanding large buckets.
