AWS offers eight native cost estimation and management tools, most of them completely free. Yet many organizations use only a fraction of their capabilities, leaving significant optimization opportunities on the table.
In this guide, you'll learn how AWS Pricing Calculator, Cost Explorer, Budgets, Cost and Usage Report, Cost Anomaly Detection, Cost Optimization Hub, Compute Optimizer, and Migration Evaluator work together as an integrated ecosystem. I'll show you when to use each tool, how they share data, and how to automate cost controls with Infrastructure as Code.
Whether you're planning a new workload, monitoring existing spend, or hunting for optimization opportunities, understanding these tools is the foundation of effective cloud financial management.
Understanding the AWS Cost Management Ecosystem
Before diving into individual tools, it helps to see how AWS cost management fits together as a complete lifecycle. These eight tools aren't isolated utilities. They share data, complement each other's capabilities, and cover every phase of cost management from pre-deployment estimation through ongoing optimization.
The cost management lifecycle flows through distinct phases: Estimate (before you deploy), Monitor (as costs accrue), Alert (when thresholds are exceeded), Analyze (to understand patterns), and Optimize (to reduce waste). Different tools serve different phases, but they all pull from the same underlying billing data.
The Eight Core AWS Cost Tools
Here's what each tool does at a glance:
- AWS Pricing Calculator - Pre-deployment cost estimation for new workloads
- AWS Cost Explorer - Real-time cost analysis with 18-month forecasting
- AWS Budgets - Threshold alerts and automated cost control actions
- AWS Cost and Usage Report (CUR) - Most detailed billing data for custom analytics
- AWS Cost Anomaly Detection - ML-based detection of unexpected cost spikes
- AWS Cost Optimization Hub - Centralized recommendations across accounts and regions
- AWS Compute Optimizer - ML-powered rightsizing for compute resources
- Migration Evaluator - Free business case creation for cloud migrations
How AWS Cost Tools Work Together
Cost Explorer sits at the center of the ecosystem. Its data feeds Budgets for threshold monitoring and Anomaly Detection for ML-based spike identification, and it provides the historical baseline for forecasting. Cost Optimization Hub aggregates recommendations from Compute Optimizer and other sources into a single dashboard.
Cost allocation tags tie everything together. When you tag resources consistently, those tags appear in Cost Explorer, Budgets, CUR, and Cost Categories, enabling cost attribution across all tools.
Free vs Paid Features
Most core functionality is free. Cost Explorer's console UI, unlimited monitoring budgets, Anomaly Detection, Cost Optimization Hub, and Compute Optimizer all cost nothing to use. The paid features are modest: Cost Explorer API at $0.01 per request, hourly granularity at $0.01 per 1,000 records monthly, and action-enabled budgets at $0.10 per day after the first two.
CUR generation is free, but you pay for S3 storage and Athena queries ($5 per TB scanned) when analyzing the data.
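To put those rates in perspective, here's a back-of-envelope calculation in TypeScript. The helper name and the example volumes are mine; the Athena rate comes from above, and the S3 Standard rate is an assumption you should verify for your region.

```typescript
// Back-of-envelope monthly cost of analyzing CUR data in S3 with Athena.
// Rates: Athena $5/TB scanned (cited above); S3 Standard assumed at
// $0.023/GB-month (us-east-1) -- verify for your region.
const ATHENA_PER_TB_SCANNED = 5.0;
const S3_STANDARD_PER_GB_MONTH = 0.023;

function monthlyCurAnalysisCost(storedGb: number, tbScannedPerMonth: number): number {
  const storage = storedGb * S3_STANDARD_PER_GB_MONTH;
  const queries = tbScannedPerMonth * ATHENA_PER_TB_SCANNED;
  return storage + queries;
}

// 50 GB of Parquet CUR data, ~0.2 TB scanned across a month of queries:
console.log(monthlyCurAnalysisCost(50, 0.2).toFixed(2)); // "2.15"
```

Even with regular querying, the analysis layer typically costs a few dollars a month, far less than the savings it tends to surface.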
Pre-Deployment Cost Estimation with AWS Pricing Calculator
Estimating costs before deployment prevents surprises. AWS Pricing Calculator lets you model workloads and forecast monthly spend before committing any resources. The authenticated in-console version, generally available since May 2025, adds significant capabilities including discount modeling and historical usage import.
Workload Estimates vs Bill Estimates
The calculator supports two distinct modes. Workload estimates let you model specific applications or services. They're free and unlimited, available to all account types. This is what you'd use to estimate a new web application's EC2, RDS, and S3 costs.
Bill estimates model your entire consolidated bill, including the impact of changing Savings Plans or Reserved Instances. These require a management or standalone account and cost $2 each after 5 per month. Use bill estimates when evaluating commitment purchases or modeling organization-wide changes.
Key Features and Capabilities
The authenticated calculator offers three rate configuration options:
- Before discount rates - Standard On-Demand pricing
- After discount rates - With your negotiated pricing discounts applied
- After discounts and commitments - Including Savings Plans and Reserved Instances
You can import historical AWS usage from Cost Explorer as a baseline, then modify usage patterns to model future scenarios. This is particularly useful for modeling "what if" scenarios like adding Savings Plans coverage or changing regions.
Export your estimates to CSV, JSON, or share via unique public links for stakeholder review.
Pricing and Limitations
Workload estimates are completely free with no limits. Bill estimates give you 5 free per month, then $2 each.
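The bill-estimate pricing is simple enough to express directly. This tiny helper (the function name is mine) encodes the tiering described above:

```typescript
// Monthly charge for bill estimates: first 5 free, then $2 each.
function billEstimateCharge(estimatesThisMonth: number): number {
  return Math.max(0, estimatesThisMonth - 5) * 2;
}

console.log(billEstimateCharge(3));  // 0  -- within the free allowance
console.log(billEstimateCharge(12)); // 14 -- 7 billable estimates x $2
```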
Key limitations to keep in mind:
- Estimates are point-in-time; AWS prices can change
- Does not include taxes (add manually based on your jurisdiction)
- Does not include AWS Support charges
- Does not include third-party licensing fees from Marketplace
- Promotional credits aren't factored in by default
Practical Example: Estimating a Web Application
Let's walk through estimating a typical three-tier web application: EC2 instances behind an Application Load Balancer, an RDS database, and S3 for static assets.
Start by adding your EC2 instances. Specify the instance type, operating system, and expected utilization. Don't forget to include EBS storage volumes. A common mistake is underestimating EBS costs, especially for database workloads.
Add your RDS instance with the appropriate engine, instance class, and storage type. Include Multi-AZ if you need high availability.
For S3, estimate storage volume and request patterns. Data transfer is often underestimated. Include outbound data transfer to the internet and consider NAT Gateway costs if your application runs in private subnets.
Once you have a baseline, experiment with commitment modeling. Add a Compute Savings Plan and see how it impacts the monthly estimate. This helps justify commitment purchases to stakeholders.
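To make the arithmetic concrete, here's a sketch in TypeScript of the line items such an estimate sums. Every unit price below is an illustrative placeholder, not a quote; pull real rates from the Pricing Calculator itself.

```typescript
// Rough monthly estimate for the three-tier app above.
// ALL unit prices are illustrative placeholders, not real AWS rates.
interface LineItem { name: string; monthlyCost: number; }

function estimateWebApp(): LineItem[] {
  const HOURS = 730; // average hours per month
  return [
    { name: "2x EC2 m5.large (On-Demand)", monthlyCost: 2 * 0.096 * HOURS },
    { name: "EBS gp3 200 GB",              monthlyCost: 200 * 0.08 },
    { name: "ALB (base hourly)",           monthlyCost: 0.0225 * HOURS },
    { name: "RDS db.t3.medium Multi-AZ",   monthlyCost: 0.136 * HOURS },
    { name: "S3 100 GB + requests",        monthlyCost: 100 * 0.023 + 1.5 },
    { name: "Data transfer out 200 GB",    monthlyCost: 200 * 0.09 },
  ];
}

const items = estimateWebApp();
const total = items.reduce((sum, i) => sum + i.monthlyCost, 0);
console.log(total.toFixed(2)); // roughly $290-300/month at these placeholder rates
```

Notice how EBS, the load balancer, and data transfer together rival the EC2 compute itself; those are exactly the line items people forget.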
CloudBurn: Pre-Deployment Cost Estimation for IaC
CloudBurn focuses on helping you automatically estimate AWS costs before deploying new infrastructure. We integrate directly into your pull request workflow for Terraform and AWS CDK, so you know what to expect on your bill before changes hit production.
As part of this mission to improve pre-deployment cost visibility, we also built a suite of free AWS pricing calculators. Why? Because the standard AWS pricing pages are complex and not intuitive. Our calculators show all costs in a single overview with clear specifications and price model comparisons side-by-side:
- EC2 Pricing Calculator - Compare On-Demand, Spot, and Reserved pricing with smart instance recommendations
- Lambda Pricing Calculator - Serverless costs with Compute Savings Plans and Provisioned Concurrency modeling
- S3 Pricing Calculator - All storage classes compared with visual free tier tracking
- RDS Pricing Calculator - Every database engine with Single-AZ vs Multi-AZ and Reserved Instance options
- Fargate Pricing Calculator - Container costs with ARM/Graviton savings analysis
The full suite also covers Aurora, EBS, ECS, ElastiCache, API Gateway, and CodeBuild. Use them for quick single-service estimates, while AWS Pricing Calculator handles complex multi-service workloads with discount modeling.
Real-Time Cost Monitoring with AWS Cost Explorer
Once deployed, Cost Explorer becomes your primary tool for understanding actual spend. It provides historical analysis going back 13 months (or 38 months with enhanced settings) and can forecast up to 18 months into the future.
Cost Explorer data refreshes at least once every 24 hours. Current month costs typically appear within 24 hours of service usage.
Data Availability and Granularity Options
You have three granularity levels:
- Monthly - Default view, up to 13 months historical data
- Daily - Up to 13 months, useful for spotting day-over-day trends
- Hourly - Previous 14 days only, requires opt-in, incurs charges
Hourly granularity costs $0.01 per 1,000 usage records monthly. A single EC2 instance running continuously generates roughly 730 records a month, which works out to well under a cent. It's worth enabling for production workloads where you need to correlate cost spikes with specific events.
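The math scales linearly, so it stays cheap even for large fleets. A quick sketch (the one-record-per-hour assumption is an approximation for always-on resources):

```typescript
// Hourly-granularity cost: $0.01 per 1,000 usage records per month.
// Assumes each always-on resource emits roughly one record per hour (~730/month).
function hourlyGranularityCost(alwaysOnResources: number): number {
  const recordsPerMonth = alwaysOnResources * 730;
  return (recordsPerMonth / 1000) * 0.01;
}

console.log(hourlyGranularityCost(1).toFixed(4));   // "0.0073" -- single instance
console.log(hourlyGranularityCost(500).toFixed(2)); // "3.65"   -- mid-size fleet
```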
18-Month Forecasting with AI-Powered Explanations
In November 2025, AWS extended the forecast horizon to 18 months and added AI-powered explanations. The forecasting engine uses up to 38 months of historical data to improve accuracy, accounting for seasonal patterns and usage trends.
The AI-powered explanations feature (currently in preview) provides natural language summaries of what's driving your forecasts. Instead of just seeing a projected number, you get context like "Forecast increased due to seasonal pattern matching last year's Q4 usage spike."
You need approximately 5 weeks of usage data before Cost Explorer can generate meaningful forecasts. The default prediction interval is 80%, meaning actual spend will fall within that range 80% of the time.
Advanced Analysis Features
Cost Explorer's filtering and grouping capabilities let you slice costs by service, linked account, region, tag, or cost category. You can combine multiple filters and save views for recurring analysis.
Resource-level granularity is available for specific services like EC2, letting you see costs for individual instances. This requires enabling resource IDs in your Cost and Usage Report.
Export to CSV for offline analysis. The export includes up to 15 decimal places of precision, which matters when you're doing detailed allocations.
Pricing: Console vs API Access
The Cost Explorer console UI is completely free. The API costs $0.01 per paginated request. If you're building dashboards or automation, factor in API costs based on your expected query volume.
Proactive Cost Controls with AWS Budgets
Monitoring is reactive. Budgets let you set proactive thresholds and take action before costs spiral. You can configure alerts for both actual and forecasted spend, and even automate responses like applying restrictive IAM policies.
Budget Types and Methods
AWS Budgets supports six budget types:
- Cost budgets - Track total AWS spend
- Usage budgets - Track specific metrics like GB transferred or hours used
- RI utilization budgets - Alert when Reserved Instance utilization drops
- RI coverage budgets - Alert when coverage falls below target
- Savings Plans utilization budgets - Track SP utilization
- Savings Plans coverage budgets - Track SP coverage
Three methods control how the budget amount is calculated:
- Fixed - Same amount every period
- Planned - Different amounts per period (seasonal budgets)
- Auto-adjusting - Automatically set based on historical or forecasted spending
Auto-adjusting budgets are particularly useful for growing organizations where fixed thresholds constantly need updating.
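To see why auto-adjusting helps, consider a conceptual sketch. AWS derives the amount from a historical time range you choose; a trailing average approximates the idea. This is not the exact AWS algorithm, just an illustration of why the threshold tracks growth without manual edits:

```typescript
// Conceptual stand-in for an auto-adjusting budget baseline:
// average the last N months of spend. NOT the exact AWS algorithm.
function autoAdjustedBaseline(monthlySpend: number[], lookback: number = 6): number {
  const window = monthlySpend.slice(-lookback);
  return window.reduce((a, b) => a + b, 0) / window.length;
}

// Spend growing ~10% month over month: the baseline rises with it,
// so a fixed $4,000 budget would already be stale.
console.log(autoAdjustedBaseline([4000, 4400, 4840, 5324, 5856, 6442]).toFixed(0)); // "5144"
```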
Alert Configuration and Notification Channels
Set thresholds as percentages of your budget or absolute dollar amounts. You can alert on actual costs (already incurred) or forecasted costs (projected to exceed).
Notification channels include email (up to 10 addresses per alert), Amazon SNS for integration with other systems, and Amazon Q Developer for chat notifications. Each budget supports up to 10 notifications, so you can escalate at different thresholds.
Automated Budget Actions
This is where Budgets becomes powerful. When a threshold is exceeded, you can automatically:
- Apply IAM policies to users, groups, or roles (restrict provisioning)
- Apply Service Control Policies to OUs or accounts (organization-wide restrictions)
- Stop EC2 instances in the same account
- Stop RDS instances in the same account
Actions can execute automatically or require manual approval. For production environments, I recommend approval-based actions to prevent unintended service disruption.
Budget Pricing Model
Standard monitoring budgets are free and unlimited: create as many cost and usage budgets as you need at no charge.
Action-enabled budgets cost $0.10 per day per budget after the first 2 free. If you need automated governance actions, the cost is minimal compared to potential savings.
Budget reports, delivered via email on a schedule, cost $0.01 per report.
Detailed Cost Analysis with AWS Cost and Usage Report
When Cost Explorer's granularity isn't enough, the Cost and Usage Report provides the most detailed billing data AWS offers. CUR delivers comprehensive line-item data to S3, where you can analyze it with Athena, Redshift, or QuickSight.
CUR 2.0, released at re:Invent 2023, offers improved schema consistency and is the recommended format for new implementations.
CUR 2.0 Features and Customization
You control the granularity (hourly, daily, or monthly) and what data to include:
- Resource IDs - Individual resource identifiers for precise attribution
- Split cost allocation data - Container cost attribution for ECS/EKS workloads
- All identifiers - Maximum metadata inclusion
For file format, Parquet is strongly recommended over CSV. It's columnar, compressed, and typically 10-20% the size of equivalent CSV data. Athena queries run faster and cost less against Parquet.
Integration with Analytics Services
CUR data lands in S3, where it feeds your analytics stack: query it with Athena, load it into Redshift, or visualize it in QuickSight. AWS provides CloudFormation templates that automatically set up the Athena integration, including Glue crawlers and pre-built queries.
Storage Costs and Optimization
Report generation and delivery are free. You pay for:
- S3 storage - Based on data volume and retention period
- Athena queries - $5.00 per TB scanned
- Redshift - If loading into a cluster
To minimize costs, use Parquet format (80-90% storage reduction), consider daily instead of hourly granularity if you don't need sub-day analysis, and partition your data by month for efficient Athena queries.
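Because Athena bills per TB scanned, the format choice compounds: a smaller file is both cheaper to store and cheaper to query. A quick comparison, using an 85% reduction as the midpoint of the range above (the helper is illustrative):

```typescript
// Athena query cost for the same CUR dataset in CSV vs Parquet.
// Assumes ~85% size reduction (midpoint of the 80-90% range above).
function compareFormats(csvTb: number, fullScansPerMonth: number) {
  const parquetTb = csvTb * 0.15;
  const costPerTb = 5.0; // Athena, USD per TB scanned
  return {
    csvQueryCost: csvTb * fullScansPerMonth * costPerTb,
    parquetQueryCost: parquetTb * fullScansPerMonth * costPerTb,
  };
}

// 2 TB of CSV-equivalent data, scanned fully 10 times a month:
console.log(compareFormats(2, 10)); // CSV ~$100/month vs Parquet ~$15/month
```

And that understates the gap: Parquet's columnar layout means most queries scan only the columns they reference, not the full dataset.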
Catching Cost Spikes with AWS Cost Anomaly Detection
Budgets alert on thresholds you set. Cost Anomaly Detection uses machine learning to identify unusual spending patterns you might not have anticipated. It runs approximately 3 times per day and automatically adapts to your organization's growth.
The best part: it's completely free.
ML-Based Detection Mechanism
The service determines daily spend thresholds dynamically, accounting for organic growth and seasonal trends. You don't need to configure thresholds manually. The ML models learn your spending patterns and minimize false positives.
As your organization grows, detection automatically adjusts. New accounts, services, and tag values are evaluated as they appear.
Monitor Types: AWS Managed vs Customer Managed
AWS Managed monitors automatically evaluate all values within a dimension (services, linked accounts, tags, or cost categories). They adapt as you grow without manual updates. For most organizations, an AWS Services monitor provides comprehensive coverage.
Customer Managed monitors let you select specific values to monitor (up to 10 per monitor). Use these for:
- Monitoring specific project accounts together
- Different alert thresholds for different teams
- High-priority workloads requiring special attention
Best practice: Maintain an AWS Services monitor for aggregate visibility, then add customer managed monitors for specific use cases.
Alert Configuration and Response
Alerts can be delivered via email, SNS, Slack/Teams, or EventBridge (for automation). Each alert includes:
- Anomaly start date and duration
- Cost impact (dollar amount and percentage)
- Root cause analysis identifying the service, account, or resource
- Direct link to the console for investigation
You can set custom thresholds (dollar or percentage-based) to filter out noise. For larger accounts, you might only want alerts above $100 impact.
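If you route alerts to EventBridge for automation, that filtering becomes a one-liner in your handler. A sketch (the Anomaly shape is a simplified stand-in for the real API response, not the actual schema):

```typescript
// Filter anomaly alerts by dollar impact, mirroring the threshold advice above.
// Anomaly is a simplified stand-in for the real Cost Anomaly Detection payload.
interface Anomaly { service: string; dollarImpact: number; percentImpact: number; }

function worthAlerting(anomalies: Anomaly[], minDollars: number): Anomaly[] {
  return anomalies.filter(a => a.dollarImpact >= minDollars);
}

const detected: Anomaly[] = [
  { service: "AmazonEC2", dollarImpact: 340, percentImpact: 22 },
  { service: "AmazonS3",  dollarImpact: 12,  percentImpact: 8 },
];
console.log(worthAlerting(detected, 100).map(a => a.service)); // [ 'AmazonEC2' ]
```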
Identifying Savings with Cost Optimization Hub and Compute Optimizer
Once you're monitoring costs, the next question is: where can I save? Cost Optimization Hub and Compute Optimizer work together to surface optimization opportunities.
Both tools are completely free.
Cost Optimization Hub: Centralized Recommendations
Cost Optimization Hub aggregates over 15 types of recommendations across all your accounts and regions into a single dashboard. Categories include:
- Resource rightsizing - EC2, EBS, Lambda, Auto Scaling, RDS, Aurora
- Idle resource deletion - Unused EC2 instances, EBS volumes, RDS instances, ECS services
- Savings Plans - Compute, EC2 Instance, SageMaker
- Reserved Instances - EC2, RDS, Redshift, OpenSearch, ElastiCache, MemoryDB, DynamoDB
The service calculates estimated monthly savings accounting for your existing discounts and commitments. It also deduplicates recommendations, so if both "delete idle instance" and "rightsize instance" apply to the same resource, it prioritizes the higher-savings option.
Compute Optimizer: ML-Powered Rightsizing
Compute Optimizer analyzes utilization metrics from CloudWatch and provides rightsizing recommendations for:
- Amazon EC2 instances (C, D, G, Hpc, I, M, P, R, T, U, X, Z families)
- EC2 Auto Scaling groups
- Amazon EBS volumes
- AWS Lambda functions
- Amazon RDS (MySQL, PostgreSQL, including Aurora)
- Amazon ECS services on Fargate
Each resource gets a classification: under-provisioned, over-provisioned, optimized, or insufficient data. Recommendations include performance risk ratings so you can make informed decisions.
Understanding Optimization Strategies
Cost Optimization Hub groups recommendations into seven strategies:
| Strategy | Effort | Reversible | Example |
|---|---|---|---|
| Purchase Savings Plans | Very low | No | Buy Compute SP for EC2 |
| Purchase Reservations | Very low | No | Reserve RDS instance |
| Stop/Delete Resources | Low | Stop: Yes, Delete: No | Remove idle EBS volume |
| Scale In | Low | Yes | Reduce ASG desired capacity |
| Rightsize | Medium | Yes | Move to smaller instance |
| Upgrade | Medium | Yes | Migrate EBS io1 to io2 |
| Migrate to Graviton | Varies | Yes | Switch to ARM-based instances |
New Features: Cost Efficiency Score and Idle Detection
In 2025, AWS added the Cost Efficiency Score, a single metric representing potential savings over addressable spend. It gives you a quick health check across your environment.
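Based on that definition, the score behaves roughly like this. AWS hasn't published the exact formula, so treat this as an interpretation of the stated ratio, not the real calculation:

```typescript
// Interpretation of the Cost Efficiency Score as defined above:
// 100 minus the share of addressable spend you could still save.
// NOT the published AWS formula -- an illustration of the stated ratio.
function costEfficiencyScore(potentialSavings: number, addressableSpend: number): number {
  if (addressableSpend <= 0) return 100;
  return Math.max(0, 100 * (1 - potentialSavings / addressableSpend));
}

// $5,000 of identified savings against $20,000 of addressable spend:
console.log(costEfficiencyScore(5000, 20000)); // 75
```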
Idle recommendations now identify unused resources across EC2, EBS, ECS Fargate, Auto Scaling groups, RDS, and NAT Gateways. These are integrated into Cost Optimization Hub's consolidated view.
Migration Cost Assessment with Migration Evaluator
If you're planning a migration from on-premises or another cloud, Migration Evaluator creates data-driven business cases. It's a free service with white-glove support from AWS program managers and solution architects.
When to Use Migration Evaluator
Consider Migration Evaluator when you need:
- Executive-ready business case materials
- TCO comparison between current state and AWS
- Rightsizing recommendations before migration
- Professional assessment support
The service outputs a business case PowerPoint deck, detailed Excel spreadsheet with projections, and TCO comparison.
Data Collection Methods
Migration Evaluator supports multiple data collection approaches:
- Agentless collectors for VMware, Hyper-V, and Nutanix environments
- Quick Insights for rapid initial assessment
- Manual Excel templates when automated collection isn't feasible
Data feeds into AWS Migration Hub for dependency mapping and migration wave planning.
Business Case Components
The full business case includes:
- TCO comparison (current state vs AWS)
- Migration and modernization cost estimates
- Cash flow analysis with multiple scenarios
- Financial metrics: NPV, payback period, ROI, MIRR
- IT operations productivity improvements
Migration Evaluator models different commitment strategies (On-Demand, Savings Plans, Reserved Instances) so you can see the impact of each approach.
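Two of those financial metrics can be computed with textbook formulas, which is useful for sanity-checking the deck you get back. A sketch with hypothetical cash flows (Migration Evaluator's models are considerably richer):

```typescript
// Textbook NPV and payback period over a yearly cash flow series.
// cashFlows[0] is the upfront (usually negative) investment at t=0.
function npv(rate: number, cashFlows: number[]): number {
  return cashFlows.reduce((sum, cf, t) => sum + cf / Math.pow(1 + rate, t), 0);
}

function paybackPeriodYears(cashFlows: number[]): number {
  let cumulative = 0;
  for (let t = 0; t < cashFlows.length; t++) {
    cumulative += cashFlows[t];
    if (cumulative >= 0) return t;
  }
  return Infinity; // never pays back within the horizon
}

// Hypothetical: $500k migration cost, then $200k/year in savings.
const flows = [-500_000, 200_000, 200_000, 200_000, 200_000];
console.log(Math.round(npv(0.08, flows))); // positive NPV at an 8% discount rate
console.log(paybackPeriodYears(flows));    // 3
```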
Building a Cost Attribution Foundation with Tags
Every cost tool becomes more powerful with consistent tagging. Tags let you attribute costs to teams, projects, environments, and applications. Without them, you're flying blind.
AWS-Generated vs User-Defined Tags
AWS-generated tags are applied automatically by services. The aws:createdBy tag tracks which identity created a resource. These are prefixed with aws: in reports and can't be edited.
User-defined tags are your custom key-value pairs. They're prefixed with user: in reports and must be activated in the Billing console before appearing in cost reports (allow 24 hours after activation).
Account Tags (New in 2025)
Account Tags, announced in December 2025, are a game-changer for organizations using AWS Organizations. Instead of tagging individual resources, you tag accounts. Those tags automatically apply to all metered usage within the account.
This enables cost allocation for resources that can't be tagged directly (refunds, credits, certain service charges) and eliminates the need to configure account ID lists in Cost Explorer, Budgets, or Cost Categories.
Tagging Strategy Best Practices
Start with a defined taxonomy before implementation. Common tag categories:
- Business unit/team - Who owns this resource?
- Environment - dev, staging, production
- Project/application - What workload does this support?
- Cost center - For financial allocation
- Owner - Individual accountability
Enforce standards with Tag Policies in AWS Organizations. Remember to tag supporting resources too. An untagged EBS volume attached to a tagged EC2 instance creates attribution gaps.
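Tag Policies handle enforcement natively in AWS Organizations, but the same check is trivial to run in code, for example as a CI step before deployment. A sketch using the taxonomy above (the required-tag list and function are mine):

```typescript
// Validate a resource's tags against a required taxonomy.
// The tag keys below follow the categories listed above; adapt to your own.
const REQUIRED_TAGS = ["Team", "Environment", "Project", "CostCenter", "Owner"];

function missingTags(tags: Record<string, string>): string[] {
  return REQUIRED_TAGS.filter(key => !(key in tags) || tags[key].trim() === "");
}

const ec2Tags = { Team: "platform", Environment: "production", Project: "web" };
console.log(missingTags(ec2Tags)); // [ 'CostCenter', 'Owner' ]
```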
Programmatic Cost Management with APIs and IaC
Manual cost management doesn't scale. APIs and Infrastructure as Code let you embed cost controls in your deployment workflows, following a shift-left FinOps approach.
Available Cost Management APIs
AWS provides comprehensive API access:
- Cost Explorer API - Query cost/usage data, retrieve forecasts
- Budgets API - Create, update, delete budgets programmatically
- Data Exports API - Configure CUR exports
- Pricing Calculator API - Generate estimates programmatically
- Cost Optimization Hub API - Query recommendations
- Price List API - Service pricing information
- Free Tier API - Check free tier usage
All APIs are available through the standard AWS SDKs (Python/boto3, JavaScript, Java, .NET, Go) and CLI.
AWS CDK Integration Examples
You can define budgets alongside your infrastructure using AWS CDK. Here's a TypeScript example:
```typescript
import { aws_budgets as budgets } from 'aws-cdk-lib';

const monthlyBudget = new budgets.CfnBudget(this, 'MonthlyBudget', {
  budget: {
    budgetName: 'production-monthly-budget',
    budgetType: 'COST',
    timeUnit: 'MONTHLY',
    budgetLimit: {
      amount: 5000,
      unit: 'USD',
    },
  },
  notificationsWithSubscribers: [{
    notification: {
      notificationType: 'ACTUAL',
      comparisonOperator: 'GREATER_THAN',
      threshold: 80, // percent of the budget limit
    },
    subscribers: [{
      subscriptionType: 'EMAIL',
      address: 'finops@example.com',
    }],
  }],
});
```
For cost allocation, apply tags at the stack level:
```typescript
import * as cdk from 'aws-cdk-lib';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'ProductionStack');

cdk.Tags.of(stack).add('Environment', 'production');
cdk.Tags.of(stack).add('CostCenter', 'engineering');
cdk.Tags.of(stack).add('Project', 'web-platform');
```
For more patterns, see my guide on how to estimate AWS CDK costs before deployment.
Automating Cost Controls in CI/CD
The real power comes from integrating cost estimation into your deployment pipeline:
- Pre-deployment - Synthesize CDK to CloudFormation, import to Pricing Calculator for estimates
- Gate deployments - Fail PRs that exceed cost thresholds
- Auto-tag resources - Include deployment metadata (commit SHA, branch, deployer)
- Create monitors - Automatically provision budgets and anomaly monitors for new workloads
This shifts cost awareness left, making it part of code review rather than a monthly surprise.
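The gating step above reduces to a small pure function your pipeline can call after producing an estimate. How you obtain that estimate (Pricing Calculator API, a tool like CloudBurn) is up to your setup; this sketch only shows the decision logic:

```typescript
// Minimal cost gate: fail the check when a PR's estimated monthly
// cost delta exceeds a threshold. The estimate source is up to your pipeline.
function costGate(estimatedMonthlyDelta: number, thresholdUsd: number):
    { pass: boolean; message: string } {
  const pass = estimatedMonthlyDelta <= thresholdUsd;
  return {
    pass,
    message: pass
      ? `OK: +$${estimatedMonthlyDelta.toFixed(2)}/month is within the $${thresholdUsd} threshold`
      : `BLOCKED: +$${estimatedMonthlyDelta.toFixed(2)}/month exceeds the $${thresholdUsd} threshold`,
  };
}

console.log(costGate(340, 500).message);  // passes
console.log(costGate(1200, 500).message); // blocks, exit non-zero in CI
```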
Choosing the Right AWS Cost Estimation Tools
With eight tools available, which should you actually use? It depends on your organization's size, technical sophistication, and specific needs. This section covers AWS native tools. For a comprehensive comparison that includes third-party options like CloudZero, nOps, Infracost, and others, see my AWS cost estimation tools comparison.
Tool Comparison Matrix
| Tool | Primary Purpose | Cost | Best For |
|---|---|---|---|
| Pricing Calculator | Pre-deployment estimation | Free (workloads), $2/bill estimate after 5 | Planning new workloads |
| Cost Explorer | Historical analysis & forecasting | Console free, API $0.01/request | Understanding spending patterns |
| Budgets | Proactive alerts & automation | Monitoring free, actions $0.10/day | Preventing cost overruns |
| CUR | Detailed data export | Free (plus S3/Athena costs) | Custom analytics |
| Anomaly Detection | ML-based spike detection | Free | Catching unexpected increases |
| Cost Optimization Hub | Centralized recommendations | Free | Finding savings opportunities |
| Compute Optimizer | Rightsizing recommendations | Free | Optimizing compute resources |
| Migration Evaluator | Migration business cases | Free | Pre-migration planning |
Decision Framework by Use Case
Pre-deployment estimation: Start with Pricing Calculator for new workloads, Migration Evaluator for migrations.
Ongoing monitoring: Enable Cost Explorer (it can't be disabled once enabled), set up Budgets with email alerts, enable Anomaly Detection.
Finding optimization opportunities: Enable Cost Optimization Hub (requires AWS Organizations), review Compute Optimizer recommendations.
Custom analytics: Configure CUR with Parquet format, query with Athena or build QuickSight dashboards.
Governance: Combine Budgets with automated actions, enforce tagging with Tag Policies, use Cost Categories for hierarchical allocation.
Building Your Cost Management Stack
Starter stack (any organization):
- Cost Explorer for visibility
- Budgets with email alerts at 50%, 80%, 100%
- Anomaly Detection with AWS Services monitor
- Consistent tagging strategy
Comprehensive stack (mid-size to enterprise):
- Add CUR with Athena integration
- Enable Cost Optimization Hub
- Review Compute Optimizer recommendations monthly
- Implement programmatic budgets in IaC
Enterprise stack:
- All of the above
- QuickSight dashboards for stakeholder reporting
- EventBridge automation for anomaly response
- Consider third-party tools for multi-cloud or advanced RI/SP management
Conclusion
AWS provides a powerful, mostly free toolkit for cost estimation and management. The key is understanding that these tools work as an integrated ecosystem, not isolated utilities.
Start with the basics: enable Cost Explorer, set up Budgets with email alerts, and enable Anomaly Detection. That gives you immediate visibility and proactive controls at zero cost.
As you mature, add Cost Optimization Hub to find savings, CUR for custom analytics, and IaC integration to shift cost awareness into your development workflow.
The goal isn't just to track costs, it's to make cost-aware decisions at every stage of the infrastructure lifecycle. When you can estimate costs before deployment, catch anomalies in real-time, and surface optimization opportunities automatically, you've built a foundation for sustainable cloud financial management.
What's your current cost management setup? Drop a comment below if you have questions about any of these tools or want to share what's worked for your team.
See Infrastructure Costs in Code Review, Not on Your AWS Bill
CloudBurn automatically analyzes your Terraform and AWS CDK changes, showing cost estimates directly in pull requests. Catch expensive decisions during code review when they take seconds to fix, not weeks later in production.