AWS Lambda's New Tiered Logging Pricing: Unlocking New Savings
AWS introduces tiered pricing for Lambda logs and new destinations, including S3 and Firehose. Learn how these changes can reduce costs.

Nishant Thorat
Founder
Just when you thought you'd optimized your Lambda costs to perfection, AWS has changed the game again. But unlike the cold start billing changes arriving in August 2025, which will increase costs, this one is actually good news for your wallet.
On May 1st, 2025, AWS announced significant changes to Lambda logging that introduce tiered pricing and new destination options. Here's everything you need to know about this shift that could dramatically reduce your logging costs while improving your observability strategy.
The Two Big Changes You Need to Know About
AWS has introduced two major updates to Lambda logging:
- Tiered, volume-based pricing for Lambda logs sent to CloudWatch Logs, which are now classified as "Vended Logs"
- Two new logging destinations: Amazon S3 and Amazon Data Firehose
Let's break down what this means for your serverless architecture and your AWS bill.
What are Vended Logs?
Imagine your current Lambda logging setup as a highway with a single flat toll - no matter how much you travel, you pay the same rate per mile. With the new "Vended Logs" classification, AWS has introduced a progressive toll system where the rate per mile decreases the further you travel.
Starting May 1st, Lambda logs sent to CloudWatch Logs are now classified as "Vended Logs" — logs from specific AWS services that are available at volume-tiered pricing. This replaces the previous flat rate model.
What makes Vended Logs different?
- They come from AWS services (like Lambda) themselves
- They have volume-based tiered pricing that gets cheaper as you log more
- Pricing tiers are calculated across all vended logs combined, not per service
When reviewing your AWS bills after this change, look for the new "VendedLog-Bytes" line items instead of "DataProcessing-Bytes" for Lambda CloudWatch Logs.
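If you'd rather verify this programmatically than scan invoices by hand, a short Cost Explorer query can surface the relevant line items. Below is a minimal boto3 sketch; the date range is a placeholder and the substring match on usage-type names is an assumption to adapt to your own bill.

```python
import boto3

# Minimal sketch: list a month's usage types and costs, highlighting the new
# "VendedLog-Bytes" line items next to the old "DataProcessing-Bytes" ones.
ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-05-01", "End": "2025-06-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        usage_type = group["Keys"][0]
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        # Old Lambda log ingestion billed as DataProcessing-Bytes;
        # after the change, look for VendedLog-Bytes instead.
        if "VendedLog-Bytes" in usage_type or "DataProcessing-Bytes" in usage_type:
            print(f"{usage_type}: ${cost:,.2f}")
```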
Show Me The Money: Cost Comparison
Let's look at the cold, hard numbers. Here's the new tiered pricing structure for CloudWatch Logs in US East (N. Virginia):
| Data Ingested | CloudWatch Logs Standard | CloudWatch Logs Infrequent Access |
|---|---|---|
| First 10 TB per month | $0.50 per GB | $0.25 per GB |
| Next 20 TB per month | $0.25 per GB | $0.15 per GB |
| Next 20 TB per month | $0.10 per GB | $0.075 per GB |
| Over 50 TB per month | $0.05 per GB | $0.05 per GB |
And for the new S3 and Firehose destinations:
| Data Ingested | Delivery Cost to Amazon S3 | Delivery Cost to Amazon Data Firehose |
|---|---|---|
| First 10 TB per month | $0.25 per GB | $0.25 per GB |
| Next 20 TB per month | $0.15 per GB | $0.15 per GB |
| Next 20 TB per month | $0.075 per GB | $0.075 per GB |
| Over 50 TB per month | $0.05 per GB | $0.05 per GB |
But what does this actually mean for your monthly bill? Let's compare old vs. new pricing at different volume levels:
| Log Volume | Old Pricing | New Pricing | Savings | Savings % |
|---|---|---|---|---|
| 1 TB | $512 | $512 | $0 | 0% |
| 10 TB | $5,120 | $5,120 | $0 | 0% |
| 20 TB | $10,240 | $7,680 | $2,560 | 25% |
| 30 TB | $15,360 | $10,240 | $5,120 | 33.3% |
| 40 TB | $20,480 | $11,264 | $9,216 | 45% |
| 50 TB | $25,600 | $12,288 | $13,312 | 52% |
| 60 TB | $30,720 | $12,800 | $17,920 | 58.3% |
The pattern is clear: no savings until you exceed 10TB of logs per month, but then the savings accelerate dramatically. By the time you hit 60TB, you're saving nearly 60% of your logging costs compared to the old model.
This is great news for companies with large Lambda deployments. If you're already spending thousands on Lambda logging, you could see substantial savings without having to change a single line of code. It's like getting a retroactive volume discount.
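If you want to sanity-check these figures against your own log volumes, the tier math is simple to reproduce. The sketch below hard-codes the US East (N. Virginia) rates from the tables above and assumes 1 TB = 1,024 GB; swap in your region's rates as needed.

```python
# Sketch of the tiered pricing math, using the US East (N. Virginia) rates above.
# Each tier is (size_in_GB, price_per_GB); the final tier is open-ended.
TB = 1024

CLOUDWATCH_STANDARD_TIERS = [(10 * TB, 0.50), (20 * TB, 0.25), (20 * TB, 0.10), (float("inf"), 0.05)]
S3_OR_FIREHOSE_TIERS = [(10 * TB, 0.25), (20 * TB, 0.15), (20 * TB, 0.075), (float("inf"), 0.05)]
OLD_FLAT_RATE = 0.50  # previous flat CloudWatch Logs ingestion price per GB


def tiered_cost(gb: float, tiers) -> float:
    """Walk down the tiers, charging each slice of volume at its own rate."""
    cost, remaining = 0.0, gb
    for tier_size, rate in tiers:
        slice_gb = min(remaining, tier_size)
        cost += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return cost


for tb in (1, 10, 20, 30, 40, 50, 60):
    gb = tb * TB
    old = gb * OLD_FLAT_RATE
    new = tiered_cost(gb, CLOUDWATCH_STANDARD_TIERS)
    print(f"{tb:>3} TB  old ${old:>9,.0f}  new ${new:>9,.0f}  savings {1 - new / old:.1%}")
```

Running it reproduces the savings column above, so you can plug in your actual monthly volume (or the S3/Firehose tier schedule) to estimate your own bill.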
New Destinations: Beyond CloudWatch
While the pricing changes are exciting, the addition of S3 and Firehose as logging destinations might be even more significant for your architecture.
Think of these new options like having multiple transportation choices instead of just one.
When using S3 or Firehose, you start with lower costs ($0.25/GB vs $0.50/GB for CloudWatch) and still benefit from the same tiered pricing structure as volume increases.
Use Cases Unlocked
These changes enable several powerful scenarios that were previously cost-prohibitive:
1. Long-term Compliance Archiving
If you've ever tried to retain CloudWatch Logs for compliance reasons, you know how expensive it gets. S3 with lifecycle policies is far more cost-effective for regulatory retention requirements.
You can transition older logs to Glacier or Glacier Deep Archive, bringing storage costs down to pennies per GB. For regulated industries that need to retain logs for years, this is a game-changer.
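As a rough sketch of what that looks like in practice, the call below attaches a lifecycle rule to a hypothetical log bucket, moving objects to Glacier after 90 days and Glacier Deep Archive after a year. The bucket name, prefix, and timings are placeholders to align with your own retention requirements.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/prefix receiving Lambda log deliveries.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-lambda-log-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-lambda-logs",
                "Filter": {"Prefix": "lambda-logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},        # cold tier after 90 days
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # deep archive after a year
                ],
                "Expiration": {"Days": 2555},  # roughly 7 years, a common regulatory horizon
            }
        ]
    },
)
```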
2. Advanced Analytics
Sending logs directly to S3 makes it easier to run Athena queries or use AWS Glue for log analysis at scale. Your logs become immediately available for SQL queries through Athena or complex ETL jobs through Glue, without having to first export them from CloudWatch.
This opens up possibilities for deeper insights than what CloudWatch Logs Insights can provide, especially for complex pattern recognition across massive log volumes.
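For example, assuming your Lambda logs land in S3 as newline-delimited JSON (the delivery format, field names, and bucket paths here are assumptions), you could expose them to Athena with an external table like this:

```python
import boto3

athena = boto3.client("athena")

# Hypothetical external table over JSON log objects in S3; adjust the schema
# and LOCATION to match the actual delivery format and bucket layout.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS lambda_logs (
  log_timestamp string,
  level string,
  message string,
  requestId string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://my-lambda-log-archive/lambda-logs/'
"""

athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```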
3. Third-party Observability Integration
Many teams use third-party observability platforms like Datadog, Splunk, New Relic, or Sumo Logic. Previously, you had to set up complex forwarding solutions:
CloudWatch Logs → Lambda → Third-party platform
Now, Firehose can stream logs directly to these platforms without intermediary Lambda functions. This eliminates both the complexity and the cost of running those forwarding functions.
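As an illustration, the sketch below creates a Firehose delivery stream pointed at a third-party HTTP endpoint. The endpoint URL, API key, backup bucket, and IAM role are all placeholders; check your vendor's documentation for its actual Firehose intake endpoint.

```python
import boto3

firehose = boto3.client("firehose")

# Hypothetical delivery stream forwarding Lambda logs to a third-party
# observability platform via Firehose's HTTP endpoint destination.
firehose.create_delivery_stream(
    DeliveryStreamName="lambda-logs-to-observability",
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Url": "https://example-intake.vendor.com/v1/input",  # placeholder vendor endpoint
            "Name": "observability-platform",
            "AccessKey": "REPLACE_WITH_API_KEY",
        },
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
        "S3BackupMode": "FailedDataOnly",  # keep undeliverable records in S3
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",
        },
    },
)
```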
4. OpenSearch Integration
The Firehose integration with OpenSearch is particularly valuable. It offers:
- Better search capabilities than standard CloudWatch
- Advanced visualization and analysis tools
- Integration with your existing monitoring stack
5. Custom Processing Pipelines
For organizations with specific log enrichment or filtering requirements, direct Firehose delivery provides a flexible pipeline: log records can be buffered, transformed, and filtered in flight before they reach their final destination.
Implementation Considerations
If you're planning to take advantage of these changes, here are some key points to consider:
Evaluate Your Current Setup
If you have an existing CloudWatch to S3 forwarder, the direct integration may be superior:
- Reduced complexity: No separate forwarder Lambda to maintain
- Cost efficiency: Lower per-GB costs and no Lambda execution charges
- Improved reliability: Removal of a potential failure point
- Consistent delivery: Matching the established pattern used by other AWS compute services
Adjust Budget Forecasts
These changes could have a material impact for heavy Lambda users. Review your CloudWatch Logs costs for Lambda to determine your GB/month usage, then apply the new pricing tiers to estimate future costs.
If you're processing more than 10TB of Lambda logs monthly, this is definitely worth your attention.
How This Fits into Your Overall Lambda Optimization Strategy
This logging update complements other Lambda cost optimization strategies we've covered in our Ultimate Guide to AWS Lambda Cost Optimization.
While you're implementing these logging changes, don't forget about:
- Right-sizing memory allocations: Use AWS Lambda Power Tuning to find your function's sweet spot between performance and cost
- Optimizing initialization costs: Especially important with AWS's upcoming August 2025 billing changes that will start charging for cold starts
- Implementing event filtering: Reduce unnecessary invocations by filtering at the event source
- Connection reuse: Maintain database connections and HTTP clients across invocations, as sketched below
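Here is a minimal sketch of connection reuse: clients created at module scope are initialized once per execution environment and reused across warm invocations (the table name and payload shape are illustrative).

```python
import boto3

# Created once per execution environment, then reused across warm invocations.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-table")  # illustrative table name


def handler(event, context):
    # Reuses the existing connection instead of paying the setup cost each time.
    table.put_item(Item={"pk": event["id"], "payload": event.get("data", {})})
    return {"statusCode": 200}
```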
Best Practices for the New Logging Paradigm
To maximize the benefits of these changes:
- Segment your logging strategy by purpose:
  - Send critical production logs to CloudWatch Logs for real-time analytics and alerting
  - Use S3 for logs requiring long-term retention
  - Use Firehose for integration with existing monitoring tools
- Monitor log volumes regularly to understand the impact of pricing tiers
- Implement appropriate retention policies to prevent unnecessary storage costs (see the sketch after this list)
- Consider different logging strategies across development, staging, and production environments
- Review logging levels to ensure you're not paying for excessive debug information
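Setting retention on a function's log group is a one-call change. The sketch below assumes a hypothetical function name and a 30-day window; align the value with your compliance needs.

```python
import boto3

logs = boto3.client("logs")

# Hypothetical function name and retention window; CloudWatch Logs retains
# log groups indefinitely unless a policy like this is set.
logs.put_retention_policy(
    logGroupName="/aws/lambda/my-function",
    retentionInDays=30,
)
```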
Ready to Implement? Here's How
Setting up new logging destinations in AWS Lambda is straightforward:
1. Navigate to the Lambda console and select your function
2. In the Configuration tab, select "Monitoring and operations tools"
3. Select "Edit" in the Logging configuration section
4. Choose your preferred destination (S3 or Firehose)
5. Configure the delivery destination and the appropriate IAM permissions
You can also use AWS CLI, CloudFormation, or CDK to implement these changes at scale across your Lambda functions.
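Vended log delivery for AWS services is generally wired up through the CloudWatch Logs delivery APIs: a delivery source, a delivery destination, and a delivery linking the two. The boto3 sketch below follows that pattern; the logType value and all ARNs are assumptions, so confirm the exact values for Lambda in the AWS documentation before relying on this.

```python
import boto3

logs = boto3.client("logs")

# Sketch of the CloudWatch Logs vended delivery pattern (source -> destination -> delivery).
# The logType value and the ARNs below are placeholders; verify them against the
# Lambda documentation for your account and region.
logs.put_delivery_source(
    name="my-function-logs",
    resourceArn="arn:aws:lambda:us-east-1:123456789012:function:my-function",
    logType="APPLICATION_LOGS",  # assumed value for Lambda function logs
)

destination = logs.put_delivery_destination(
    name="lambda-logs-to-s3",
    deliveryDestinationConfiguration={
        "destinationResourceArn": "arn:aws:s3:::my-lambda-log-archive",
    },
)

logs.create_delivery(
    deliverySourceName="my-function-logs",
    deliveryDestinationArn=destination["deliveryDestination"]["arn"],
)
```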
Conclusion
AWS Lambda's new logging paradigm represents a significant evolution in how we manage observability for serverless applications. The combination of tiered pricing and flexible destinations means you can maintain comprehensive logging practices without breaking the bank.
For organizations with large Lambda deployments, these changes could translate to significant cost savings with minimal implementation effort. The new model rewards scale while providing greater flexibility in how logs are stored, processed, and analyzed.
Coming just months before the Lambda cold start billing changes that will increase costs for many users, this logging update provides a welcome opportunity to optimize your serverless spending.
If you haven't already, now is the time to review your Lambda logging strategy and consider how these new options might improve both your observability posture and your bottom line.
Want more AWS cost optimization insights? Check out our Ultimate Guide to AWS Lambda Cost Optimization for a comprehensive approach to keeping your serverless costs in check.
Need help implementing these changes or optimizing your overall AWS spend? Contact the CloudYali team for a personalized consultation.