DynamoDB Pricing Fundamentals
DynamoDB pricing has two modes: on-demand and provisioned. On-demand charges per request unit: $1.25 per million write request units (WRU) and $0.25 per million read request units (RRU). Provisioned charges per hour for reserved capacity: $0.00065/WCU/hr and $0.00013/RCU/hr. (Rates quoted here are for us-east-1 and change over time; check current pricing.)
At consistent traffic levels, provisioned capacity costs roughly one-fifth to one-seventh of what on-demand costs for the same throughput. The tradeoff is managing capacity and handling throttling. Auto-scaling bridges this gap for most production workloads.
On-Demand vs Provisioned Cost Comparison
Scenario: 1,000 writes/sec + 10,000 reads/sec sustained (730 hours/month)
ON-DEMAND:
Writes: 1,000/s × 60s × 60min × 730hr = 2,628M writes × $1.25/1M = $3,285/month
Reads: 10,000/s × 60s × 60min × 730hr = 26,280M reads × $0.25/1M = $6,570/month
Total: $9,855/month
PROVISIONED (manual):
Write: 1,000 WCU × 730hr × $0.00065/hr = $474.50/month
Read: 10,000 RCU × 730hr × $0.00013/hr = $949.00/month
Total: $1,423.50/month
SAVING: $8,431.50/month (86%)
PROVISIONED with Reserved Capacity (1-yr term; effective hourly rate with the upfront fee amortized):
Write: 1,000 WCU × $0.000507/hr = $370.11/month
Read: 10,000 RCU × $0.000101/hr = $737.30/month
Total: $1,107.41/month
SAVING vs on-demand: $8,747.59/month (89%)
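The arithmetic above can be reproduced with a short script. This is a sketch with the prices hard-coded from the tables in this section, not a pricing API:

```python
HOURS_PER_MONTH = 730

def on_demand_monthly(requests_per_sec: float, price_per_million: float) -> float:
    """Monthly on-demand cost for a sustained request rate."""
    monthly_requests = requests_per_sec * 3600 * HOURS_PER_MONTH
    return monthly_requests / 1_000_000 * price_per_million

def provisioned_monthly(capacity_units: int, price_per_unit_hour: float) -> float:
    """Monthly provisioned cost for a fixed capacity reservation."""
    return capacity_units * HOURS_PER_MONTH * price_per_unit_hour

# Scenario: 1,000 writes/sec + 10,000 reads/sec sustained
od = on_demand_monthly(1_000, 1.25) + on_demand_monthly(10_000, 0.25)
prov = provisioned_monthly(1_000, 0.00065) + provisioned_monthly(10_000, 0.00013)
print(f"on-demand: ${od:,.2f}  provisioned: ${prov:,.2f}")
# on-demand: $9,855.00  provisioned: $1,423.50
```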
RULE: Use On-Demand for:
- Unpredictable traffic spikes
- New tables (while learning traffic patterns)
- Dev/test tables with low, sporadic traffic
- Tables that are busy only a few hours a day (below roughly 15% average utilization, on-demand comes out cheaper)
Use Provisioned for:
- Consistent, predictable traffic
- High-throughput production tables
- After you have 2+ weeks of CloudWatch metrics
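The rule of thumb behind these lists: at the list prices above, on-demand breaks even with provisioned at roughly 14-15% average utilization. A quick sketch of that break-even calculation, assuming the rates from the pricing section:

```python
def breakeven_utilization(od_price_per_million: float, prov_price_per_hour: float) -> float:
    """Average utilization at which on-demand and provisioned cost the same.

    One capacity unit fully used for an hour serves 3,600 requests; below
    the returned utilization, on-demand is cheaper.
    """
    od_cost_per_unit_hour = 3600 * od_price_per_million / 1_000_000
    return prov_price_per_hour / od_cost_per_unit_hour

print(f"writes: {breakeven_utilization(1.25, 0.00065):.1%}")  # writes: 14.4%
print(f"reads:  {breakeven_utilization(0.25, 0.00013):.1%}")  # reads:  14.4%
```

Below ~14% utilization (about 3.5 busy hours per day), on-demand wins; above it, provisioned wins.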
DynamoDB Auto-Scaling with Terraform
resource "aws_dynamodb_table" "orders" {
  name           = "orders"
  billing_mode   = "PROVISIONED"
  read_capacity  = 100 # Minimum; auto-scaling handles peaks
  write_capacity = 50
  hash_key       = "order_id"
  range_key      = "created_at"

  attribute {
    name = "order_id"
    type = "S"
  }

  attribute {
    name = "created_at"
    type = "S"
  }

  # Let auto-scaling manage capacity without Terraform reverting it
  lifecycle {
    ignore_changes = [read_capacity, write_capacity]
  }
}
# Read capacity auto-scaling
resource "aws_appautoscaling_target" "reads" {
  max_capacity       = 10000
  min_capacity       = 100
  resource_id        = "table/${aws_dynamodb_table.orders.name}"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  service_namespace  = "dynamodb"
}

resource "aws_appautoscaling_policy" "reads" {
  name               = "reads-auto-scaling"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.reads.resource_id
  scalable_dimension = aws_appautoscaling_target.reads.scalable_dimension
  service_namespace  = aws_appautoscaling_target.reads.service_namespace

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "DynamoDBReadCapacityUtilization"
    }
    target_value       = 70 # Scale at 70% utilization
    scale_in_cooldown  = 300
    scale_out_cooldown = 60
  }
}

# Same pattern for writes
resource "aws_appautoscaling_target" "writes" {
  max_capacity       = 5000
  min_capacity       = 50
  resource_id        = "table/${aws_dynamodb_table.orders.name}"
  scalable_dimension = "dynamodb:table:WriteCapacityUnits"
  service_namespace  = "dynamodb"
}

resource "aws_appautoscaling_policy" "writes" {
  name               = "writes-auto-scaling"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.writes.resource_id
  scalable_dimension = aws_appautoscaling_target.writes.scalable_dimension
  service_namespace  = aws_appautoscaling_target.writes.service_namespace

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "DynamoDBWriteCapacityUtilization"
    }
    target_value       = 70
    scale_in_cooldown  = 300
    scale_out_cooldown = 60
  }
}
DAX: When Caching Saves Money
DAX (DynamoDB Accelerator) cost analysis:
DAX cluster: dax.r6g.large (3-node)
Cost: 3 nodes × $0.171/hr × 730hr = $374.49/month
Without DAX — direct DynamoDB reads:
10,000 reads/sec × 730hr × 3600s/hr × $0.25/1M RRU = $6,570/month
With DAX (cache hit rate 90%):
1,000 DynamoDB reads/sec (cache misses)
= 2,628M reads × $0.25/1M = $657/month
+ DAX cluster: $374.49/month
Total: $1,031.49/month
Saving: $5,538.51/month (84%)
DAX is worth it when:
✅ Cache hit rate > 50% (mostly reads, same items repeatedly)
✅ DynamoDB read cost exceeds the DAX cluster cost (~$375/month for this 3-node cluster)
✅ Low latency is critical (DAX: microseconds vs DynamoDB: milliseconds)
DAX is NOT worth it:
❌ Write-heavy workloads (DAX doesn't cache writes)
❌ Each request reads unique items (0% cache hit rate)
❌ DynamoDB reads cost less than the DAX cluster (DAX would cost more than it saves)
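These break-even points follow from the arithmetic above. A sketch that computes total monthly read cost as a function of cache hit rate, with the node price and RRU rate assumed from this section:

```python
HOURS_PER_MONTH = 730

def monthly_read_cost(reads_per_sec: float, hit_rate: float,
                      dax_nodes: int = 3, node_price_hr: float = 0.171,
                      rru_per_million: float = 0.25) -> float:
    """Monthly cost of serving reads through DAX: cluster price + cache misses.

    Only misses reach DynamoDB and consume RRUs; hits are served by DAX.
    """
    dax_cost = dax_nodes * node_price_hr * HOURS_PER_MONTH
    misses = reads_per_sec * (1 - hit_rate) * 3600 * HOURS_PER_MONTH
    return dax_cost + misses / 1_000_000 * rru_per_million

no_dax = monthly_read_cost(10_000, hit_rate=0.0, dax_nodes=0)  # all reads hit DynamoDB
with_dax = monthly_read_cost(10_000, hit_rate=0.9)
print(f"no DAX: ${no_dax:,.2f}  with DAX @90%: ${with_dax:,.2f}")
# no DAX: $6,570.00  with DAX @90%: $1,031.49
```

Sweeping `hit_rate` over your own traffic numbers shows exactly where the cluster pays for itself.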
TTL: Automatic Data Expiry at No Cost
import boto3
from datetime import datetime, timedelta, timezone

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# Enable TTL on a table
dynamodb.update_time_to_live(
    TableName='sessions',
    TimeToLiveSpecification={
        'Enabled': True,
        'AttributeName': 'expires_at'  # Must be a Unix timestamp (epoch seconds)
    }
)

# When writing items, set the TTL attribute
def create_session(user_id: str, session_token: str, ttl_hours: int = 24):
    """Create a session that auto-expires after ttl_hours."""
    expires_at = int((datetime.now(timezone.utc) + timedelta(hours=ttl_hours)).timestamp())
    dynamodb.put_item(
        TableName='sessions',
        Item={
            'session_id': {'S': session_token},
            'user_id': {'S': user_id},
            'created_at': {'S': datetime.now(timezone.utc).isoformat()},
            'expires_at': {'N': str(expires_at)}  # TTL attribute
        }
    )
# TTL deletes items for FREE (no WCU consumed)
# DynamoDB typically deletes expired items within 48 hours of expiry
# Cost saving: if you write 100K sessions/day and previously ran a cleanup job:
# Old: 100K reads (RCU) + 100K deletes (WCU) per day, plus the job's compute
# TTL: FREE (system-managed deletion)
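Because expired items can linger for up to ~48 hours before TTL removes them, reads should filter them out. A sketch of a query that hides not-yet-deleted sessions, reusing the table and attribute names from the example above (the helper is pure so it stays self-contained; the boto3 call is shown in the usage comment):

```python
import time

def unexpired_session_query(session_id: str, now=None) -> dict:
    """Build boto3 query kwargs that exclude items TTL hasn't deleted yet.

    TTL deletion is asynchronous, so a plain read can still return an
    expired item; the FilterExpression drops it at query time.
    """
    now = int(time.time()) if now is None else now
    return {
        'TableName': 'sessions',
        'KeyConditionExpression': 'session_id = :sid',
        'FilterExpression': 'expires_at > :now',
        'ExpressionAttributeValues': {
            ':sid': {'S': session_id},
            ':now': {'N': str(now)},
        },
    }

# Usage: items = dynamodb.query(**unexpired_session_query(token))['Items']
```

Note the filter runs after the read, so filtered-out items still consume RCUs; the point is correctness, not savings.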
GSI Cost Multiplier: The Hidden Expense
Global Secondary Index (GSI) Cost Impact:
Each GSI:
- Duplicates the data (full copy of projected attributes)
- Has its own separate read/write capacity
- Every write to the base table triggers writes to ALL GSIs
Example: Table with 3 GSIs, 1,000 writes/sec
Base table WCU: 1,000
GSI-1 WCU: 1,000
GSI-2 WCU: 1,000
GSI-3 WCU: 1,000
Total WCU needed: 4,000 (4x base cost)
Monthly at provisioned rate:
4,000 WCU × 730hr × $0.00065 = $1,898/month
vs 1,000 WCU = $474.50/month
GSI overhead: $1,423.50/month (3x)
Optimization strategies:
1. Keep projections minimal:
   ALL copies every attribute into the index (most expensive)
   KEYS_ONLY projects only the partition/sort keys (cheapest)
   INCLUDE projects a named subset of attributes (balanced)
2. Use sparse GSIs: items missing the GSI key attribute are not indexed,
   so they consume no GSI write capacity or storage
3. Delete unused GSIs: check CloudWatch ConsumedReadCapacityUnits;
   if a GSI has had 0 reads for 30+ days, delete it
4. Single-table design: overload the sort key to serve many access
   patterns without extra GSIs
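The multiplier above is easy to quantify. A sketch that prices base-table plus GSI write capacity, using the provisioned rate from the pricing section and assuming every GSI receives every write (i.e. no sparse indexes):

```python
HOURS_PER_MONTH = 730
WCU_PRICE_HR = 0.00065  # provisioned write capacity unit, us-east-1

def monthly_write_cost(base_wcu: int, gsi_count: int) -> float:
    """Provisioned write cost for a table plus its (non-sparse) GSIs."""
    total_wcu = base_wcu * (1 + gsi_count)  # each write replicates to every GSI
    return total_wcu * HOURS_PER_MONTH * WCU_PRICE_HR

print(f"no GSIs: ${monthly_write_cost(1_000, 0):,.2f}")  # no GSIs: $474.50
print(f"3 GSIs:  ${monthly_write_cost(1_000, 3):,.2f}")  # 3 GSIs:  $1,898.00
```

Running this before adding a GSI makes the marginal cost of each new access pattern explicit.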
Standard vs Standard-IA Table Class
# DynamoDB Standard-IA (Infrequent Access)
# - Storage cost: $0.10/GB-month vs $0.25/GB-month (60% cheaper storage)
# - Read/write cost: roughly 25% HIGHER than Standard
# - Best for: tables where storage dominates cost but reads must stay fast
resource "aws_dynamodb_table" "audit_logs" {
  name         = "audit-logs"
  billing_mode = "PAY_PER_REQUEST"
  table_class  = "STANDARD_INFREQUENT_ACCESS" # 60% storage savings
  hash_key     = "event_id"
  range_key    = "timestamp"

  attribute {
    name = "event_id"
    type = "S"
  }

  attribute {
    name = "timestamp"
    type = "S"
  }

  # TTL for auto-cleanup of old audit logs
  ttl {
    attribute_name = "expires_at"
    enabled        = true
  }
}
# Use Standard-IA when:
# - Storage is the dominant table cost (the storage savings must outweigh the ~25% throughput surcharge)
# - Large data volume (savings scale with GB stored)
# - Historical data, audit logs, archived records
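The storage-dominance rule can be sketched numerically. This comparison assumes the $0.25 vs $0.10 per GB-month storage rates above and an approximate 25% throughput surcharge for Standard-IA; verify both against current pricing before deciding:

```python
def table_class_costs(storage_gb: float, monthly_throughput_cost: float,
                      ia_surcharge: float = 0.25) -> dict:
    """Estimated monthly cost under each table class.

    ia_surcharge is an assumed ~25% markup on Standard-IA reads/writes.
    """
    standard = storage_gb * 0.25 + monthly_throughput_cost
    standard_ia = storage_gb * 0.10 + monthly_throughput_cost * (1 + ia_surcharge)
    return {'STANDARD': standard, 'STANDARD_INFREQUENT_ACCESS': standard_ia}

# 500 GB of audit logs, $50/month of request traffic
costs = table_class_costs(storage_gb=500, monthly_throughput_cost=50)
best = min(costs, key=costs.get)
print(best, costs)  # Standard-IA is cheaper here
```

The crossover moves with the storage-to-throughput ratio, which is why big, quiet tables are the sweet spot.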
Conclusion
DynamoDB cost optimization delivers compound savings. Switch from on-demand to provisioned with auto-scaling (86% cost reduction at consistent throughput). Add DAX for read-heavy tables (84% read cost reduction at a 90% cache hit rate). Enable TTL for ephemeral data to eliminate cleanup-job costs. Right-size GSIs with minimal projections and delete unused indexes. Move infrequently accessed tables to the Standard-IA class (60% storage savings).
The typical result of applying all these optimizations is a 60-80% reduction in DynamoDB spend with no loss in performance or functionality.
Alex Thompson
CEO & Cloud Architecture Expert at ZeonEdge with 15+ years building enterprise infrastructure.