Redis started as a simple key-value cache in 2009. In 2026, it powers critical infrastructure at every scale, from solo developer side projects to systems handling millions of operations per second at Twitter, GitHub, Stack Overflow, and Instagram. Redis is consistently one of the most loved technologies in developer surveys, and for good reason: it's fast (sub-millisecond latency), versatile (10+ data structures), simple (every command is clearly documented), and reliable (battle-tested in the most demanding production environments).
Yet most developers use Redis as a simple key-value store (SET, GET, DEL) and miss the vast majority of its capabilities. Redis Sorted Sets can implement leaderboards. Redis Streams can replace Kafka for many use cases. Redis Pub/Sub enables real-time messaging. Redis Lua scripting provides atomic multi-step operations. This guide covers all of it.
Chapter 1: Redis Data Structures - The Complete Tour
Strings
Strings are the simplest Redis data type. A string value can be at most 512 MB. Despite the name, Redis strings can store any binary data: text, JSON, serialized objects, images, or even integers.
# Basic operations
SET user:1:name "Alice Johnson"
GET user:1:name # "Alice Johnson"
# Set with expiration (TTL)
SET session:abc123 "user_data" EX 3600 # Expires in 1 hour
SET session:abc123 "user_data" PX 60000 # Expires in 60 seconds (ms)
# Set only if not exists (distributed lock pattern)
SET lock:resource1 "owner:server1" NX EX 30
# Returns OK if lock acquired, nil if already locked
# Atomic increment/decrement (counters)
SET page:views 0
INCR page:views # 1
INCR page:views # 2
INCRBY page:views 10 # 12
DECR page:views # 11
# Multiple get/set (fewer round trips)
MSET user:1:name "Alice" user:1:email "alice@example.com" user:1:role "admin"
MGET user:1:name user:1:email user:1:role
# ["Alice", "alice@example.com", "admin"]
Hashes
Hashes are maps of field-value pairs, which makes them perfect for representing objects. They're more memory-efficient than storing each field as a separate string key.
# Store a user object as a hash
HSET user:1 name "Alice Johnson" email "alice@example.com" role "admin" login_count 42
# Get individual fields
HGET user:1 name # "Alice Johnson"
HGET user:1 email # "alice@example.com"
# Get all fields
HGETALL user:1
# {name: "Alice Johnson", email: "alice@example.com",
# role: "admin", login_count: "42"}
# Get multiple fields at once
HMGET user:1 name email # ["Alice Johnson", "alice@example.com"]
# Atomic increment on a hash field
HINCRBY user:1 login_count 1 # 43
# Check if field exists
HEXISTS user:1 phone # 0 (false)
# Delete a field
HDEL user:1 role
# Get number of fields
HLEN user:1 # 3
Lists
Redis Lists are linked lists of strings. They support push/pop from both ends, making them suitable for queues, stacks, and activity feeds.
# Push to list (left = front, right = back)
LPUSH notifications:user1 "New message from Bob"
LPUSH notifications:user1 "Your order shipped"
RPUSH activity:feed "User Alice logged in"
# Pop from list
LPOP notifications:user1 # "Your order shipped"
RPOP activity:feed # "User Alice logged in"
# Blocking pop (waits for items; perfect for job queues)
BLPOP queue:emails 30 # Wait up to 30 seconds for an item
# Get range (pagination)
LRANGE notifications:user1 0 9 # First 10 items
LRANGE notifications:user1 0 -1 # All items
# Get list length
LLEN notifications:user1
# Trim list (keep only recent items)
LTRIM activity:feed 0 99 # Keep only the latest 100 items
Sets
Sets are unordered collections of unique strings. They support set operations (union, intersection, difference), which makes them useful for tagging, unique visitor tracking, and social features.
# Add members to a set
SADD tags:post:1 "redis" "database" "caching" "performance"
SADD tags:post:2 "redis" "docker" "devops"
# Check membership
SISMEMBER tags:post:1 "redis" # 1 (true)
SISMEMBER tags:post:1 "python" # 0 (false)
# Get all members
SMEMBERS tags:post:1
# {"redis", "database", "caching", "performance"}
# Set operations
SINTER tags:post:1 tags:post:2 # {"redis"} (common tags)
SUNION tags:post:1 tags:post:2 # All unique tags from both posts
SDIFF tags:post:1 tags:post:2 # Tags in post 1 but not post 2
# Count unique visitors per day
SADD visitors:2026-03-12 "user:1" "user:2" "user:3" "user:1"
SCARD visitors:2026-03-12 # 3 (unique count)
Sorted Sets
Sorted Sets are like Sets, but each member has a numeric score. Members are automatically sorted by score. This makes them perfect for leaderboards, ranking systems, rate limiters, and time-series data.
# Leaderboard
ZADD leaderboard 1500 "player:alice"
ZADD leaderboard 2300 "player:bob"
ZADD leaderboard 1800 "player:charlie"
ZADD leaderboard 2100 "player:diana"
# Top 3 players (highest score first)
ZREVRANGE leaderboard 0 2 WITHSCORES
# ["player:bob", "2300", "player:diana", "2100", "player:charlie", "1800"]
# Player's rank (0-indexed)
ZREVRANK leaderboard "player:alice" # 3 (4th place)
# Player's score
ZSCORE leaderboard "player:alice" # "1500"
# Increment score
ZINCRBY leaderboard 500 "player:alice" # 2000
# Count players with score between 1500 and 2500
ZCOUNT leaderboard 1500 2500 # 4
# Remove players below a score threshold
ZREMRANGEBYSCORE leaderboard -inf 1000
Chapter 2: Caching Patterns
Cache-Aside (Lazy Loading)
The most common caching pattern. The application checks the cache first. On a cache miss, it loads data from the database, stores it in the cache, and returns it.
# Python implementation
import json

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def get_user_profile(user_id: int) -> dict | None:
    cache_key = f"user:profile:{user_id}"
    # Try cache first
    cached = r.get(cache_key)
    if cached:
        return json.loads(cached)
    # Cache miss: load from the database
    user = db.query("SELECT * FROM users WHERE id = %s", (user_id,))
    if not user:
        return None
    profile = {
        "id": user.id,
        "name": user.name,
        "email": user.email,
        "plan": user.plan,
    }
    # Store in cache with a 10-minute TTL
    r.setex(cache_key, 600, json.dumps(profile))
    return profile

# Invalidation: delete the cache entry when user data changes
def update_user_profile(user_id: int, data: dict):
    db.execute("UPDATE users SET ... WHERE id = %s", (user_id,))
    r.delete(f"user:profile:{user_id}")  # Invalidate cache
Write-Through Cache
Every write goes to both the cache and the database simultaneously. This ensures the cache is always up-to-date but adds latency to write operations.
# Python write-through pattern
def save_user_profile(user_id: int, data: dict) -> dict:
    # Write to the database
    db.execute(
        "UPDATE users SET name=%s, email=%s WHERE id=%s",
        (data["name"], data["email"], user_id)
    )
    # Write to the cache (same data, same time)
    cache_key = f"user:profile:{user_id}"
    r.setex(cache_key, 600, json.dumps(data))
    return data
Cache Stampede Prevention
When a popular cache key expires, hundreds of concurrent requests all experience a cache miss simultaneously and all query the database at once. This is called a cache stampede (or thundering herd).
# Solution 1: Probabilistic early expiration
import random
import time

def get_with_early_expiration(key, ttl, fetch_func):
    cached = r.get(key)
    remaining_ttl = r.ttl(key)
    # If the cache exists but is about to expire, probabilistically refresh
    if cached and remaining_ttl > 0:
        # Refresh probability rises as the TTL approaches 0
        early_expiry_probability = max(0, 1 - (remaining_ttl / ttl))
        if random.random() > early_expiry_probability:
            return json.loads(cached)
    # Cache miss, or this request was selected for an early refresh.
    # Use a lock so only one process rebuilds the value.
    lock_key = f"lock:{key}"
    if r.set(lock_key, "1", nx=True, ex=10):  # 10-second lock
        try:
            data = fetch_func()
            r.setex(key, ttl, json.dumps(data))
            return data
        finally:
            r.delete(lock_key)
    else:
        # Another process is refreshing; serve stale data if we have it
        if cached:
            return json.loads(cached)
        # No stale data available: wait briefly and retry
        time.sleep(0.1)
        return get_with_early_expiration(key, ttl, fetch_func)
Chapter 3: Redis as a Message Broker - Pub/Sub
Redis Pub/Sub provides fire-and-forget messaging between publishers and subscribers: messages are delivered only to clients subscribed at that moment and are lost otherwise, so reach for Streams (Chapter 4) when you need durability. Pub/Sub is perfect for real-time notifications, chat systems, live updates, and inter-service communication.
# Publisher (sends messages)
PUBLISH notifications:user1 '{"type":"message","from":"bob","text":"Hello!"}'
PUBLISH channel:general '{"type":"announcement","text":"Server maintenance at 3 PM"}'
# Subscriber (receives messages)
SUBSCRIBE notifications:user1
# Blocks and prints every message published to this channel
# Pattern subscription (subscribe to multiple channels)
PSUBSCRIBE notifications:*
# Receives messages from ALL notification channels
# Python Pub/Sub implementation
import json
import threading

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

# Publisher
def send_notification(user_id: str, notification: dict):
    channel = f"notifications:{user_id}"
    r.publish(channel, json.dumps(notification))

# Subscriber (runs in a separate thread or process)
def listen_for_notifications(user_id: str, callback):
    pubsub = r.pubsub()
    pubsub.subscribe(f"notifications:{user_id}")
    for message in pubsub.listen():
        if message['type'] == 'message':
            data = json.loads(message['data'])
            callback(data)

# Usage
def handle_notification(data):
    print(f"Received: {data}")

# Start the listener in the background
thread = threading.Thread(
    target=listen_for_notifications,
    args=("user1", handle_notification),
    daemon=True
)
thread.start()

# Send a notification
send_notification("user1", {
    "type": "message",
    "from": "bob",
    "text": "Hello!"
})
Chapter 4: Redis Streams - Event Sourcing and Log Processing
Redis Streams, introduced in Redis 5.0, provide a log-like data structure that supports consumer groups, acknowledgment, and replay. Streams can replace Kafka for many use cases where you need durable, ordered message processing.
# Add events to a stream
XADD events:orders * action "created" order_id "ord_123" amount "99.99" customer "user:1"
XADD events:orders * action "paid" order_id "ord_123" payment_id "pay_456"
XADD events:orders * action "shipped" order_id "ord_123" tracking "TRK789"
# Read all events
XRANGE events:orders - +
# Read events from a specific time
XRANGE events:orders 1709251200000-0 +
# Read the latest N events
XREVRANGE events:orders + - COUNT 10
# Consumer Groups: distribute processing across workers
XGROUP CREATE events:orders order-processors $ MKSTREAM
# "$" = deliver only events added after the group was created
# MKSTREAM = create the stream if it doesn't exist yet
# Consumer 1 reads (blocks waiting for new events)
XREADGROUP GROUP order-processors consumer-1 COUNT 1 BLOCK 5000 STREAMS events:orders >
# Consumer 2 reads (gets DIFFERENT events; the group load-balances)
XREADGROUP GROUP order-processors consumer-2 COUNT 1 BLOCK 5000 STREAMS events:orders >
# Acknowledge processing (mark as done)
XACK events:orders order-processors 1709251200000-0
# Check pending (unacknowledged) events
XPENDING events:orders order-processors
# Claim abandoned events (when a consumer crashes)
XCLAIM events:orders order-processors consumer-2 60000 1709251200000-0
# Python Redis Streams consumer
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

# Create the consumer group (idempotent)
try:
    r.xgroup_create('events:orders', 'processors', id='0', mkstream=True)
except redis.exceptions.ResponseError:
    pass  # Group already exists

# Consumer loop
consumer_name = 'worker-1'
while True:
    # Read new events (block for up to 5 seconds)
    events = r.xreadgroup(
        'processors', consumer_name,
        {'events:orders': '>'},
        count=10, block=5000
    )
    if not events:
        continue
    for stream, messages in events:
        for msg_id, data in messages:
            try:
                process_order_event(data)  # your business logic
                # Acknowledge successful processing
                r.xack('events:orders', 'processors', msg_id)
            except Exception as e:
                print(f"Failed to process {msg_id}: {e}")
                # Not acknowledged: stays pending and can be claimed/retried
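Stream entry IDs have the form `<milliseconds>-<sequence>` and Redis orders them first by timestamp, then by sequence number. When you need to compare or sort IDs client-side (for example, while reconciling pending entries), a small helper does the trick. This is an illustrative sketch; `parse_stream_id` is not part of redis-py:

```python
def parse_stream_id(stream_id: str) -> tuple:
    """Split a stream ID like '1709251200000-0' into (ms, seq) so that
    ordinary tuple comparison matches Redis's ordering of entries."""
    ms, _, seq = stream_id.partition("-")
    return (int(ms), int(seq or "0"))  # a bare '1709251200000' means seq 0

# Tuples sort the same way Redis sorts the IDs
ids = ["1709251200001-0", "1709251200000-5", "1709251200000-2"]
ids.sort(key=parse_stream_id)
```

A plain string sort would be wrong here (e.g. "10-0" sorts before "9-0" lexically), which is why the numeric split matters.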
Chapter 5: Rate Limiting with Redis
# Sliding window rate limiter using Sorted Sets
import time

def is_rate_limited(user_id: str, limit: int = 100, window: int = 60) -> bool:
    """
    Allow 'limit' requests per 'window' seconds per user.
    Uses a sliding window for accurate rate limiting.
    """
    key = f"ratelimit:{user_id}"
    now = time.time()
    window_start = now - window
    pipe = r.pipeline()
    # Remove entries that fell outside the window
    pipe.zremrangebyscore(key, '-inf', window_start)
    # Count requests currently in the window
    pipe.zcard(key)
    # Record the current request
    pipe.zadd(key, {str(now): now})
    # Set an expiry on the key so idle users get cleaned up
    pipe.expire(key, window)
    results = pipe.execute()
    request_count = results[1]  # count BEFORE this request was added
    if request_count >= limit:
        # Note: the rejected request was still recorded above, so clients
        # that keep hammering while limited stay limited (a deliberate penalty)
        return True  # Rate limited
    return False  # Allowed

# Usage in middleware
def rate_limit_middleware(request):
    user_id = request.user.id or request.remote_addr
    if is_rate_limited(user_id, limit=100, window=60):
        return Response(
            {"error": "Rate limited. Try again later."},
            status=429,
            headers={"Retry-After": "60"}
        )
    return process_request(request)
Chapter 6: Redis Lua Scripting - Atomic Operations
Redis executes Lua scripts atomically; no other command can run while a Lua script is executing. This is essential for operations that require multiple commands to be atomic (like check-and-set, compare-and-swap, or complex conditional logic).
# Lua script: Atomic "deduct balance if sufficient"
# This cannot be done safely with separate GET and SET commands
# because another request could modify the balance between them.
EVAL "
local balance = tonumber(redis.call('GET', KEYS[1]) or '0')
local amount = tonumber(ARGV[1])
if balance >= amount then
redis.call('DECRBY', KEYS[1], amount)
return 1 -- Success
else
return 0 -- Insufficient balance
end
" 1 user:1:balance 50
# Load script for reuse (avoids sending the full script each time)
SCRIPT LOAD "local balance = tonumber(redis.call('GET', KEYS[1]) or '0') ..."
# Returns: SHA1 hash
EVALSHA "sha1_hash_here" 1 user:1:balance 50
Chapter 7: Redis Cluster and Sentinel - High Availability
Redis Sentinel (Automatic Failover)
Redis Sentinel monitors Redis instances and automatically promotes a replica to master if the master fails. This provides high availability without manual intervention.
# sentinel.conf
port 26379
sentinel monitor mymaster 10.0.1.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
# "2" = quorum: number of Sentinels that must agree
# a master is down before failover triggers
# Run three Sentinel instances for reliability
# (on different servers)
Redis Cluster (Horizontal Scaling)
# Redis Cluster distributes data across multiple nodes
# using hash slots (16384 total slots)
# Create a 6-node cluster (3 masters + 3 replicas)
redis-cli --cluster create 10.0.1.10:6379 10.0.1.11:6379 10.0.1.12:6379 10.0.1.13:6379 10.0.1.14:6379 10.0.1.15:6379 --cluster-replicas 1
# Check cluster status
redis-cli --cluster info 10.0.1.10:6379
# Add a new node
redis-cli --cluster add-node 10.0.1.16:6379 10.0.1.10:6379
# Reshard (redistribute hash slots)
redis-cli --cluster reshard 10.0.1.10:6379
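Cluster key placement is easy to reproduce client-side: the slot is CRC16(key) mod 16384, where CRC16 is the XMODEM variant, and a `{...}` hash tag restricts hashing to the tagged substring so related keys land on the same slot. A pure-Python sketch of the algorithm (the function names are illustrative, not from any client library):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (polynomial 0x1021, initial value 0), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Replicate CLUSTER KEYSLOT: hash only the {tag} when one is present."""
    raw = key.encode()
    start = raw.find(b"{")
    if start != -1:
        end = raw.find(b"}", start + 1)
        if end != -1 and end > start + 1:  # only a non-empty tag counts
            raw = raw[start + 1:end]
    return crc16_xmodem(raw) % 16384
```

Keys that share a hash tag, such as `{user:1}:profile` and `{user:1}:orders`, map to the same slot, which is what makes multi-key commands on them possible in a cluster.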
Chapter 8: Production Operations
Memory Management
# Check memory usage
INFO memory
# used_memory: 1073741824 (1 GB)
# used_memory_peak: 2147483648 (2 GB peak)
# maxmemory: 4294967296 (4 GB limit)
# Set memory limit
CONFIG SET maxmemory 4gb
# Set eviction policy (what happens when memory is full)
CONFIG SET maxmemory-policy allkeys-lru
# Options:
# noeviction   - Return errors on writes (safest, but breaks your app)
# allkeys-lru  - Evict least recently used keys (best for caches)
# allkeys-lfu  - Evict least frequently used keys
# volatile-lru - Evict least recently used keys among those with a TTL set
# volatile-ttl - Evict the keys with the shortest remaining TTL
# Analyze key sizes (find memory hogs)
redis-cli --bigkeys
# Scans the database and reports the largest keys of each type
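To build intuition for what allkeys-lru does, here is a toy exact-LRU cache in pure Python. Treat it as a mental model only: Redis approximates LRU by sampling a handful of keys per eviction rather than maintaining an exact ordering, and the class name here is purely illustrative.

```python
from collections import OrderedDict

class ToyLRUCache:
    """Exact LRU eviction; Redis approximates this by sampling keys."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()  # oldest (least recently used) first

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)  # updating counts as a use
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

cache = ToyLRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # touch "a", so "b" is now the least recently used
cache.set("c", 3)  # over capacity: "b" is evicted
```

The allkeys-lfu policy differs only in the eviction criterion: it tracks an access-frequency counter per key instead of recency.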
Persistence Configuration
# redis.conf - Persistence settings
# RDB snapshots (periodic full dump)
save 900 1 # Save if at least 1 key changed in 900 seconds
save 300 10 # Save if at least 10 keys changed in 300 seconds
save 60 10000 # Save if at least 10000 keys changed in 60 seconds
# AOF (Append Only File) - logs every write operation
appendonly yes
appendfsync everysec # Sync to disk every second (good balance)
# appendfsync always # Sync after every write (slowest, safest)
# appendfsync no # Let OS decide (fastest, risk of data loss)
# AOF rewrite (compact the AOF file periodically)
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# For PURE CACHE use cases (no persistence needed):
# save ""
# appendonly no
Monitoring and Alerting
# Key metrics to monitor:
# 1. Memory usage (alert if > 80% of maxmemory)
redis-cli INFO memory | grep used_memory_human
# 2. Connected clients (alert if approaching maxclients)
redis-cli INFO clients | grep connected_clients
# 3. Keyspace hit rate (should be > 90% for cache use cases)
redis-cli INFO stats | grep keyspace
# keyspace_hits: 1000000
# keyspace_misses: 50000
# Hit rate = hits / (hits + misses) = 95.2%
# 4. Slow queries
redis-cli SLOWLOG GET 10 # Get the 10 slowest recent commands
# 5. Replication lag (for replicated setups)
redis-cli INFO replication | grep master_repl_offset
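The hit-rate arithmetic above (hits / (hits + misses)) is easy to automate against raw INFO output, since each line is a `field:value` pair. A minimal parser sketch; the function name is illustrative:

```python
def keyspace_hit_rate(info_stats: str) -> float:
    """Parse 'INFO stats' output and return the keyspace hit rate (0.0 to 1.0)."""
    stats = {}
    for line in info_stats.splitlines():
        line = line.strip()
        if ":" in line and not line.startswith("#"):  # skip section headers
            field, _, value = line.partition(":")
            stats[field] = value
    hits = int(stats.get("keyspace_hits", 0))
    misses = int(stats.get("keyspace_misses", 0))
    total = hits + misses
    return hits / total if total else 0.0

sample = """# Stats
keyspace_hits:1000000
keyspace_misses:50000"""
rate = keyspace_hit_rate(sample)  # matches the 95.2% figure worked above
```

Wire this into your monitoring agent and alert when the rate drops below your cache's expected baseline.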
# Prometheus exporter for Redis
docker run -d --name redis-exporter \
    -p 9121:9121 \
    oliver006/redis_exporter \
    --redis.addr redis://localhost:6379
Redis is one of those technologies that rewards deep knowledge. Every data structure has optimal use cases, and combining them creates solutions that would otherwise require multiple separate systems. A single Redis instance can simultaneously serve as your cache, session store, rate limiter, job queue, real-time messaging layer, and analytics engine, all with sub-millisecond latency.
ZeonEdge provides Redis architecture consulting, cluster deployment, and performance optimization. Whether you need to set up Redis for a simple caching layer or design a multi-region Redis Cluster for a high-throughput application, our infrastructure engineers have the experience to get it right. Contact our data infrastructure team for Redis architecture guidance.
Emily Watson
Technical Writer and Developer Advocate who simplifies complex technology for everyday readers.