Overview
Pick Redis. Choose Memcached instead only when all three of these conditions hold: the workload is a pure string-key LRU cache with no structures, the team has existing Memcached operational expertise, and memory efficiency at extreme scale (multi-TB working set across hundreds of nodes) is the hard constraint. For every other workload, Redis is the correct default. Redis ships hashes, lists, sorted sets, streams, Lua scripting, persistence, pub/sub, and Cluster mode; Memcached ships a single slab allocator and a dead-simple protocol. See index for the surrounding backend rule set.
When Redis wins
Redis is the right pick for the vast majority of caching and real-time data workloads.
- Sorted sets enable leaderboards, priority queues, and time-series range queries in O(log N) without a secondary database.
- Hashes store objects without serialization round-trips; field-level `HSET`/`HGET` avoids deserializing an entire JSON blob to update one key.
- Streams (`XADD`, `XREAD`, `XGROUP`) provide a persistent, consumer-group-aware event log that replaces lightweight queues.
- Pub/sub lets services broadcast events without a message broker; Redis Pub/Sub is not durable, but Redis Streams covers the durable case.
- Persistence options: RDB snapshots for disaster recovery, AOF for replay, or both. Memcached loses every key on restart.
- Lua scripting: multi-step operations execute atomically server-side without round trips.
- `SETNX`/`SET ... NX EX` provides a distributed lock primitive; many teams implement distributed rate limiting and idempotency keys on top of it.
- Redis Cluster partitions keys across nodes automatically; Memcached’s consistent hashing is client-side and varies by library.
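The sorted-set leaderboard case above is worth seeing concretely. The sketch below emulates `ZADD`/`ZREVRANGE` semantics in plain Python so it runs without a server; against real Redis the equivalent redis-py calls are `r.zadd("leaderboard", {player: score})` and `r.zrevrange("leaderboard", 0, 1, withscores=True)`. The tie-break rule here (ascending member name) is a simplification for determinism; Redis orders equal scores by member lexicographically.

```python
# Plain-Python stand-in for one Redis sorted set, to show the semantics.
scores: dict[str, float] = {}  # member -> score

def zadd(member: str, score: float) -> None:
    scores[member] = score  # ZADD on an existing member overwrites its score

def zrevrange(start: int, stop: int) -> list[tuple[str, float]]:
    # ZREVRANGE: highest score first; stop index is inclusive, as in Redis.
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return ranked[start:stop + 1]

zadd("alice", 120)
zadd("bob", 95)
zadd("carol", 150)
top2 = zrevrange(0, 1)  # [("carol", 150), ("alice", 120)]
```

The point of the real command is that Redis keeps the set ordered on every write (a skiplist, hence the O(log N) claim above), so reads never pay the full sort this sketch does.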
When Memcached wins
Memcached is the right pick in a narrow slice.
- Pure string-value LRU cache where the only operations are `get`, `set`, `delete`, and `cas`. No lists, no sorted sets, no TTL-on-field-level writes.
- Memory is the binding constraint at multi-TB scale: Memcached’s slab allocator has lower per-key overhead than Redis and wastes less RAM at certain object-size distributions.
- The team operates an existing Memcached fleet and migration cost exceeds the benefit. Keep it until a new feature requires Redis semantics.
- Multi-threaded write path: Memcached is multi-threaded by default; Redis uses a single event loop (multi-threaded I/O in Redis 6+, but the core is single-threaded). At very high write concurrency on many-core machines, Memcached can edge out Redis on raw throughput.
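Because Memcached has no server-side clustering, the consistent hashing mentioned above lives in the client library. A minimal hash-ring sketch, in plain Python (class and method names here are illustrative, not any real client's API):

```python
import bisect
import hashlib

class HashRing:
    """Client-side consistent hashing, as Memcached client libraries do it."""

    def __init__(self, nodes: list[str], vnodes: int = 100) -> None:
        self._ring: list[tuple[int, str]] = []
        for node in nodes:
            for i in range(vnodes):  # virtual nodes smooth the key distribution
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._points = [point for point, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def node_for(self, key: str) -> str:
        # First ring point clockwise from the key's hash, wrapping around.
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["cache-a:11211", "cache-b:11211", "cache-c:11211"])
node = ring.node_for("session:42")  # the same key always maps to the same node
```

This is why the "varies by library" caveat matters: two clients with different hash functions or vnode counts will route the same key to different nodes, so a mixed-library fleet silently halves its hit rate.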
Trade-offs at a glance
| Dimension | Redis | Memcached |
|---|---|---|
| Data structures | Strings, hashes, lists, sets, sorted sets, streams | Strings only |
| Persistence | RDB + AOF | None; in-memory only |
| Pub/sub | Built-in | Not supported |
| Distributed locking | `SET NX EX` pattern | `cas` only; no built-in lock |
| Clustering | Redis Cluster (server-side) | Client-side consistent hashing |
| Multi-threading | I/O threads; single command loop | Fully multi-threaded |
| Memory efficiency | Higher per-key overhead | Lower per-key; slab allocator |
| Lua scripting | Yes | No |
| Replication | Primary-replica, with Sentinel for failover | Third-party tools only |
| Managed cloud | ElastiCache, Upstash, Redis Cloud | ElastiCache |
| License | RSALv2 / SSPL (Redis 7.4+) | BSD |
Migration cost
Memcached to Redis is common and well-supported; the reverse is rare.
- Memcached to Redis: the protocol is incompatible, so the application layer must swap the client library (e.g., `pylibmc` to `redis-py`, `node-memcached` to `ioredis`). Since Memcached is cache-only, there is no persistent data to migrate; the cache warms naturally after the cutover. Plan one engineer-day per service that reads the cache, plus a week of soak testing.
- Redis to Memcached: justified only by memory cost at scale. Flatten all data structures to serialized strings, remove any Lua scripts, strip persistence, and replace pub/sub consumers. Plan two to four engineer-weeks plus potential data model changes.
Recommendation
- New service needing a cache: Redis. The operational surface is the same as Memcached and you avoid a future migration when you need sorted sets or streams.
- Session storage: Redis with TTL per key. `SETEX session:<id> 3600 <payload>` is the canonical pattern.
- Rate limiting and idempotency keys: Redis `INCR`/`EXPIRE` or `SET NX PX`. See index for patterns.
- Real-time leaderboard or feed ranking: Redis sorted set (`ZADD`, `ZREVRANGE`).
- Lightweight event queue before adding Kafka: Redis Streams. See dynamodb-vs-postgres for when to add a dedicated store.
- Existing Memcached cluster under 100 GB working set, no new features planned: keep it until the next service migration.