Overview
Use Postgres by default. DynamoDB is a specialized tool for a specific problem: single-key reads and writes at any scale with predictable single-digit-millisecond latency and near-zero operational overhead. The moment you need a second lookup (query by email AND by user ID), DynamoDB requires a Global Secondary Index (GSI) and the data model becomes complex. Postgres handles dozens of query patterns naturally with indexes and joins. The rule is: default to Postgres; reach for DynamoDB only when you have proven that the access pattern is key-value, the scale is genuinely large (tens of millions of items, millions of requests per second), and you are already on AWS. See postgres for the Postgres rule set and aurora-vs-rds-postgres for the Aurora vs RDS choice within Postgres on AWS.
When Postgres wins
Postgres is the right choice for virtually every OLTP application.
- Flexible access patterns: a B-tree index on `email`, a partial index on `status = 'active'`, a GIN index on `tags`, and a full-text index on `description` can all exist on the same table. DynamoDB requires modeling each access pattern as a GSI at table creation time.
- Joins: a single SQL query can join users, orders, line items, and products. In DynamoDB, each relationship requires either denormalization (store the data redundantly) or multiple round-trip gets. Neither is free.
- Transactions: Postgres transactions are cheap and the default. DynamoDB transactions (`TransactWriteItems`) work but add latency and are limited to 100 items per request.
- Ad-hoc queries: Postgres accepts any `SELECT` statement. DynamoDB requires a predefined key structure; a `Scan` reads every item in the table and is expensive.
- Migrations and schema evolution: `ALTER TABLE` plus a migration tool (see migrations). DynamoDB has no schema; evolving your data model requires dual-writes and backfills managed in application code.
- Operational simplicity at small-to-medium scale: RDS or Aurora Postgres is fully managed. DynamoDB is also managed, but its data-modeling complexity adds its own overhead.
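The flexibility claims above can be sketched with the stdlib `sqlite3` driver standing in for Postgres (B-tree indexes, partial indexes, joins, and transactions behave the same way; GIN and `tsvector` indexes are Postgres-only, so they are omitted). Table and column names are illustrative:

```python
import sqlite3

# SQLite as a stand-in for Postgres: several independent access patterns
# coexist on one table, and each index can be added after the fact.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        id     INTEGER PRIMARY KEY,
        email  TEXT NOT NULL,
        status TEXT NOT NULL
    );
    CREATE TABLE orders (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        total   REAL NOT NULL
    );
    -- Two access patterns the table was not designed around up front:
    CREATE UNIQUE INDEX users_email  ON users(email);
    CREATE INDEX users_active ON users(id) WHERE status = 'active';
""")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com', 'active')")
conn.execute("INSERT INTO orders VALUES (10, 1, 99.5)")

# A join the data model never had to anticipate at creation time:
row = conn.execute("""
    SELECT u.email, o.total
    FROM users u JOIN orders o ON o.user_id = u.id
    WHERE u.status = 'active'
""").fetchone()
print(row)  # ('a@example.com', 99.5)
```

In DynamoDB, the equivalent of each of these lookups would be a GSI decided at design time, and the join would be either denormalized data or two sequential gets.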
When DynamoDB wins
DynamoDB is the right choice in a narrow, specific set of conditions.
- All three must be true: the primary access pattern is by a single key (user ID, session ID, device ID), the scale requires horizontal writes beyond what a single Postgres primary handles, and you are already on AWS.
- Session storage at massive scale: DynamoDB with TTL is the standard pattern for storing billions of sessions that expire automatically. Redis is cheaper for smaller scales; DynamoDB handles internet-scale without sharding concerns.
- IoT telemetry and event logs: append-only writes keyed by device ID plus timestamp; no joins; high write throughput. DynamoDB's write capacity scales without operational intervention.
- Gaming leaderboards and user state that fits a single-entity model: a game stores all player attributes in one DynamoDB item; reads and writes are single-key operations at any concurrency.
- Serverless workloads with unpredictable traffic: DynamoDB on-demand capacity scales to zero and bursts without provisioning. Aurora Serverless v2 is the Postgres alternative, but DynamoDB has a longer track record.
- AWS ecosystem deep integration: DynamoDB Streams feed Lambda, Kinesis, and EventBridge natively; the change-data-capture pattern is built in.
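The session-with-TTL pattern reduces to writing an epoch-seconds expiry into a designated numeric attribute; DynamoDB deletes the item some time after that instant passes. A minimal sketch of the item shape — the attribute name `expires_at`, the table name, and the 30-minute lifetime are assumptions, not a fixed convention:

```python
import time
import uuid

SESSION_TTL_SECONDS = 30 * 60  # assumed 30-minute session lifetime

def build_session_item(user_id, now=None):
    """Build a session item whose TTL attribute is epoch seconds.
    DynamoDB's TTL feature expires the item after `expires_at` passes."""
    now = time.time() if now is None else now
    return {
        "session_id": str(uuid.uuid4()),               # partition key
        "user_id": user_id,
        "expires_at": int(now) + SESSION_TTL_SECONDS,  # TTL attribute
    }

item = build_session_item("user-123")
# With boto3 this would be written as:
#   boto3.resource("dynamodb").Table("sessions").put_item(Item=item)
```

No cron job, partial index, or deletion batch is needed on the DynamoDB side; the Postgres equivalent is a `pg_cron` sweep, as the table below notes.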
Trade-offs at a glance
| Dimension | Postgres | DynamoDB |
|---|---|---|
| Query flexibility | Any SQL; ad-hoc indexes | Primary key + GSIs; no ad-hoc scans |
| Joins | Native, cheap | Requires denormalization or multiple gets |
| Transactions | Full ACID; cheap | TransactWriteItems; limited to 100 items |
| Read latency | Low ms on indexed queries | Single-digit ms on key queries |
| Write throughput | Vertical scale + replicas | Horizontal; unlimited with on-demand |
| Schema | Defined; migrations required | Schemaless; no migrations |
| Access pattern evolution | Add index; alter table | Add GSI (cannot change partition key) |
| TTL / expiry | pg_cron + partial index | Built-in TTL attribute |
| Cost model | Instance + storage | Per read/write unit + storage |
| Multi-region | Aurora Global Database | DynamoDB Global Tables |
| Full-text search | tsvector + GIN index | No; use OpenSearch |
| Operational overhead | Low (managed); schema design work | Low (managed); access pattern design work |
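The cost-model row can be made concrete with back-of-envelope arithmetic: Postgres bills for an instance whether or not it is busy, while DynamoDB on-demand bills per request and so tracks traffic. The unit prices below are illustrative placeholders, not current AWS rates:

```python
# Illustrative cost shapes only -- NOT current AWS pricing.
def monthly_cost_postgres(instance_per_hour, storage_gb, gb_price=0.10):
    # Instance runs 24/7 (~730 hours/month) regardless of load.
    return instance_per_hour * 730 + storage_gb * gb_price

def monthly_cost_dynamodb(reads, writes, storage_gb,
                          read_price=0.25e-6, write_price=1.25e-6,
                          gb_price=0.25):
    # Pay-per-request: cost scales with traffic, down to zero at idle.
    return reads * read_price + writes * write_price + storage_gb * gb_price

# e.g. 100M reads, 10M writes, 50 GB comes to roughly $50/month
# at these placeholder rates; an idle month costs only the storage.
```

The crossover depends entirely on traffic shape: steady high throughput favors a provisioned Postgres instance, while spiky or idle-heavy workloads favor per-request billing.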
Migration cost
Postgres-to-DynamoDB requires a data model redesign, not just a data transfer.
- Postgres to DynamoDB: you cannot lift-and-shift. Analyze every query the application runs; group them by access pattern; design partition keys and GSIs to serve each pattern. Rewrite data access code to use the AWS SDK. Plan two to four engineer-months for a non-trivial application.
- DynamoDB to Postgres: export items via DynamoDB export to S3, then import into Postgres with `COPY`. The hard part is designing a relational schema from the denormalized DynamoDB structure and rewriting the queries. Plan one to two engineer-months per major entity type.
- Cheaper alternative to migrating: add a Postgres database (or an Aurora reader endpoint) to handle the complex query patterns while keeping DynamoDB for the high-throughput key access path. Dual-write to both stores.
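The "design partition keys and GSIs to serve each pattern" step can be sketched as item-shaping helpers. The `USER#<id>` key format and the `gsi1pk` attribute name are illustrative single-table-design conventions, not anything DynamoDB requires:

```python
# Two access patterns, each served by a dedicated key:
#   1. fetch user by id    -> table primary key (GetItem)
#   2. fetch user by email -> a GSI whose partition key is the email
def user_item(user_id, email, name):
    """Shape a user item so both lookups are single-key reads."""
    return {
        "pk": f"USER#{user_id}",  # table partition key (assumed format)
        "sk": "PROFILE",          # sort key; fixed value for the profile item
        "gsi1pk": email,          # partition key of the email-lookup GSI
        "email": email,
        "name": name,
    }

def key_for_user(user_id):
    """Key for the hot single-key read path (GetItem)."""
    return {"pk": f"USER#{user_id}", "sk": "PROFILE"}
```

Every query the application will ever run must be enumerated and given a key like this up front; a pattern discovered later means a new GSI and a backfill, which is the core of the two-to-four engineer-month estimate above.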
Recommendation
- New SaaS, B2B, or B2C product: Postgres. Every query pattern is served; migrations are straightforward; hiring is easy.
- Session storage for a high-traffic consumer app (millions of concurrent users): DynamoDB with TTL, or Redis if sub-millisecond matters and working set fits in memory.
- Existing DynamoDB table that has grown to require joins: add Postgres for the relational workload; keep DynamoDB for the key-value path. Avoid migrating the key-value hot path.
- Serverless function needing a data store with no connection pool: DynamoDB (HTTP API, no persistent connections). Postgres with a connection pooler (PgBouncer) is the alternative; see postgres.
- Multi-region active-active writes: DynamoDB Global Tables is the simplest path. Aurora Global Database is active-passive only.
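The no-connection-pool point follows from DynamoDB's API shape: every operation is an individually signed HTTPS request, so there is nothing to pool or keep warm between function invocations. A sketch of the low-level `GetItem` parameters (table name and key format are hypothetical); with boto3 these would be passed as `boto3.client("dynamodb").get_item(**params)`:

```python
# Each call is a stateless HTTPS request -- no persistent connection,
# unlike a Postgres session that must be pooled (e.g. via PgBouncer).
def get_item_params(table, user_id):
    return {
        "TableName": table,
        # Low-level API uses typed attribute values ({"S": ...} = string):
        "Key": {"pk": {"S": f"USER#{user_id}"}},
        "ConsistentRead": False,  # eventually consistent read: half the read cost
    }

params = get_item_params("users", "42")
```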