Overview

Migrations ship schema changes to production. Treat them as code: idempotent, append-only, version-controlled with the app, and applied by CI. This page covers the contract, the zero-downtime patterns, and the framework integrations for Prisma and the rest of the stack.

Write idempotent, up-only migrations

Forward scripts only. Once a migration lands in main, it is immutable; fixes go in a new migration.

  • Use CREATE TABLE IF NOT EXISTS, ADD COLUMN IF NOT EXISTS, and DROP INDEX IF EXISTS. A re-run on a partially applied database must be a no-op.
  • Skip down scripts in production. A rollback is a forward migration that restores the old shape, not a down script invoked under stress.
  • Name files with a sortable timestamp prefix: 20260514_120000_add_orders_idempotency_key.sql. Application order must be deterministic across machines.
  • Edit a shipped migration only to fix a syntax error nobody has run yet. If any environment has applied it, write a follow-up.
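
The re-run guarantee can be checked directly: apply the same migration twice and the second run must be a no-op. A minimal sketch, using sqlite3 in place of Postgres (table and index names are illustrative; note that ADD COLUMN IF NOT EXISTS is Postgres-only):

```python
import sqlite3

# Every statement is guarded, so re-running against a partially
# applied database succeeds without error.
MIGRATION = """
CREATE TABLE IF NOT EXISTS orders (
    id INTEGER PRIMARY KEY,
    idempotency_key TEXT
);
CREATE INDEX IF NOT EXISTS idx_orders_idempotency_key
    ON orders (idempotency_key);
DROP INDEX IF EXISTS idx_orders_legacy;
"""

def apply(conn: sqlite3.Connection) -> None:
    conn.executescript(MIGRATION)

conn = sqlite3.connect(":memory:")
apply(conn)
apply(conn)  # second run must be a no-op, not an error
tables = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
print("orders" in tables)  # True
```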

Ship migrations with the app

Migrations live in the same repo and the same PR as the code that depends on them. A schema change without the code is dead weight; code without the schema crashes on boot.

  • Run prisma migrate deploy, alembic upgrade head, or sqlx migrate run in CI as a pre-deploy step, before the new binary takes traffic.
  • Block deploys when the database is on a newer version than the artifact. Mismatched schema and code is the first symptom of a bad rollback.
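
Both rules above reduce to a small pre-deploy loop: apply pending migrations in sorted filename order, and refuse to proceed if the database has versions the artifact has never heard of. A minimal sketch, assuming migrations are tracked in a schema_migrations table (names and the in-memory sqlite database are illustrative; real tools like prisma migrate deploy implement this for you):

```python
import sqlite3

def deploy(conn: sqlite3.Connection, artifact_migrations: dict[str, str]) -> None:
    """Apply the artifact's pending migrations; block if the DB is ahead."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    applied = {r[0] for r in conn.execute("SELECT version FROM schema_migrations")}
    unknown = applied - artifact_migrations.keys()
    if unknown:
        # The database ran migrations this artifact does not ship: bad rollback.
        raise RuntimeError(f"database is ahead of artifact: {sorted(unknown)}")
    for version in sorted(artifact_migrations):  # deterministic timestamp order
        if version not in applied:
            conn.executescript(artifact_migrations[version])
            conn.execute("INSERT INTO schema_migrations VALUES (?)", (version,))

conn = sqlite3.connect(":memory:")
migrations = {
    "20260514_120000_add_orders.sql":
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, idempotency_key TEXT);",
}
deploy(conn, migrations)
deploy(conn, migrations)  # re-run is a no-op
try:
    deploy(conn, {})      # older artifact: database is ahead, block the deploy
except RuntimeError as err:
    print(err)
```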

Use the four-step rename for zero-downtime changes

A rename or a type change in place locks the table and breaks old app instances mid-deploy. Split it across deploys.

  1. Add the new column. Nullable, no default that touches every row. ALTER TABLE users ADD COLUMN email_address TEXT;
  2. Dual-write. Application code writes both email and email_address on every update. Ship.
  3. Backfill. A separate batched job copies email into email_address for old rows. Verify with a row-count check.
  4. Drop the old column once readers are gone. ALTER TABLE users DROP COLUMN email; Ship.

The same four steps apply to changing a column type, splitting a column, or replacing a foreign-key target.
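
Steps 1 and 2 can be sketched end to end. This is a minimal illustration using sqlite3 in place of Postgres: the dual-write lives in application code, so every row touched after that deploy is already migrated before the backfill starts (the update_email helper is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# Step 1: add the new column. Nullable, no default.
conn.execute("ALTER TABLE users ADD COLUMN email_address TEXT")

# Step 2: application code dual-writes both columns on every update.
def update_email(conn: sqlite3.Connection, user_id: int, value: str) -> None:
    conn.execute(
        "UPDATE users SET email = ?, email_address = ? WHERE id = ?",
        (value, value, user_id),
    )

conn.execute("INSERT INTO users (id, email) VALUES (1, 'a@example.com')")
update_email(conn, 1, "a@example.com")
row = conn.execute(
    "SELECT email, email_address FROM users WHERE id = 1").fetchone()
print(row)  # ('a@example.com', 'a@example.com')
```

Steps 3 and 4 then only have to handle rows that were never updated between the two deploys.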

Never wrap a huge migration in a single transaction

Postgres DDL takes an ACCESS EXCLUSIVE lock, and a transaction holds that lock until commit. On a 100M-row table a single transactional migration blocks every reader and writer until the last statement finishes. Partition huge tables first when you can, so DDL only touches one slice at a time.

  • Use CREATE INDEX CONCURRENTLY for hot tables (see postgres-indexes). It cannot run inside a transaction, so use your tool's escape hatch for non-transactional migrations (Alembic exposes op.get_context().autocommit_block(); check your tool's docs for the equivalent).
  • Add NOT NULL in two steps: add the column nullable, backfill, then ALTER COLUMN ... SET NOT NULL once the data is clean.
  • Split heavy DDL across multiple migrations so one bad statement does not block the queue.

Run backfills as separate jobs

A backfill that scans every row does not belong inside the migration. Migrations should finish in seconds; jobs can run for hours.

  • The migration adds the column and the index. A separate worker iterates in batches of 1k-10k rows, sleeps between batches, and is resumable.
  • Track progress in a backfill_state table. A backfill that cannot resume cannot be retried safely.
  • Verify completeness with a count of unmigrated rows, e.g. SELECT count(*) FROM users WHERE email_address IS NULL, before the cleanup migration runs.
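
The three bullets above fit in one small worker. A minimal sketch, using sqlite3 in place of Postgres: progress lives in a backfill_state table keyed by job name, each batch commits independently, and a crashed worker resumes from the last committed id (names and the batch size are illustrative):

```python
import sqlite3
import time

BATCH = 1000

def run_backfill(conn: sqlite3.Connection, job: str = "users_email_address") -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS backfill_state (job TEXT PRIMARY KEY, last_id INTEGER)")
    conn.execute("INSERT OR IGNORE INTO backfill_state VALUES (?, 0)", (job,))
    while True:
        (last_id,) = conn.execute(
            "SELECT last_id FROM backfill_state WHERE job = ?", (job,)).fetchone()
        ids = [r[0] for r in conn.execute(
            "SELECT id FROM users WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, BATCH))]
        if not ids:
            break
        conn.execute(
            "UPDATE users SET email_address = email "
            "WHERE id BETWEEN ? AND ? AND email_address IS NULL",
            (ids[0], ids[-1]))
        conn.execute(
            "UPDATE backfill_state SET last_id = ? WHERE job = ?", (ids[-1], job))
        conn.commit()   # each batch commits on its own, so a retry resumes here
        time.sleep(0)   # throttle between batches in production

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, email_address TEXT)")
conn.executemany(
    "INSERT INTO users (id, email) VALUES (?, ?)",
    [(i, f"u{i}@example.com") for i in range(1, 2501)])
run_backfill(conn)
remaining = conn.execute(
    "SELECT count(*) FROM users WHERE email_address IS NULL").fetchone()[0]
print(remaining)  # 0
```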

Pick the right framework integration

The contract is the same; the commands differ.

  • Prisma: prisma migrate dev locally to author, prisma migrate deploy in CI. See prisma.
  • Python: Alembic. alembic revision --autogenerate -m "...", then hand-edit. Autogenerate misses renames and CHECK constraints, so review every generated script.
  • Rust: sqlx-cli with sqlx migrate add and sqlx migrate run. SQL files are the source of truth.
  • SQLite: same contract, but ALTER TABLE is limited; use the sqlite copy-and-rename pattern for type changes.

See general-principles for the broader rule on small, reversible deploys.