Overview

Moving a Postgres database to a new host requires planning for logical consistency, downtime, and rollback. This guide uses pg_dump and pg_restore for a logical migration, covers when to prefer physical replication instead, and walks through a cutover checklist. The production configuration that follows a migration is in postgres-prod.

Prerequisites

  • pg_dump and pg_restore installed on the machine performing the migration. These ship with postgresql-client; use the client tools from the newer of the two server versions.
  • Network access from the migration machine to both source and target instances.
  • A target Postgres instance at the same major version or newer. Restoring into a newer major version is supported; restoring into an older one is not. Test cross-major restores before the real run.
  • A maintenance window or a read-replica strategy for near-zero-downtime. See postgres-replication for the replica option.
  • DATABASE_URL for both source and target, stored in environment variables.
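The two connection strings can be exported up front. The hosts, user, and password below are placeholders; substitute real values.

```shell
# Placeholder connection strings -- substitute real hosts and credentials.
export SOURCE_DATABASE_URL="postgres://myapp:CHANGE_ME@old-db.example.com:5432/myapp_prod"
export TARGET_DATABASE_URL="postgres://myapp:CHANGE_ME@new-db.example.com:5432/myapp_prod"

# Quick connectivity check against both (uncomment once credentials are real):
# psql "$SOURCE_DATABASE_URL" -c "SELECT 1;"
# psql "$TARGET_DATABASE_URL" -c "SELECT 1;"
```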

Steps

1. Choose logical vs physical

Use pg_dump (logical) when:

  • The source and target are on different hardware, different providers, or different major versions.
  • You are migrating a single database, not the whole cluster.

Use pg_basebackup (physical) when:

  • You need a byte-for-byte replica, both sides run the same major version, and no schema translation is required.
  • You are setting up streaming replication rather than a one-time move. See postgres-replication.

This guide uses the logical approach; it is the safer default for provider-to-provider migrations.
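As a sketch of the decision rule, the helper below (hypothetical, not a standard tool) compares the major versions reported by `SHOW server_version;` on each side:

```shell
# Decide between logical and physical from the two servers' version strings.
# Feed it the output of: psql "$URL" -tAc 'SHOW server_version;'
choose_method() {
  src_major="${1%%.*}"
  tgt_major="${2%%.*}"
  if [ "$src_major" = "$tgt_major" ]; then
    echo "same major version: physical (pg_basebackup) is an option"
  else
    echo "different major versions: use logical (pg_dump)"
  fi
}

choose_method "15.4" "16.1"
```

Same-version moves still often use logical when crossing providers, since physical replication needs filesystem-level access that managed services rarely expose.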

2. Dump the source database

# Custom format (-Fc) is the most flexible; supports parallel restore.
pg_dump \
  --dbname="$SOURCE_DATABASE_URL" \
  --format=custom \
  --no-owner \
  --no-acl \
  --verbose \
  --file=dump.pgdump
 
# Check the file is non-empty.
ls -lh dump.pgdump

--no-owner and --no-acl drop ownership and permission clauses that reference roles that may not exist on the target. Re-apply grants manually after the restore if needed.
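Before moving the file anywhere, it can be worth listing the dump's table of contents to confirm the expected schemas and tables made it in:

```shell
# The custom format carries a table of contents; list it without restoring.
pg_restore --list dump.pgdump | head -n 40
```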

3. Estimate downtime

Logical restore is sequential by default. Estimate duration before the maintenance window.

# Check dump size.
ls -lh dump.pgdump
 
# Estimate restore time: roughly 10–20 GB per hour on a single CPU.
# Use -j to parallelize.

For large databases (over 50 GB), prefer near-zero-downtime: set up streaming replication with pg_basebackup, cut over by pointing the application at the replica, then promote. If logical is still preferred, schedule an overnight window.
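The back-of-the-envelope arithmetic can be wrapped in a small helper. The 15 GB/hour figure is the midpoint of the range above, and linear scaling with workers is an optimistic assumption; treat the result as a floor, not a promise.

```shell
# Estimate restore hours from dump size and parallel jobs, assuming
# roughly 15 GB/hour per worker (midpoint of the 10-20 GB/hour range).
estimate_hours() {
  size_gb="$1"
  jobs="$2"
  rate=15
  # Integer ceiling division: (size + rate*jobs - 1) / (rate*jobs)
  echo $(( (size_gb + rate * jobs - 1) / (rate * jobs) ))
}

estimate_hours 60 4   # 60 GB dump, 4 workers -> 1
estimate_hours 60 1   # same dump, sequential  -> 4
```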

4. Create the target database

# On the target instance:
createdb --username=postgres --owner=myapp myapp_prod

Or from psql:

CREATE DATABASE myapp_prod OWNER myapp;
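If the owning role does not exist yet on the target, create it first. The role name and password here are placeholders; match whatever owns the objects on the source.

```sql
-- Placeholder role; match the owner used on the source.
CREATE ROLE myapp WITH LOGIN PASSWORD 'CHANGE_ME';
CREATE DATABASE myapp_prod OWNER myapp;
```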

5. Restore to the target

pg_restore \
  --dbname="$TARGET_DATABASE_URL" \
  --jobs=4 \
  --verbose \
  --no-owner \
  --no-acl \
  dump.pgdump

--jobs=4 runs restore in parallel across four workers. Set it to the number of vCPUs available on the target. Monitor restore progress with pg_stat_progress_copy on the target.
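A sketch of the monitoring query, assuming the target runs PostgreSQL 14 or newer (pg_stat_progress_copy first appeared in 14):

```sql
-- Run on the target while pg_restore is loading data.
SELECT relid::regclass AS table_name,
       command,
       tuples_processed
FROM pg_stat_progress_copy;
```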

6. Verify row counts

Row counts are the fastest sanity check. Note that n_live_tup is a statistics estimate; if the numbers look stale on the freshly restored target, run ANALYZE and re-check.

-- Run on source.
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
 
-- Run the same query on target and diff.

For a precise check, run SELECT COUNT(*) FROM <table> on the five largest tables on both source and target. Counts must match exactly.
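One way to script the estimate comparison (run ANALYZE on the target first if the statistics look stale):

```shell
# Export live-tuple estimates from both sides and diff them.
Q="SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables ORDER BY 1, 2;"
psql "$SOURCE_DATABASE_URL" -tAc "$Q" > /tmp/source_counts.txt
psql "$TARGET_DATABASE_URL" -tAc "$Q" > /tmp/target_counts.txt
diff /tmp/source_counts.txt /tmp/target_counts.txt && echo "estimates match"
```

Small differences in n_live_tup are expected between estimates; exact COUNT(*) on the largest tables remains the authoritative check.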

7. Execute the cutover

  1. Put the application in maintenance mode or stop writes.
  2. Capture any rows written since the initial dump. pg_dump has no incremental mode, so re-dump the affected tables, export the delta at the application level, or use logical replication (e.g. the pglogical extension) if it is already set up.
  3. Update DATABASE_URL in the application environment or secrets manager.
  4. Restart the application.
  5. Run a smoke test against the new database.
  6. Remove maintenance mode.

Keep the old database instance running for 24 hours after cutover as a rollback target.

Verify it worked

# 1. Row counts match between source and target (see step 6).
 
# 2. Application connects to the new host.
psql "$TARGET_DATABASE_URL" -c "SELECT current_database(), version();"
 
# 3. Indexes are present.
psql "$TARGET_DATABASE_URL" -c "\di"
 
# 4. Sequences are at the correct position (not reset to 1).
psql "$TARGET_DATABASE_URL" -c "SELECT last_value FROM <id_sequence>;"

Common errors

  • pg_restore: error: could not execute query: role "myapp" does not exist. The dump contains ownership clauses. Use --no-owner on the restore, then reassign manually.
  • Row count mismatch after restore. pg_dump takes a consistent snapshot when it starts, so rows written after that point are absent from the dump. Schedule the dump during a write freeze, or capture the post-dump delta at cutover (step 7).
  • pg_restore fails midway due to a foreign key violation. In a full restore, constraints are created after the data is loaded, so a violation usually means the source data itself is inconsistent. For --data-only restores, add --disable-triggers (requires superuser) to skip constraint checks during the load. Investigate any violations that remain after the restore.
  • Sequences reset to 1 after restore. Explicitly call setval('<seq>', (SELECT MAX(id) FROM <table>)) after restore.
  • Connection refused on the target. The target pg_hba.conf does not allow connections from the restore machine’s IP. See set-up-postgres-locally for the pg_hba.conf pattern.
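For the sequence problem above, the query below is one sketch that generates the setval() calls for every serial-style column (those with a nextval() default). Identity columns need a similar query keyed on is_identity. Review the generated statements before running them.

```sql
-- Emits one setval() statement per column whose default is nextval(...).
SELECT format(
         'SELECT setval(%L, COALESCE((SELECT MAX(%I) FROM %I.%I), 1));',
         pg_get_serial_sequence(format('%I.%I', table_schema, table_name),
                                column_name),
         column_name, table_schema, table_name)
FROM information_schema.columns
WHERE column_default LIKE 'nextval(%';
```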