

Incremental migration

Once you’ve done an initial full migration, you usually want to keep the target in sync with the source as new data arrives. The --incremental flag lets you pull only data newer than a cutoff.

Basic usage

npx dbdock migrate "$SRC" "$DST" --incremental --since "2026-04-01T00:00:00Z"
--since is the cutoff — DBdock ignores anything older than this timestamp.

How “changed” is determined

DBdock looks for timestamp fields on each source table/collection:
  • MongoDB: updated_at, createdAt, or the ObjectId embedded timestamp
  • PostgreSQL: updated_at, created_at, or any timestamptz column named *_at
If DBdock can’t find a suitable field, it falls back to insert-only mode (new rows only, updates are missed).
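The selection order above can be sketched in shell. This is illustrative only, not DBdock's actual implementation: it matches PostgreSQL-style names alone, while the real rule for Postgres also requires the column to be timestamptz.

```shell
# Sketch of the timestamp-field selection order described above
# (name matching only; type checks are out of scope here).
pick_cutoff_field() {
  cols=$1                      # space-separated column names
  for preferred in updated_at created_at; do
    for c in $cols; do
      if [ "$c" = "$preferred" ]; then
        echo "$preferred"; return 0
      fi
    done
  done
  for c in $cols; do           # any *_at column as a last resort
    case $c in *_at) echo "$c"; return 0 ;; esac
  done
  return 1                     # nothing found: insert-only fallback
}

pick_cutoff_field "id name created_at updated_at"   # -> updated_at
pick_cutoff_field "id name modified_at"             # -> modified_at
```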

1. Initial full migration

npx dbdock migrate "$SRC" "$DST" --export-config ./migration.json
Record the timestamp at which the migration finished; you’ll use it as the --since for the first incremental sync.

2. Store the last-run timestamp

date -u +%Y-%m-%dT%H:%M:%SZ > .dbdock-last-sync

3. Incremental syncs

LAST=$(cat .dbdock-last-sync)
npx dbdock migrate "$SRC" "$DST" --config ./migration.json --incremental --since "$LAST"
date -u +%Y-%m-%dT%H:%M:%SZ > .dbdock-last-sync
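These commands can be combined into one wrapper script. A sketch, assuming ./migration.json exists from the initial full run and .dbdock-last-sync has been seeded; the two details that matter are capturing the new cutoff before the migration starts and advancing it only on success:

```shell
#!/bin/sh
# Sketch of a sync wrapper (assumes ./migration.json from the initial
# full run and a seeded .dbdock-last-sync file).

utc_now() { date -u +%Y-%m-%dT%H:%M:%SZ; }

STAMP_FILE=.dbdock-last-sync

if [ -z "${SRC:-}" ] || [ -z "${DST:-}" ]; then
  echo "set SRC and DST before running" >&2
else
  LAST=$(cat "$STAMP_FILE")
  NOW=$(utc_now)   # capture BEFORE the run: rows written while the
                   # migration is in flight land in the next window
  if npx dbdock migrate "$SRC" "$DST" \
       --config ./migration.json --incremental --since "$LAST"; then
    echo "$NOW" > "$STAMP_FILE"   # advance the cutoff only on success
  else
    echo "sync failed; cutoff stays at $LAST" >&2
  fi
fi
```

A failed run leaves the cutoff untouched, so the next run retries the same window; re-pulling a window re-applies updates rather than duplicating rows, assuming stable primary keys (see the caveat below).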

4. Automate on a cron

Run the incremental sync hourly or daily depending on how fresh the target needs to be.
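For example, assuming the incremental commands are wrapped in a script (sync.sh and its path here are hypothetical):

```shell
# Hypothetical crontab entries; sync.sh wraps the incremental commands.
0 * * * *  /opt/dbdock/sync.sh >> /var/log/dbdock-sync.log 2>&1
# Or daily at 02:00 UTC, if hourly freshness isn't needed:
# 0 2 * * *  /opt/dbdock/sync.sh >> /var/log/dbdock-sync.log 2>&1
```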

Caveats

Deletes aren’t detected

Incremental migration pulls inserts and updates. Deletes in the source are not reflected in the target. If you need full mirroring including deletes:
  • Do periodic full re-migrations (e.g., weekly full + hourly incremental)
  • Or use a dedicated CDC tool (Debezium, etc.) if the use case is live replication
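A hybrid schedule like the first option might look like this in crontab (the script names are hypothetical wrappers around the full and incremental commands):

```shell
# Hourly incrementals for freshness, weekly full run to pick up deletes.
0 * * * *  /opt/dbdock/incremental-sync.sh >> /var/log/dbdock.log 2>&1
0 3 * * 0  /opt/dbdock/full-sync.sh        >> /var/log/dbdock.log 2>&1
```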

Clock drift

--since uses the source database’s timestamps. If source and target clocks drift, you may miss rows or duplicate some. Prefer timestamps from the source over wall-clock times from the runner.
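One way to follow that advice, assuming a PostgreSQL source reachable with psql (a sketch, not a DBdock feature):

```shell
# Take the next cutoff from the source database's own clock instead of
# the runner's wall clock (illustrative; requires psql access to $SRC).
LAST=$(psql "$SRC" -At -c \
  "SELECT to_char(now() AT TIME ZONE 'UTC', 'YYYY-MM-DD\"T\"HH24:MI:SS\"Z\"')")
echo "$LAST" > .dbdock-last-sync
```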

Primary keys must be stable

Incremental updates rely on matching rows between source and target. If IDs change or are regenerated on each sync, updates will behave as inserts and duplicates will appear.

Use cases

Ongoing MongoDB → Postgres

Gradual migration — apps still write to Mongo, analytics queries hit Postgres.

Read-replica style

Keep a Postgres reporting database fresh from the MongoDB primary.

Staging refresh

Daily pull of new prod data into a staging DB.

Backup-by-replication

Keep an out-of-region mirror updated regularly.

When NOT to use incremental

  • Schema changed in the source. Incremental mode assumes the schema is stable. For schema changes, do a full migration.
  • You need strong consistency. Incremental has a window where source ≠ target. For financial/compliance work, use transactional replication.
  • Write throughput is very high. Each incremental run is a batch pull; past a certain write rate, a run takes longer than the interval between runs and the target falls steadily behind. Use a dedicated CDC tool instead.

See also

dbdock migrate

Full migration command.

Dry runs

Validate before running.