--dry-run runs the full migration into a temporary schema (or collection prefix) so you can inspect the result before touching production. Nothing is written to your real target.
Basic usage
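A typical invocation passes source and target connection strings plus --dry-run. The exact flag names and connection-string formats below are assumptions for illustration, not confirmed DBdock syntax:

```shell
# Dry-run a MongoDB → PostgreSQL migration
# (--source/--target flag names and URLs are illustrative assumptions)
dbdock migrate \
  --source "mongodb://localhost:27017/shop" \
  --target "postgres://localhost:5432/shop" \
  --dry-run
```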
What changes in dry-run mode
MongoDB → PostgreSQL
A dry run creates a schema named dbdock_dryrun_<timestamp> and migrates into tables there:
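Assuming a timestamped schema name like dbdock_dryrun_20240101120000, you can list what the dry run produced with a standard catalog query (the psql invocation and database name are illustrative):

```shell
# List tables created under dry-run schemas
psql shop -c "SELECT table_schema, table_name
              FROM information_schema.tables
              WHERE table_schema LIKE 'dbdock_dryrun_%';"
```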
PostgreSQL → MongoDB
A dry run creates collections prefixed with dbdock_dryrun_<timestamp>_:
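One way to see the prefixed collections is a mongosh one-liner (the database name is illustrative):

```shell
# Show collections created by the dry run
mongosh shop --quiet --eval '
  db.getCollectionNames()
    .filter(function (n) { return n.indexOf("dbdock_dryrun_") === 0; })
    .forEach(function (n) { print(n); })
'
```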
What to check in a dry run
Row counts
Verify every document/row made it:
SELECT count(*) FROM dryrun.users; vs. the source count.
Type fidelity
Spot-check sample rows. Dates, nulls, and numbers are the most common sources of trouble.
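A quick way to eyeball type fidelity is to sample a handful of rows from a migrated table (the schema, table, and column names below are illustrative):

```shell
# Pull 5 random rows and inspect dates, nulls, and numeric columns by hand
psql shop -c "SELECT id, created_at, price, deleted_at
              FROM dbdock_dryrun_20240101120000.orders
              ORDER BY random()
              LIMIT 5;"
```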
Error table
Look at _migration_errors — any skipped rows?
Query performance
Run your most important queries against the dry-run schema to confirm the indexes are right.
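The error-table and query-performance checks above might look like this in practice (the schema and table names are illustrative):

```shell
# Any skipped rows?
psql shop -c "SELECT * FROM dbdock_dryrun_20240101120000._migration_errors;"

# Profile a key query against the dry-run tables to confirm index usage
psql shop -c "EXPLAIN ANALYZE
              SELECT * FROM dbdock_dryrun_20240101120000.users
              WHERE email = 'alice@example.com';"
```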
Size of the dry run
Dry runs migrate the full dataset by default, which is ideal for validation but slow on huge databases. To speed things up, use --batch-size and run against a reduced source if possible.
DBdock doesn’t have a --sample flag yet — open an issue on GitHub if you need one.
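Combined with a trimmed-down source, a larger batch size can make dry runs tolerable on big datasets. The batch value and flag layout below are illustrative, not confirmed syntax:

```shell
# Fewer, larger batches cut round-trips on large tables
dbdock migrate \
  --source "postgres://staging-host:5432/shop" \
  --target "mongodb://localhost:27017/shop" \
  --dry-run \
  --batch-size 5000
```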
Production checklist after a successful dry run
Before running the real migration:
- ✅ Row counts match
- ✅ Indexes created match expectations
- ✅ Sample queries return correct results
- ✅ Error table is empty (or errors are acceptable)
- ✅ Target database has enough disk space
- ✅ Team is informed (migrations can lock tables briefly)
- ✅ You have a rollback plan (usually: drop the target schema)
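For the last item, the rollback is usually a single drop. These statements assume illustrative schema and database names:

```shell
# PostgreSQL target: drop the dry-run schema and everything in it
psql shop -c "DROP SCHEMA dbdock_dryrun_20240101120000 CASCADE;"

# MongoDB target: drop the prefixed collections
mongosh shop --eval '
  db.getCollectionNames()
    .filter(function (n) { return n.indexOf("dbdock_dryrun_") === 0; })
    .forEach(function (n) { db.getCollection(n).drop(); })
'
```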
Cleaning up dry-run artifacts
DBdock doesn’t auto-drop dry-run schemas so you can inspect them. Drop them manually once done to reclaim space.
See also
dbdock migrate
Run the real migration.
Incremental
Pull only new/changed rows.

