# Audit + history
Four bookkeeping tables in your live database. Every migration run, every step, every snapshot is queryable like data.
## The four tables
| Table | Rows hold |
|---|---|
| `_zanith_migrations` | Applied migration ids, with `appliedAt`, `appliedBy`, total elapsed time, and outcome (`ok` / `failed` / `partial`). |
| `_zanith_migration_steps` | One row per op in each migration: kind, risk level, started-at, elapsed-ms, outcome, and error message if any. |
| `_zanith_schema_snapshots` | Serialized schema graph at each migration boundary. `phase` is `before` / `after` / `baseline`. |
| `_zanith_artifacts` | Soft-drops, archives, backfill checkpoints — see [Recovery](/docs/migrate/recover). |
All four are normal Postgres tables. Query them, alert on them, ship them to your warehouse. The schema is stable across engine versions.
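Because these are plain tables, a monitoring job can read them with any Postgres client and alert on what it finds. A minimal sketch of such a check — the row shape mirrors the `_zanith_migrations` columns above, but the interface and function names here are illustrative, not part of zanith's API:

```typescript
// Row shape mirroring the _zanith_migrations columns described above
// (an assumption for illustration; fetch these rows with your own client).
interface MigrationRow {
  id: string;
  appliedAt: Date;
  elapsedMs: number;
  outcome: 'ok' | 'failed' | 'partial';
}

// Return ids worth alerting on: anything that did not finish cleanly,
// or that ran longer than the given time budget.
function findAlertable(rows: MigrationRow[], maxElapsedMs = 30_000): string[] {
  return rows
    .filter((r) => r.outcome !== 'ok' || r.elapsedMs > maxElapsedMs)
    .map((r) => r.id);
}

const rows: MigrationRow[] = [
  { id: '20260501_a', appliedAt: new Date(), elapsedMs: 1_200, outcome: 'ok' },
  { id: '20260502_b', appliedAt: new Date(), elapsedMs: 45_000, outcome: 'ok' },
  { id: '20260503_c', appliedAt: new Date(), elapsedMs: 800, outcome: 'failed' },
];

console.log(findAlertable(rows)); // → ['20260502_b', '20260503_c']
```

Wire the result into whatever pager or dashboard you already use; the point is only that the audit tables need no special tooling to consume.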
## The history CLI
```shell
zanith migrate history              # most-recent-first list of applied migrations
zanith migrate history --verbose    # include per-step outcomes + risk
zanith migrate history --since 7d   # last week
zanith migrate history --json       # machine-readable
zanith migrate history --id 20260504_120000_add_orders_index
# → full detail for one migration: every step, snapshot diff, artifacts created
```
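The `--json` flag makes history easy to post-process in scripts. The exact output shape is not documented here, so this sketch assumes an array of objects carrying `id`, `outcome`, and `elapsedMs` fields purely for illustration:

```typescript
// Assumed shape of one entry in the --json output (not a documented contract).
interface HistoryEntry {
  id: string;
  outcome: 'ok' | 'failed' | 'partial';
  elapsedMs: number;
}

// Summarize a history dump, e.g. captured with:
//   zanith migrate history --json > history.json
function summarize(json: string): { total: number; failed: number } {
  const entries: HistoryEntry[] = JSON.parse(json);
  return {
    total: entries.length,
    failed: entries.filter((e) => e.outcome === 'failed').length,
  };
}

const sample = JSON.stringify([
  { id: '20260504_x', outcome: 'ok', elapsedMs: 900 },
  { id: '20260505_y', outcome: 'failed', elapsedMs: 120 },
]);

console.log(summarize(sample)); // → { total: 2, failed: 1 }
```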
## Snapshot replay

Each successful migration writes a serialized graph snapshot. That gives you time-travel for the schema — load a past snapshot, diff it against current, see what changed.
```ts
import {
  findSnapshot,
  deserializeSchemaGraph,
  diffSchemas,
} from 'zanith';

// Schema graph as it was after a specific migration.
const row = await findSnapshot(adapter, '20260420_120000_drop_orders_total');
const past = deserializeSchemaGraph(row!.serialized);

// Diff against the current declared graph.
const drift = diffSchemas(past, currentGraph);
console.log(`${drift.length} structural changes since that migration`);
```
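To make "structural changes" concrete, here is a toy diff over bare table-name sets. This is not zanith's `diffSchemas` — the real schema graph also covers columns, indexes, and constraints — just an illustration of the idea at its coarsest level:

```typescript
// Toy schema: only a set of table names. zanith's real graph is richer;
// this merely illustrates what a structural diff computes.
type ToySchema = Set<string>;

interface ToyChange {
  kind: 'added' | 'removed';
  table: string;
}

function toyDiff(past: ToySchema, current: ToySchema): ToyChange[] {
  const changes: ToyChange[] = [];
  for (const t of current) if (!past.has(t)) changes.push({ kind: 'added', table: t });
  for (const t of past) if (!current.has(t)) changes.push({ kind: 'removed', table: t });
  return changes;
}

const pastTables = new Set(['users', 'orders', 'orders_total']);
const currentTables = new Set(['users', 'orders', 'invoices']);

console.log(toyDiff(pastTables, currentTables));
// → [ { kind: 'added', table: 'invoices' }, { kind: 'removed', table: 'orders_total' } ]
```

A non-empty diff against an old snapshot is exactly the signal a CI drift gate would fail on.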
## Querying like data

```sql
-- Migrations slower than 30 seconds, in the last week.
SELECT id, applied_at, elapsed_ms / 1000.0 AS seconds
FROM _zanith_migrations
WHERE applied_at > now() - interval '7 days'
  AND elapsed_ms > 30000
ORDER BY elapsed_ms DESC;

-- Failed steps with their error messages.
SELECT migration_id, step_index, op_kind, error
FROM _zanith_migration_steps
WHERE outcome = 'failed'
ORDER BY started_at DESC
LIMIT 50;

-- Total destructive ops applied this month.
SELECT count(*)
FROM _zanith_migration_steps
WHERE risk_level >= 4
  AND started_at >= date_trunc('month', now());
```
## Programmatic reads

```ts
import { listAppliedMigrations, listMigrationSteps } from 'zanith';

const all = await listAppliedMigrations(adapter);
const steps = await listMigrationSteps(adapter, all[0].id);
```

Same shapes as the SQL above, plus type safety. See programmatic API for the full surface.
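The destructive-ops count from the SQL section can be reproduced over step rows fetched programmatically. A sketch, assuming field names that mirror the `_zanith_migration_steps` columns (the exact TypeScript row shape is an assumption here):

```typescript
// Assumed step-row shape mirroring the _zanith_migration_steps columns.
interface StepRow {
  migrationId: string;
  opKind: string;
  riskLevel: number;
  startedAt: Date;
}

// Count destructive steps (risk level 4+) applied since a cutoff:
// the programmatic twin of the "destructive ops this month" SQL query.
function countDestructiveSince(steps: StepRow[], cutoff: Date): number {
  return steps.filter((s) => s.riskLevel >= 4 && s.startedAt >= cutoff).length;
}

const steps: StepRow[] = [
  { migrationId: 'm1', opKind: 'drop_column', riskLevel: 5, startedAt: new Date('2026-05-02') },
  { migrationId: 'm1', opKind: 'create_index', riskLevel: 1, startedAt: new Date('2026-05-02') },
  { migrationId: 'm2', opKind: 'drop_table', riskLevel: 5, startedAt: new Date('2026-04-10') },
];

console.log(countDestructiveSince(steps, new Date('2026-05-01'))); // → 1
```

Doing the filter in application code rather than SQL trades a larger result set for the ability to reuse the same typed rows in reports, gates, and alerts.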