zanith

Migrate · 01 — The system

Migrations the way the rest of the ecosystem isn't shipping them yet.

Risk-classified into six levels — safe, low, medium, high, destructive, blocked — and gated against a numeric budget you set in CI. Verified on a shadow database before up ever touches production. Every destructive op produces a restorable artifact. Every step gets audited. The four other tools we checked do at most two of those.

schema diff → risk score → shadow DB → audit per step → if destructive, restorable until cleanup

graph (the runtime AST) → diff (structural delta) → plan (risk-scored ops) → verify (shadow-DB apply) → apply (audited per step) → recover (restore artifact)
coverage by tool · graph / diff / plan / verify / apply / recover
Sequelize CLI · Drizzle Kit · Prisma Migrate · Atlas (Ariga) · Zanith

Atlas comes closest. It still doesn't score risk per op or ship a recovery layer for destructive ops. Soft-drop + archive recovery is, as far as we can tell, unique to Zanith.

5 stages · 1 recovery branch · shadow-verified up · per-step audit · 6 risk levels · down by N steps · shipped · v0.2

02 — What the others ship

The dimensions that matter at 3am.

Comparing the four migration tools a production team is most likely to evaluate, against the questions you ask the moment a deploy goes sideways. Names and versions checked April 2026.

Capability                                                 Prisma Migrate  Drizzle Kit  Atlas (Ariga)  Sequelize CLI  Zanith
---------------------------------------------------------  --------------  -----------  -------------  -------------  ------
Drift detection (spot live-DB drift before plan)           partial         no           yes            no             yes
Auto-fill from drift (generate ops without writing SQL)    yes             yes          yes            no             yes
Risk score per op (numeric, gateable in CI)                no              no           no             no             yes
Shadow-DB verify (apply on a parallel DB before prod)      yes             no           yes            no             yes
Soft-drop / archive recovery (destructive ops restorable)  no              no           no             no             yes
Per-step audit row (every op tracked in DB tables)         partial         partial      yes            partial        yes
Bundled web UI (browse rows + apply migrations)            partial         no           no             no             yes
Down-migrations by N (step back without manual SQL)        yes             partial      yes            yes            yes

Partial means the capability exists in some form but lacks the gate or integration. Atlas comes closest — it has shadow-DB and structured audit, but it doesn't score risk per op or ship a recovery layer for destructive ops.

03 — The lifecycle

Four commands. One straight line.

Every migration goes through the same four stages, in the same order, every time. The output below is what you actually see — not a stylized diagram.

01
generate

Scaffold from drift

Auto-fill ops by diffing the declared schema against the live database. No hand-written DDL.

SHELL · zanith migrate generate
$ zanith migrate generate add_last_login --from-diff
wrote migrations/20260425_113022_add_last_login.ts
4 ops detected · worst level: destructive (drop_column)
writes: migrations/20260425_113022_add_last_login.ts
02
plan

Risk-summarized review

Print the plan as a table. Risk per op, destructive flag, and which step blocks apply if any.

SHELL · zanith migrate plan
$ zanith migrate plan
 
Migration plan · 1 pending migration:
 
Worst risk: destructive
Total risk score: 135
By level: safe=2, medium=1, destructive=1
 
step  op                                 level         score
----  ---------------------------------  ------------  -----
01    addColumn (users.last_login)       safe              5
02    addIndex (users.last_login)        low              15
03    dropColumn (users.legacy_uuid)     destructive      80
04    backfill (users.last_login)        medium           35
 
blocked: 1 destructive op. pass --allow-destructive to apply.
writes: stdout (CI-greppable)
03
verify

Apply on a shadow DB first

Spin up a parallel database, run the migration end-to-end, diff the result against the declared schema. Up refuses without this gate.

SHELL · zanith migrate verify
$ zanith migrate verify
shadow database created (zanith_shadow_29afc3)
applied 4 ops in 312ms
schema matches declaration · 0 drift
shadow database dropped
 
next: zanith migrate up --shadow-verified --allow-destructive
writes: shadow-verified token (15 min)
04
up

Apply, audited per step

Walk the plan against production. Every op gets a row in `_zanith_migration_steps` with status, risk, error, and snapshot pointer.

SHELL · zanith migrate up
$ zanith migrate up --shadow-verified --allow-destructive
20260425_113022_add_last_login
step 01 · addColumn · 84ms · safe (5)
step 02 · addIndex · 612ms · low (15)
step 03 · dropColumn (soft-drop) · 41ms · destructive (80)
artifact: soft_drop_column users.legacy_uuid
step 04 · backfill · 178ms · medium (35)
1 migration applied · 4 steps · 915ms total
writes: _zanith_migrations + _zanith_migration_steps

04 — Risk model

Six named levels. One numeric score.

The planner classifies every op into one of six levels — safe through blocked — each mapped to a number on a 0–100 scale. The CLI prints both, CI greps the score, and --max-risk N refuses to apply anything above your budget.
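The summary numbers the CLI prints fall out of simple arithmetic over the per-op scores. A minimal shell sketch, using the four op scores from the example plan (the real planner is not shown here):

```shell
# Worst level and total score for the example plan's four ops:
# addColumn=5, addIndex=15, dropColumn=80, backfill=35.
total=0
worst=0
for score in 5 15 80 35; do
  total=$((total + score))
  if [ "$score" -gt "$worst" ]; then worst=$score; fi
done
echo "worst=$worst total=$total"   # worst=80 total=135
```

Note that the --max-risk gate is per-op, not on the total: with a budget of 35, the single destructive op at 80 is enough to refuse.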

risk meter · 0 → 100: safe 5 · low 15 · medium 35 · high 60 · destructive 80 · blocked 95
safe (5): CREATE TABLE (new) · ADD COLUMN nullable · ADD COLUMN with default · CREATE EXTENSION

low (15): CREATE INDEX CONCURRENTLY · ADD column generated

medium (35): ADD UNIQUE constraint · ADD FOREIGN KEY · RLS enable / disable · RENAME table or column

high (60): ADD COLUMN NOT NULL without default · ALTER COLUMN type-change · BACKFILL with batched UPDATE · required NOT NULL on existing column

destructive (80): DROP COLUMN with data · DROP TABLE with rows · DROP UNIQUE on referenced column · type narrowing

blocked (95): raw SQL with side effects · table rebuilds without --allow-destructive

The flags that gate apply

Defaults are conservative — production-hostile operations require an explicit flag. No implicit consent.

  • --max-risk <n>: Abort if any op scores > n. The primary CI gate. CI sets a budget; the planner refuses to cross it.
  • --allow-destructive: Required for any op at level destructive (80) or blocked (95). No flag, no apply.
  • --shadow-verified: Confirms `migrate verify` was run. Up refuses without it (skip with --skip-shadow-check).
  • --dry-run: Print the plan as SQL with risk and audit columns. Touches nothing in production.

CI gate, in practice

The CLI exits non-zero when the plan exceeds the risk budget. Your pipeline doesn't need to grep — it inherits the exit code.

SHELL · ci · plan with --max-risk
# CI: migration must score ≤ 35 (medium)
$ zanith migrate plan --max-risk 35
 
Migration plan · 1 pending migration:
 
Worst risk: destructive
Total risk score: 135
By level: safe=2, medium=1, destructive=1
 
step  op                                 level         score
----  ---------------------------------  ------------  -----
01    addColumn (users.last_login)       safe              5
02    addIndex (users.last_login)        low              15
03    dropColumn (users.legacy_uuid)     destructive      80
04    backfill (users.last_login)        medium           35
 
blocked: op 03 exceeds --max-risk 35 (destructive=80)
exit 1
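Because the gate is just an exit code, it composes with any runner. A sketch with a hypothetical stand-in function in place of the real `zanith migrate plan --max-risk 35` call, simulating the over-budget plan above:

```shell
# Stand-in for the plan command: prints the verdict and returns non-zero,
# the way the real gate exits 1 when an op exceeds the budget.
plan_gate() {
  echo "blocked: op 03 exceeds --max-risk 35 (destructive=80)"
  return 1
}

if plan_gate; then
  verdict="deploy"
else
  verdict="halt"    # the pipeline stops here; no output parsing needed
fi
echo "$verdict"     # halt
```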

05 — Recovery

A drop is not a delete. Yet.

Every destructive operation produces an artifact. The column or table doesn't vanish — it gets soft-dropped or archived, and stays restorable until you run cleanup. None of Prisma, Drizzle, Atlas, or Sequelize ship this.

soft-drop column · state transition · users.legacy_uuid
t0 · before drop
users
  • id
  • email
  • legacy_uuid
t1 · soft-dropped (artifact created)
users
  • id
  • email
  • _zanith_dropped_legacy_uuid_a4c (renamed)
t2 · restore-column
users
  • id
  • email
  • legacy_uuid
drop: destructive (80), artifact id a4c · restore: 1 command, 0 data loss · purge: after cleanup --older-than 30d
soft_drop_column · rename only

Soft-drop column

When: Default for DROP COLUMN at level destructive (80). Cheap, instant.

How: Rename in place: users.legacy_uuid → users._zanith_dropped_legacy_uuid_<id>

zanith recover restore-column users.legacy_uuid
archive_column · copy + drop

Archive column

When: Used when the column has constraints that block a rename, or you opt-in.

How: Copy column values into _zanith_archive.<table>__<col> with a PK column for restore-by-join.

zanith recover restore-column orders.discount_code --pk id
soft_drop_table · rename only

Soft-drop table

When: Default for DROP TABLE. Renames the whole relation.

How: ALTER TABLE audit_log_2024 RENAME TO _zanith_dropped_audit_log_2024_<id>

zanith recover restore-table audit_log_2024
archive_table · schema move

Archive table

When: Cross-schema move. Frees the original name immediately.

How: ALTER TABLE sessions SET SCHEMA _zanith_archive

zanith recover restore-table sessions
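The rename convention visible in the transcripts is purely mechanical. A throwaway shell sketch of the mapping in both directions (`a4c` stands in for the artifact id; this is not the real implementation):

```shell
col="legacy_uuid"
artifact="a4c"                                # artifact id, as shown by `recover list`
dropped="_zanith_dropped_${col}_${artifact}"  # name after soft-drop
restored="${dropped#_zanith_dropped_}"        # strip the prefix...
restored="${restored%_$artifact}"             # ...then the artifact suffix
echo "$dropped -> $restored"   # _zanith_dropped_legacy_uuid_a4c -> legacy_uuid
```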

The recovery CLI

Five commands. No new mental model. The same zanith binary that applied the migration knows how to walk it back.

SHELL · zanith recover
# Inspect what's recoverable
$ zanith recover list
table            column         kind              age   size
---------------  -------------  ----------------  ----  -------
users            legacy_uuid    soft_drop_column  2h
orders           discount_code  archive_column    14h   8.4 MB
audit_log_2024                  soft_drop_table   3d
 
# Look at one before you act
$ zanith recover inspect users.legacy_uuid
 
# Pull archive rows out before purge
$ zanith recover export orders.discount_code --format jsonl --out backup.jsonl
 
# Restore (one command)
$ zanith recover restore-column users.legacy_uuid
renamed _zanith_dropped_legacy_uuid_a4c → legacy_uuid
users.legacy_uuid restored
 
# Sweep old artifacts when you're certain
$ zanith cleanup --older-than 30d
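The `--older-than 30d` cutoff is a plain age comparison. A sketch of the semantics only (not the actual cleanup code), with a hypothetical artifact created 31 days ago:

```shell
now=$(date +%s)
created=$((now - 31*24*3600))   # hypothetical artifact, 31 days old
cutoff=$((30*24*3600))          # --older-than 30d, in seconds
if [ $((now - created)) -gt "$cutoff" ]; then
  verdict="purge"
else
  verdict="keep"
fi
echo "$verdict"   # purge
```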