Why Zanith · 01 — The thesis
Every major ORM today generates code from your schema. Zanith doesn't. Your schema is parsed once at runtime into a graph — queries, types, and validation flow from that graph directly. No generate. No rebuild.
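To make "parsed once into a graph" concrete, here is a minimal sketch of the idea, not Zanith's actual parser: a tiny schema DSL is read at startup into plain in-memory nodes that the rest of the engine can walk. All names here (`FieldNode`, `ModelNode`, `parseSchema`) are illustrative.

```typescript
// Sketch only: a toy parser for a `model X { ... }` DSL.
// Zanith's real parser is richer; this shows the shape of "schema → graph".

interface FieldNode {
  name: string;
  type: string;
  attributes: string[]; // e.g. ["@id", "@unique"]
}

interface ModelNode {
  name: string;
  fields: FieldNode[];
}

// Parse every `model Name { ... }` block into a graph node, once, at startup.
function parseSchema(source: string): Map<string, ModelNode> {
  const graph = new Map<string, ModelNode>();
  const modelRe = /model\s+(\w+)\s*\{([^}]*)\}/g;
  for (const [, name, body] of source.matchAll(modelRe)) {
    const fields = body
      .split('\n')
      .map((line) => line.trim())
      .filter(Boolean)
      .map((line) => {
        const [fieldName, type, ...attributes] = line.split(/\s+/);
        return { name: fieldName, type, attributes };
      });
    graph.set(name, { name, fields });
  }
  return graph;
}

const graph = parseSchema(`
model User {
  id Int @id
  email String @unique
}
`);
```

Nothing is written to disk: the `Map` is the artifact, and it lives exactly as long as the process does.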
Today, two adapters ship — pg and postgres.js. MySQL and SQLite are on the roadmap. The architecture is database-agnostic; the adapter set is what scales.
Models
1,000
In flight
47
Today
1.2M
Runtime pipeline
Runtime graph
schema.zanith
1,000 models · loaded 14 Mar 2026
Source-anchored model · User
model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  createdAt DateTime @default(now())
}

Recent compiled queries
02 — The cost
The Day-1 experience is great. Pick an ORM, sketch a few models, run generate, ship. The trouble is what compounds — quietly, predictably, along four axes.
Legend
Curves are illustrative — they encode the *shape* of the cost, not sourced numbers. Specific competitor benchmarks appear once we have published comparisons to cite.
Generated client size
01 · grows linearly with the schema. Each model adds a class, methods, relation accessors, and types.
TypeScript compilation
02 · grows with the generated client. Above a few hundred models the compiler starts to feel it; above a thousand it crashes for some teams.
Hot-reload latency
03 · grows with watch-time regeneration. The faster you iterate on the schema, the more the regeneration step blocks you.
Deploy pipeline
04 · blocks on the generation step. Every deploy waits for a fresh client to be produced, regardless of whether the schema actually changed.
The cost compounds. The deeper you go, the more the codegen architecture becomes the application's biggest liability.
03 — The pattern
Prisma is well-engineered. Drizzle is well-engineered. TypeORM has shipped for years. The issue isn't the quality of any one tool — it's that they all pick the same architectural pattern, and that pattern has a fixed cost.
source of truth
minutes at scale
wait · committed
at startup
Five stations. Step 5 is where everything compounds — every schema change re-runs the whole sequence.
source of truth
22.9ms
0.73ms / lookup
2.4µs / query
Four stations. None of them is a build step. The schema change is the deploy.
Every line of the cost in the previous chapter follows from the codegen pattern. The generated client grows because the pattern requires it. The compiler slows because the pattern produces more for it to compile. The deploy pipeline blocks because the pattern has a build step that must run before runtime can start.
You cannot fix this by writing the codegen better. The cost is in the shape, not the implementation.
04 — The lock · why the patches don't fix it
Codegen ORMs have tried, and the patches are well-known. They are sensible engineering. None of them removes the architectural cost; each one moves it somewhere else.
regenerate only the changed models
saves time per change but doesn't shrink the generated client; the runtime cost stays linear in schema size.
skip generation locally; run only at deploy
defers the cost rather than removing it; CI now blocks where development used to.
cache the generated artifacts across runs
helps single-developer iteration; doesn't help the team or the deploy pipeline.
break a 1000-model schema into ten 100-model schemas
each schema is faster, but now you maintain ten generation pipelines and lose cross-schema type safety.
The only fix is to remove the generation step entirely. Anything else is a tax that gets paid in a different currency.
05 — The shift
Zanith reads the schema at startup, parses it once into a graph in memory, and uses that graph directly. Queries compile to parameterized SQL on the way out. Types come from TypeScript inference over the schema, not from a generated .d.ts file.
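The claim that "types come from inference, not a generated .d.ts" can be sketched with ordinary TypeScript machinery. This is an illustrative reduction, not Zanith's published type layer: a schema expressed as a literal value, and a mapped type that derives row types from it with zero codegen.

```typescript
// Sketch: deriving model types from a schema value via inference.
// The schema literal and the ScalarMap/Row names are illustrative.

const schema = {
  user: {
    id: 'Int',
    email: 'String',
    createdAt: 'DateTime',
  },
} as const;

// Map DSL scalar names to TypeScript types.
type ScalarMap = {
  Int: number;
  String: string;
  DateTime: Date;
};

// A row type for any model, computed from the schema literal itself.
type Row<M extends Record<string, keyof ScalarMap>> = {
  [K in keyof M]: ScalarMap[M[K]];
};

type User = Row<(typeof schema)['user']>;
// User is { id: number; email: string; createdAt: Date } by inference alone.

const u: User = { id: 1, email: 'a@example.com', createdAt: new Date() };
```

If the schema value changes, the derived types change in the same compile pass: there is no generated file to fall out of sync.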
Four layers, joined on one runtime substrate. None of them runs at build time.
PARSER
layer 01
reads .zanith files at app start
GRAPH
layer 02
the runtime structure, in memory
COMPILER
layer 03
AST → parameterized SQL on each query
ADAPTER
layer 04
pluggable wire driver
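The adapter seam described above can be sketched as a narrow interface: the compiler hands over parameterized SQL, and an adapter's only job is to put it on the wire. The interface and class names here are assumptions for illustration, not Zanith's published API; a recording fake stands in for a real `pg` or `postgres.js` driver.

```typescript
// Sketch of the adapter seam. Names are illustrative.

interface CompiledQuery {
  sql: string;       // e.g. "SELECT * FROM users WHERE email ILIKE $1"
  params: unknown[]; // positional parameters, never interpolated into sql
}

interface Adapter {
  query(q: CompiledQuery): Promise<unknown[]>;
}

// A fake in-memory adapter, handy for tests: it records what it was asked
// to run instead of talking to a database.
class RecordingAdapter implements Adapter {
  public log: CompiledQuery[] = [];
  async query(q: CompiledQuery): Promise<unknown[]> {
    this.log.push(q);
    return [];
  }
}

const adapter = new RecordingAdapter();
void adapter.query({ sql: 'SELECT 1', params: [] });
```

Because the engine only ever emits `CompiledQuery` values, adding a MySQL or SQLite adapter means implementing one method, not regenerating anything.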
What it looks like in code
A typed call on the model API, compiled to parameterized SQL. Inspectable, predictable, no magic on the wire.
const users = await db.user.findMany({
  where: { email: { contains: '@example.com' } },
  orderBy: { createdAt: 'desc' },
  take: 10,
});

SELECT id, email, name, created_at
FROM users
WHERE email ILIKE $1
ORDER BY created_at DESC
LIMIT 10;

On mobile the two blocks stack vertically; the → between them is the compiler.
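A reduced sketch of the compile step itself, with hypothetical names (`compileWhere`, the `Where` shape): a tiny where-clause compiler that turns `{ contains: ... }` into an `ILIKE` with a positional parameter. Zanith compiles a full AST; the point here is only that user data goes into the params array, never into the SQL string.

```typescript
// Sketch: compiling a where clause to parameterized SQL. Illustrative only.

type Where = Record<string, { contains?: string; equals?: unknown }>;

function compileWhere(where: Where): { sql: string; params: unknown[] } {
  const clauses: string[] = [];
  const params: unknown[] = [];
  for (const [column, cond] of Object.entries(where)) {
    if (cond.contains !== undefined) {
      params.push(`%${cond.contains}%`);          // value goes to params...
      clauses.push(`${column} ILIKE $${params.length}`); // ...placeholder to sql
    } else if (cond.equals !== undefined) {
      params.push(cond.equals);
      clauses.push(`${column} = $${params.length}`);
    }
  }
  return { sql: clauses.join(' AND '), params };
}

const q = compileWhere({ email: { contains: '@example.com' } });
// q.sql    → "email ILIKE $1"
// q.params → ["%@example.com%"]
```

Inspectable and predictable: the emitted SQL is a plain string you can log, and every value travels as a driver-level parameter.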
The schema-change scenario
Edit schema.zanith
model User {
  id    Int     @id @default(autoincrement())
  email String  @unique
+ phone String? @unique
  name  String?
}

App reparse
New field live
No generate. No regenerated client to commit. No watch process to fight. Twenty-five milliseconds, end-to-end.
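The reparse step in that scenario can be sketched as a content-compare-and-swap, with illustrative names throughout (`parseModelNames` stands in for the real parser; a file watcher such as `fs.watch` would call `onSchemaChange` with the new file contents):

```typescript
// Sketch: on a schema change, re-parse and atomically swap the in-memory
// graph. A stand-in parser collects model names only.

function parseModelNames(source: string): Set<string> {
  return new Set([...source.matchAll(/model\s+(\w+)/g)].map((m) => m[1]));
}

let currentGraph = parseModelNames('model User { }');
let currentSource = 'model User { }';

// Invoked by a file watcher when schema.zanith changes on disk.
function onSchemaChange(newSource: string): void {
  if (newSource === currentSource) return; // no-op if content is identical
  currentGraph = parseModelNames(newSource); // atomic swap: queries already
  currentSource = newSource;                 // in flight keep the old graph
}

// Adding a model is one reparse and one reference swap, nothing else.
onSchemaChange('model User { }\nmodel Post { }');
```

There is no artifact to regenerate or commit, so the swap is the entire deployment of the change inside a running process.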
Memory · 1000 models
The graph holds models, fields, enums, relations, indexes, and uniqueness constraints — all of it, in memory, for the lifetime of the process. A 500-model generated Prisma client commonly runs to tens of megabytes on disk before it's loaded; the runtime graph is an order of magnitude smaller.
Each cell ≈ 10KB · 340 of 1000 cells filled
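The footprint claim follows from the page's own illustrative figures (≈10KB per model cell, 340 of 1,000 cells filled). A back-of-envelope check, using those constants rather than measured values:

```typescript
// Back-of-envelope: runtime graph footprint from the figures above.
// Constants are the page's illustrative numbers, not measurements.

const bytesPerModel = 10 * 1024; // ≈10KB per model's slice of the graph
const modelsLoaded = 340;        // cells currently filled

const graphBytes = bytesPerModel * modelsLoaded;
const graphMB = graphBytes / (1024 * 1024);
// ≈ 3.3 MB resident, against the tens of megabytes a comparable
// generated client occupies on disk before it is even loaded.
```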
The schema change is the deploy.
$ pnpm test
vitest run
✓ test/compiler/select.test.ts (8)
✓ test/compiler/insert.test.ts (5)
✓ test/expression/expr.test.ts (12)
✓ test/integration/pipeline.test.ts (6)
✓ test/edge-cases/negative.test.ts (22)
✓ test/types/end-to-end.test.ts (13)
✓ test/benchmark/scale.test.ts (6) 427ms
⚠ test/benchmark/execution.test.ts (1) 466ms
❯ pipeline-stages: ratio assertion (flaky)

Test Files  19 passed (19)
Tests       191 passed | 1 failed (192)
Duration    1.03s

[disclosure] the one failure is a benchmark ratio asserting against a near-zero baseline → flaky run-to-run, not a real regression
benchmark file
schema scale · scale.test.ts
benchmark file
per-op cost · execution.test.ts
benchmark file
integration · pipeline.test.ts
The full breakdown — every figure, with comparison anchors, source files, and honest caveats — lives on the proof page.
Open the proof page

Each one shows you what's behind it before you commit to walking through. No newsletter. No waitlist. No demo call. Real destinations, in the order someone usually wants them.
From the schema DSL to the typed query API, end-to-end. Reference, not narrative.
import { createZanith } from 'zanith';
import { PgAdapter } from 'zanith/adapters/pg';

const db = await createZanith({
  schema: './schema.zanith',
  adapter: new PgAdapter({ connectionString }),
});

The engine is small enough to read in an afternoon. Parser, graph, compiler, adapter — that is everything.
The full benchmark suite, with comparison anchors and honest caveats.
Zanith is at v0.1.0. The engine is real, the tests pass, the SQL is parameterized. Several pieces a production team would expect aren't in yet — those are listed plainly on the roadmap.
end · 08 / 08