zanith

Why Zanith · 01 — The thesis

Your schema is the runtime.

Every major ORM today generates code from your schema. Zanith doesn't. Your schema is parsed once at runtime into a graph — queries, types, and validation flow from that graph directly. No generate. No rebuild.

Today, two adapters ship — pg and postgres.js. MySQL and SQLite are on the roadmap. The architecture is database-agnostic; the adapter set is what scales.

Engine pulse · idle
Models 1,000 · in flight 47 · today 1.2M

Runtime pipeline

parser → graph → compiler → adapter
22.9ms · 0.73ms · 2.4µs · <5µs · v0.1 ready

Runtime graph

schema.zanith · 1,000 models · loaded 14 Mar 2026 · active

Source-anchored model · User

schema.zanith

model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  createdAt DateTime @default(now())
}

Recent compiled queries

  • now · SELECT id, email FROM users WHERE … · 1.2µs
  • 1s · SELECT … FROM posts JOIN users … · 17.2µs
  • 2s · INSERT INTO sessions VALUES … · 2.8µs
  • 3s · UPDATE users SET role = $1 WHERE … · 2.1µs
Adapters · 2 shipped · 2 planned

  • pg · shipped
  • postgres.js · shipped
  • mysql · planned
  • sqlite · planned

02 — The cost

Every codegen ORM is fast at zero models. None of them is fast at a thousand.

The Day-1 experience is great. Pick an ORM, sketch a few models, run generate, ship. The trouble is what compounds — quietly, predictably, along four axes.

Cost vs. schema size · relative · log-ish

Legend

  • generated client size
  • TypeScript compile
  • hot-reload latency
  • deploy block
  • zanith

Curves are illustrative — they encode the *shape* of the cost, not sourced numbers. Specific competitor benchmarks appear once we have published comparisons to cite.

  • 01 · Generated client size: grows linearly with schema. Each model adds a class, methods, relation accessors, and types.

  • 02 · TypeScript compilation: grows with the generated client. Above a few hundred models the compiler starts to feel it; above a thousand it crashes for some teams.

  • 03 · Hot-reload latency: grows with watch-time regeneration. The faster you iterate on the schema, the more the regeneration step blocks you.

  • 04 · Deploy pipeline: blocks on the generation step. Every deploy waits for a fresh client to be produced, regardless of whether the schema actually changed.

The cost compounds. The deeper you go, the more the generate-from-schema architecture becomes the application's biggest liability.

03 — The pattern

It isn't a bug in any one tool. It's the shape of the architecture.

Prisma is well-engineered. Drizzle is well-engineered. TypeORM has shipped for years. The issue isn't the quality of any one tool — it's that they all pick the same architectural pattern, and that pattern has a fixed cost.

Path A · codegen ORM · build-time + runtime

  • schema · source of truth
  • build step · minutes at scale · wait
  • generated code · committed
  • import · at startup
  • runtime

Five stations. Every schema change re-runs the whole sequence, and that loop is where everything compounds.

Path B · zanith · runtime only

  • schema · source of truth
  • parse · 22.9ms
  • graph · 0.73ms / lookup
  • runtime · 2.4µs / query

Four stations. None of them is a build step. The schema change is the deploy.

Every cost in the previous chapter follows from the codegen pattern. The generated client grows because the pattern requires it. The compiler slows because the pattern produces more for it to compile. The deploy pipeline blocks because the pattern has a build step that must run before the runtime can start.

You cannot fix this by writing the codegen better. The cost is in the shape, not the implementation.

04 — The lock · why the patches don't fix it

Four patches. Each one moves the cost; none removes it.

Codegen ORMs have tried, and the patches are well-known. They are sensible engineering. None of them removes the architectural cost; each one moves it somewhere else.

Tried · patch 01 · failed

Incremental generation: regenerate only the changed models.

build → runtime · cost moves

Saves time per change but doesn't shrink the generated client; the runtime cost stays linear in schema size.

Tried · patch 02 · failed

Lazy / on-demand generation: skip generation locally; run only at deploy.

local → ci · cost moves

Defers the cost rather than removing it; CI now blocks where development used to.

Tried · patch 03 · failed

Smarter caching: cache the generated artifacts across runs.

first run → n+1 · cost moves

Helps single-developer iteration; doesn't help the team or the deploy pipeline.

Tried · patch 04 · failed

Splitting the schema: break a 1000-model schema into ten 100-model schemas.

1 pipeline → n pipelines · cost moves

Each schema is faster, but now you maintain ten generation pipelines and lose cross-schema type safety.

The only fix is to remove the generation step entirely. Anything else is a tax that gets paid in a different currency.

05 — The shift

Schema as runtime data.

Zanith reads the schema at startup, parses it once into a graph in memory, and uses that graph directly. Queries compile to parameterized SQL on the way out. Types come from TypeScript inference over the schema, not from a generated .d.ts file.
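The parse-once idea can be reduced to a sketch. This is not Zanith's parser (the real pipeline is a lexer, parser, and validator); it is a toy, regex-based reduction of the same shape: DSL text in at startup, graph out, nothing generated. The names `parseSchema`, `Field`, and `Model` are invented here for illustration.

```typescript
// Toy illustration of schema-as-runtime-data: parse a Zanith-style
// model block into an in-memory graph. NOT the real parser.
interface Field {
  name: string;
  type: string;
  attributes: string[]; // e.g. ["@id", "@default(autoincrement())"]
}

interface Model {
  name: string;
  fields: Field[];
}

function parseSchema(src: string): Map<string, Model> {
  const graph = new Map<string, Model>();
  // Match each `model Name { ... }` block.
  const modelRe = /model\s+(\w+)\s*\{([^}]*)\}/g;
  for (const [, name, body] of src.matchAll(modelRe)) {
    const fields: Field[] = [];
    for (const line of body.split('\n')) {
      const parts = line.trim().split(/\s+/).filter(Boolean);
      if (parts.length < 2) continue; // skip blanks and malformed lines
      const [fieldName, type, ...attributes] = parts;
      fields.push({ name: fieldName, type, attributes });
    }
    graph.set(name, { name, fields });
  }
  return graph;
}

const graph = parseSchema(`
model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  createdAt DateTime @default(now())
}
`);

console.log(graph.get('User')?.fields.map((f) => f.name));
// → [ 'id', 'email', 'createdAt' ]
```

The point of the shape: the graph is ordinary data in memory, so queries, validation, and type projection can all read it directly.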

Four layers, joined on one runtime substrate. None of them runs at build time.

Runtime substrate · createZanith()
4 layers · joined in-process

PARSER · layer 01

reads .zanith files at app start

  • lexer · token stream
  • parser · CST → AST
  • validator · invariants

22.9ms / 1000 models

GRAPH · layer 02

the runtime structure, in memory

  • models · fields · enums
  • relations · indexes · uniqueness
  • type inference projection

0.73ms / 1000 lookups · 3.4MB
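The "type inference projection" bullet can be illustrated with ordinary TypeScript. A hedged sketch, assuming a model expressed as a const object; `ScalarMap` and `Infer` are names invented here, not Zanith's API. What it shows: a mapped type over runtime schema data yields static types with no generated .d.ts file.

```typescript
// Illustrative only: a model definition as plain runtime data…
const userModel = {
  id: 'Int',
  email: 'String',
  createdAt: 'DateTime',
} as const;

// …a table mapping DSL scalar names to TypeScript types…
type ScalarMap = {
  Int: number;
  String: string;
  DateTime: Date;
  Boolean: boolean;
};

// …and a mapped type projecting the model to an object type, no codegen.
type Infer<M extends Record<string, keyof ScalarMap>> = {
  [K in keyof M]: ScalarMap[M[K]];
};

type User = Infer<typeof userModel>;
// type User = { id: number; email: string; createdAt: Date }

const u: User = { id: 1, email: 'a@example.com', createdAt: new Date() };
console.log(typeof u.id, typeof u.email, u.createdAt instanceof Date);
// → number string true
```

The same object that answers lookups at runtime is the source the type system reads at compile time, which is why no .d.ts file needs to exist on disk.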

COMPILER · layer 03

AST → parameterized SQL on each query

  • expression tree · where clauses
  • join planner · projection
  • parameter binding ($1, $2, …)

2.4µs / SELECT · 17.2µs / JOIN
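A minimal sketch of the compile step's contract. This is not Zanith's compiler; `compileWhere` is invented for illustration and handles only three operators. What it shows is the shape: filter object in, parameterized SQL out, with every value bound to a `$n` placeholder and never interpolated into the string.

```typescript
// Toy where-clause compiler: filter object → parameterized SQL fragment.
type Where = Record<
  string,
  { contains?: string; startsWith?: string; equals?: string | number }
>;

function compileWhere(where: Where): { sql: string; params: unknown[] } {
  const clauses: string[] = [];
  const params: unknown[] = [];
  for (const [column, ops] of Object.entries(where)) {
    if (ops.contains !== undefined) {
      params.push(`%${ops.contains}%`);        // value goes into params…
      clauses.push(`${column} ILIKE $${params.length}`); // …placeholder into SQL
    }
    if (ops.startsWith !== undefined) {
      params.push(`${ops.startsWith}%`);
      clauses.push(`${column} LIKE $${params.length}`);
    }
    if (ops.equals !== undefined) {
      params.push(ops.equals);
      clauses.push(`${column} = $${params.length}`);
    }
  }
  return { sql: `WHERE ${clauses.join(' AND ')}`, params };
}

const q = compileWhere({ email: { contains: '@example.com' }, role: { equals: 'admin' } });
console.log(q.sql);    // → WHERE email ILIKE $1 AND role = $2
console.log(q.params); // → [ '%@example.com%', 'admin' ]
```

Because the output is a string plus a parameter array, the adapter can hand both straight to the driver; nothing user-supplied ever touches the SQL text.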

ADAPTER · layer 04

pluggable wire driver

  • shipped: pg · postgres.js
  • planned: mysql · sqlite
  • 5-method interface · drop-in

<5µs engine-side

parser · graph · compiler · adapter · no build step · schema = runtime
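The adapter seam can be sketched as an interface plus a stub. Zanith's contract is described above only as a 5-method interface; the five method names below (`connect`, `disconnect`, `query`, `begin`, `commit`) are guesses for illustration, not the actual API. A real adapter would wrap pg or postgres.js and forward the parameterized SQL on the wire.

```typescript
// Hypothetical adapter contract: five methods, names assumed.
interface Adapter {
  connect(): Promise<void>;
  disconnect(): Promise<void>;
  query(sql: string, params: unknown[]): Promise<{ rows: unknown[] }>;
  begin(): Promise<void>;
  commit(): Promise<void>;
}

// In-memory stub: records what it is asked to run and echoes it back.
// Useful as a test double; carries no real wire protocol.
class MemoryAdapter implements Adapter {
  readonly log: string[] = [];
  async connect(): Promise<void> {}
  async disconnect(): Promise<void> {}
  async begin(): Promise<void> { this.log.push('BEGIN'); }
  async commit(): Promise<void> { this.log.push('COMMIT'); }
  async query(sql: string, params: unknown[]): Promise<{ rows: unknown[] }> {
    this.log.push(sql);
    return { rows: [{ sql, params }] };
  }
}

(async () => {
  const db = new MemoryAdapter();
  await db.connect();
  const res = await db.query('SELECT id FROM users WHERE id = $1', [1]);
  console.log(res.rows.length); // → 1
  await db.disconnect();
})();
```

Because the engine talks only to this interface, swapping postgres.js for mysql is an adapter change rather than an engine change, which is what makes the planned adapters cheap to add.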

What it looks like in code

A typed call on the model API, compiled to parameterized SQL. Inspectable, predictable, no magic on the wire.

example.ts → users.findMany

const users = await db.user.findMany({
  where: { email: { contains: '@example.com' } },
  orderBy: { createdAt: 'desc' },
  take: 10,
});
typed against schema graph · ready
compiled.sql
SELECT id, email, name, created_at
FROM users
WHERE email ILIKE $1
ORDER BY created_at DESC
LIMIT 10;
parameters bound · never interpolated · ready


06 — What changes at 1000 models

What this looks like at a thousand models.

  • M01 · 22.9ms · schema compile · 1000 models · src: scale.test.ts
  • M02 · 0.73ms · model lookups · 1000 lookups · src: scale.test.ts
  • M03 · 3.4MB · graph footprint · 1000 models · src: scale.test.ts
  • M04 · 88.97KB · ESM bundle · engine/dist/index.js · src: tsup

The schema-change scenario

Edit. Restart. Done.

Frame 01 · editor · T+0s

Edit schema.zanith

schema.zanith

model User {
  id    Int     @id @default(autoincrement())
  email String  @unique
+ phone String? @unique
  name  String?
}

Ln 4 · zanith DSL · +1 · saved
Frame 02 · terminal · T+22ms

App reparse

shell · zsh · ~/zanith

$ pnpm dev
[zanith] schema.zanith — 4 fields → 5 fields
[zanith] graph rebuild · 22.9ms
[zanith] runtime ready · 1000 models
$

4 lines · 22.9ms total · ready
Frame 03 · result · T+25ms

New field live

query.ts

await db.user.findMany({
  where: { phone: { startsWith: '+1' } },
});

compiles to · 2.4µs

compiled SQL · parameters bound

SELECT id, email, phone FROM users
WHERE phone LIKE $1

1 query · 2 lines of SQL · executed

No generate. No regenerated client to commit. No watch process to fight. Twenty-five milliseconds, end-to-end.

Memory · 1000 models

3.4MB. Roughly the size of one user-uploaded photo.

The graph holds models, fields, enums, relations, indexes, and uniqueness constraints — all of it, in memory, for the lifetime of the process. A 500-model generated Prisma client commonly runs to tens of megabytes on disk before it's loaded; the runtime graph is an order of magnitude smaller.

Each cell ≈ 10KB · 340 of 1000 cells filled

The schema change is the deploy.

07 — The receipts · live in the test suite

The numbers above live in the test suite. Here is what running it actually prints.

vitest · engine/test · v0.34.6 · run #1247
$ pnpm test
 
vitest run
 
✓ test/compiler/select.test.ts (8)
✓ test/compiler/insert.test.ts (5)
✓ test/expression/expr.test.ts (12)
✓ test/integration/pipeline.test.ts (6)
✓ test/edge-cases/negative.test.ts (22)
✓ test/types/end-to-end.test.ts (13)
✓ test/benchmark/scale.test.ts (6) 427ms
⚠ test/benchmark/execution.test.ts (1) 466ms
❯ pipeline-stages: ratio assertion (flaky)
 
Test Files 19 passed (19)
Tests 191 passed | 1 failed (192)
Duration 1.03s
 
[disclosure] the one failure is a benchmark
ratio asserting against a near-zero baseline
→ flaky run-to-run, not a real regression
committed at every push · ci-green policy on main · 191/192 · disclosed honestly

benchmark file

schema scale

scale.test.ts

  • schema compile, 22.9ms
  • lookups, 0.73ms
  • memory, 3.4MB

benchmark file

per-op cost

execution.test.ts

  • SELECT compile, 2.4µs
  • JOIN compile, 17.2µs
  • insert, 2.8µs

benchmark file

integration

pipeline.test.ts

  • end-to-end build + compile
  • transaction rollback paths

The full breakdown — every figure, with comparison anchors, source files, and honest caveats — lives on the proof page.

Open the proof page