zanith

Proof · 01 — The receipts

Real numbers. Sourced. Re-run on every commit.

Every figure on this page comes from a benchmark file in the engine repo, with the source named below it. They re-run on every commit; the page is updated when the numbers move. The disclosures at the bottom are part of the page, not buried.

measured · 191/192 · tests passing
measured · 22.9ms · schema compile · 1k models
measured · 2.4µs · SELECT compile
measured · 88.97KB · ESM bundle
v0.1.0 · early
numbers reflect engine compilation overhead, not end-to-end query time
last run · CI · main
02 — Schema-scale benchmarks · scale.test.ts

How the engine behaves at a thousand models.

row · 01 · 10 / 100 / 500 / 1k
22.9ms
schema compile · 1000 models

Lex + parse + validate the .zanith source into a runtime graph. Sub-linear at scale because validation reuses tokenization state.

src · engine/test/benchmark/scale.test.ts
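The 22.9ms figure is wall-clock time around a single compile pass. A minimal sketch of that harness shape, where `compileSchema` is a hypothetical stand-in for the real lexer/parser/validator, not the engine's actual API:

```typescript
// Sketch of the timing shape behind figures like "22.9ms · 1k models".
// `compileSchema` is a stand-in: it builds a trivial model graph,
// not the real zanith lex/parse/validate pipeline.
type Model = { name: string; fields: string[] };

function compileSchema(modelCount: number): Map<string, Model> {
  const graph = new Map<string, Model>();
  for (let i = 0; i < modelCount; i++) {
    graph.set(`Model${i}`, { name: `Model${i}`, fields: ["id", "createdAt"] });
  }
  return graph;
}

// Time one compile pass on a monotonic clock; report milliseconds.
function timeCompile(modelCount: number): { ms: number; graph: Map<string, Model> } {
  const start = performance.now();
  const graph = compileSchema(modelCount);
  return { ms: performance.now() - start, graph };
}

const run = timeCompile(1000);
console.log(`compiled ${run.graph.size} models in ${run.ms.toFixed(2)}ms`);
```

The monotonic `performance.now()` clock matters here: `Date.now()` can jump under NTP adjustment and would corrupt sub-millisecond readings.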
row · 02 · 10 / 100 / 500 / 1k
0.73ms
1000 model lookups

Indexed lookup against the in-memory model registry. Sub-microsecond per call — well below any human-perceptible threshold.

src · engine/test/benchmark/scale.test.ts
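Sub-microsecond lookups are what a Map-backed registry buys: O(1) per call regardless of model count. A minimal sketch; `ModelRegistry` and its shape are hypothetical names for illustration, not the engine's API:

```typescript
// Minimal sketch of a Map-backed model registry. Lookup is O(1) per
// call, which is what keeps 1000 lookups in the sub-millisecond range.
interface ModelDef {
  name: string;
  table: string;
}

class ModelRegistry {
  private models = new Map<string, ModelDef>();

  register(def: ModelDef): void {
    this.models.set(def.name, def);
  }

  // Throwing on a miss keeps callers from silently compiling
  // against a model that was never registered.
  lookup(name: string): ModelDef {
    const def = this.models.get(name);
    if (!def) throw new Error(`unknown model: ${name}`);
    return def;
  }
}

const registry = new ModelRegistry();
for (let i = 0; i < 1000; i++) {
  registry.register({ name: `Model${i}`, table: `model_${i}` });
}
console.log(registry.lookup("Model500").table); // model_500
```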
row · 03 · 10 / 100 / 500 / 1k
3.4MB
graph memory · 1000 models

Total memory footprint of the runtime graph at 1k models. An order of magnitude smaller than a comparably sized generated client on disk.

src · engine/test/benchmark/scale.test.ts
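One way to approximate a steady-state figure like 3.4MB is a heap delta around graph construction. A rough sketch assuming a Node runtime; results vary with GC timing, and this is not the repo's harness:

```typescript
// Approximate a graph's steady-state footprint as the heapUsed delta
// around construction. Illustrative only: GC timing makes single-shot
// deltas noisy, so real harnesses average several runs.
function buildGraph(modelCount: number): Map<string, object> {
  const graph = new Map<string, object>();
  for (let i = 0; i < modelCount; i++) {
    graph.set(`Model${i}`, { fields: ["id", "name"], relations: [] });
  }
  return graph;
}

// Force a collection first if available (requires node --expose-gc).
(globalThis as any).gc?.();

const before = process.memoryUsage().heapUsed;
const graph = buildGraph(1000);
const after = process.memoryUsage().heapUsed;

console.log(`graph of ${graph.size} models ≈ ${((after - before) / 1024).toFixed(1)}KB on heap`);
```

Note `heapUsed` measures live JS objects; the benchmark transcript's "Peak RSS" is a different, larger number that includes the whole process image.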
03 — Per-operation cost · execution.test.ts · 10,000 iterations each

Every measured operation, all on one page.

group · operation · cost / op
expression

Expression · simple eq

{ field: value }

1.2µs
expression

Expression · AND/OR

{ AND: [a, b], OR: [c, d] }

2.4µs
compile

SELECT · with WHERE

where + projection

2.4µs
compile

findMany · build + compile

args validate + AST + emit

2.1µs
compile

JOIN · projection · WHERE

1 included relation

17.2µs
compile

GROUP BY + COUNT + SUM

aggregate compile

5.5µs
write

INSERT · single row

RETURNING *

2.8µs
write

UPSERT · ON CONFLICT DO UPDATE

deduped by unique field

2.7µs
write

Bulk INSERT · 10 rows

values list

7.6µs
9 operations · 4 groups · each = engine overhead, not query execution time
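The expression and compile rows above all start from plain object filters. A minimal, hypothetical sketch of how `{ field: value }` and `AND`/`OR` shapes can compile to parameterized SQL; the real compiler also handles operators, relations, and NULL semantics:

```typescript
// Hypothetical sketch: compile a where-object into parameterized SQL.
// Simple equality becomes "field = $n"; AND/OR recurse over children.
type Where =
  | { AND: Where[] }
  | { OR: Where[] }
  | Record<string, string | number>;

function compileWhere(where: Where, params: unknown[] = []): { sql: string; params: unknown[] } {
  if ("AND" in where && Array.isArray((where as any).AND)) {
    const parts = (where as { AND: Where[] }).AND.map((w) => compileWhere(w, params).sql);
    return { sql: `(${parts.join(" AND ")})`, params };
  }
  if ("OR" in where && Array.isArray((where as any).OR)) {
    const parts = (where as { OR: Where[] }).OR.map((w) => compileWhere(w, params).sql);
    return { sql: `(${parts.join(" OR ")})`, params };
  }
  // Base case: each { field: value } pushes a parameter and emits a placeholder.
  const clauses = Object.entries(where).map(([field, value]) => {
    params.push(value);
    return `${field} = $${params.length}`;
  });
  return { sql: clauses.join(" AND "), params };
}

const { sql, params } = compileWhere({
  AND: [{ status: "active" }, { OR: [{ role: "admin" }, { role: "owner" }] }],
});
console.log(sql);    // (status = $1 AND (role = $2 OR role = $3))
console.log(params); // values: "active", "admin", "owner"
```

Pushing values into a parameter list instead of interpolating them is also why per-op cost stays flat: the emitted SQL string shape is independent of the data.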
04 — The test suite · 191 / 192 · disclosed

191 of 192 passing. One flaky benchmark, called out plainly.

vitest · run #1247 · main · 19 files
$ pnpm test
vitest run · zanith engine
✓ test/compiler/select.test.ts (8)
✓ test/compiler/insert.test.ts (5)
✓ test/compiler/update.test.ts (4)
✓ test/compiler/delete.test.ts (3)
✓ test/expression/expr.test.ts (12)
✓ test/expression/where.test.ts (18)
✓ test/integration/pipeline.test.ts (6)
✓ test/integration/typed-client.test.ts (9)
✓ test/edge-cases/negative.test.ts (22)
✓ test/edge-cases/null-handling.test.ts (8)
✓ test/types/end-to-end.test.ts (13)
✓ test/types/inference.test.ts (7)
✓ test/schema/parser.test.ts (15)
✓ test/schema/validator.test.ts (11)
✓ test/schema/relations.test.ts (9)
✓ test/schema/builder.test.ts (12)
✓ test/benchmark/scale.test.ts (6) 427ms
⚠ test/benchmark/execution.test.ts (1) 466ms
❯ pipeline-stages: ratio assertion (flaky)
Test Files 19 passed (19)
Tests 191 passed | 1 failed (192)
Duration 1.03s
committed at every push · ci-green policy on main · disclosed honestly, not rounded

The 192nd test, in plain English

A ratio assertion against a near-zero baseline.

The flaky assertion compares two timings — a relational query versus a near-zero raw-string-concat baseline — and divides them. Because the baseline is so close to zero, tiny CPU jitter swings the ratio by 100x between runs. The per-op check directly above it (< 50µs) is stable and passes every time. We're reporting 191 / 192 rather than rounding to all-green; the flaky check will be rewritten to drop the ratio.

honest disclosure

05 — What this page doesn't measure

The numbers above are real. Here's what they aren't.

Per the VOICE.md disclaimer rule, every claim on this page carries a caveat about what the figure does and does not cover. Four gaps, listed plainly, because what's absent shapes credibility too.

  • Gap · 01 · scope · by design

    End-to-end query time against a live database

    Every µs on this page is engine overhead. Network latency, connection setup, and the database's actual execution time are excluded. If you want end-to-end numbers, the local benchmark setup in §06 will produce them.

  • Gap · 02 · in progress

    Comparable benchmarks against Prisma, Drizzle, TypeORM

    We don't have sourced competitor figures yet. Until we do, the marketing copy uses one consistent hedge: "codegen ORMs at this scale spend minutes regenerating." Specific µs comparisons will land when we can run a matched workload on each.

  • Gap · 03 · planned

    Long-running memory churn

    The 3.4MB figure is the steady-state graph footprint. We haven't yet measured what happens to memory under continuous schema reloads, sustained query throughput, or hours-long leak-detection runs.

  • Gap · 04 · downstream concern

    Performance on hot-paths inside the DB

    The compiler's choice of joins, projections, and parameter shapes affects how the database plans the query. We measure what we emit, not how the planner reacts. The /examples page shows the SQL we generate; production tuning lives downstream.

06 — Reproducible benchmarks

Benchmark Script
~/zanith/engine $ git clone <engine-repo>
~/zanith/engine $ cd packages/benchmarks
~/zanith/engine/packages/benchmarks $ npm install

# Run the 1000-model massive graph benchmark
~/zanith/engine/packages/benchmarks $ npx tsx run.ts --type massive

✓ Generating 1000 interconnected models... (done)
✓ Compiling runtime graph...
> Zanith Core Initialized in 59.12ms
> Peak RSS Memory: 14.3MB

"Don't trust this page. Run it yourself."