Lex, parse, and validate the .zanith source into a runtime graph. Sub-linear at scale, because validation reuses tokenization state.
Proof · 01 — The receipts
Real numbers. Sourced. Re-run on every commit.
Every figure on this page comes from a benchmark file in the engine repo, with the source named below it. The benchmarks re-run on every commit, and the page is updated when the numbers move. The disclosures at the bottom are part of the page, not buried.
- 191/192 · tests passing
- 22.9ms · schema compile · 1k models
- 2.4µs · SELECT compile
- 88.97KB · ESM bundle
How the engine behaves at a thousand models.
Indexed lookup against the in-memory model registry. Sub-microsecond per call — well below any human-perceptible threshold.
Total memory footprint of the runtime graph at 1k models. Order of magnitude smaller than a comparably-sized generated client on disk.
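The sub-microsecond lookup claim above is what a single hash-map probe buys: cost is independent of how many models are registered. A minimal sketch of that shape — every name here is hypothetical, not the engine's actual API:

```typescript
// Hypothetical sketch of an in-memory model registry.
// Lookup is one Map.get: a single hash probe, no scan of the graph,
// so the cost does not grow with model count.
type ModelDef = { name: string; fields: string[] };

class ModelRegistry {
  private models = new Map<string, ModelDef>();

  register(def: ModelDef): void {
    this.models.set(def.name, def);
  }

  lookup(name: string): ModelDef | undefined {
    return this.models.get(name);
  }
}

// Populate 1,000 models, then look one up.
const registry = new ModelRegistry();
for (let i = 0; i < 1000; i++) {
  registry.register({ name: `Model${i}`, fields: ["id"] });
}
console.log(registry.lookup("Model500")?.name); // prints "Model500"
```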
Every measured operation, all on one page.
| Operation | What's compiled |
| --- | --- |
| Expression · simple eq | `{ field: value }` |
| Expression · AND/OR | `{ AND: [a, b], OR: [c, d] }` |
| SELECT · with WHERE | where + projection |
| findMany · build + compile | args validate + AST + emit |
| JOIN · projection · WHERE | 1 included relation |
| GROUP BY + COUNT + SUM | aggregate compile |
| INSERT · single row | `RETURNING *` |
| UPSERT · ON CONFLICT DO UPDATE | deduped by unique field |
| Bulk INSERT · 10 rows | values list |
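The findMany row above names the compile pipeline's three stages: validate args, build an AST, emit SQL. A toy sketch of that shape — all names are hypothetical and far simpler than the real compiler:

```typescript
// Toy three-stage compile: validate args → build AST → emit SQL.
type WhereEq = { field: string; value: unknown };
type SelectAst = { table: string; where: WhereEq[] };

// Stage 1 + 2: validate the arguments and build a tiny AST.
function validateArgs(table: string, where: Record<string, unknown>): SelectAst {
  if (!table) throw new Error("table name required");
  const clauses = Object.entries(where).map(([field, value]) => ({ field, value }));
  return { table, where: clauses };
}

// Stage 3: emit a parameterized SQL string from the AST.
function emitSql(ast: SelectAst): { sql: string; params: unknown[] } {
  const conds = ast.where.map((w, i) => `${w.field} = $${i + 1}`).join(" AND ");
  const sql = `SELECT * FROM ${ast.table}` + (conds ? ` WHERE ${conds}` : "");
  return { sql, params: ast.where.map((w) => w.value) };
}

const { sql, params } = emitSql(validateArgs("users", { id: 7 }));
console.log(sql); // SELECT * FROM users WHERE id = $1
```

The point of the shape is that all three stages are pure string and object work: no I/O, which is why the page can report them in microseconds.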
191 of 192 passing. One flaky benchmark, called out plainly.
$ pnpm test

vitest run · zanith engine

✓ test/compiler/select.test.ts (8)
✓ test/compiler/insert.test.ts (5)
✓ test/compiler/update.test.ts (4)
✓ test/compiler/delete.test.ts (3)
✓ test/expression/expr.test.ts (12)
✓ test/expression/where.test.ts (18)
✓ test/integration/pipeline.test.ts (6)
✓ test/integration/typed-client.test.ts (9)
✓ test/edge-cases/negative.test.ts (22)
✓ test/edge-cases/null-handling.test.ts (8)
✓ test/types/end-to-end.test.ts (13)
✓ test/types/inference.test.ts (7)
✓ test/schema/parser.test.ts (15)
✓ test/schema/validator.test.ts (11)
✓ test/schema/relations.test.ts (9)
✓ test/schema/builder.test.ts (12)
✓ test/benchmark/scale.test.ts (6) 427ms
⚠ test/benchmark/execution.test.ts (1) 466ms
  ❯ pipeline-stages: ratio assertion (flaky)

Test Files  19 passed (19)
     Tests  191 passed | 1 failed (192)
  Duration  1.03s
The 192nd test, in plain English
A ratio asserted against a near-zero baseline.
The flaky assertion compares two timings — a relational query versus a near-zero raw-string-concat baseline — and divides them. Because the baseline is so close to zero, tiny CPU jitter swings the ratio by 100x between runs. The per-op check directly above it (< 50µs) is stable and passes every time. We're reporting 191 / 192 rather than rounding to all-green; the flaky check will be rewritten to drop the ratio.
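The failure mode is easy to reproduce outside the suite: dividing a stable timing by a near-zero one amplifies jitter into the quotient. A standalone illustration with made-up numbers, not the actual test code:

```typescript
// Why a ratio against a near-zero baseline is unstable.
// A fraction of a microsecond of jitter barely moves an absolute
// timing, but it swings the ratio by orders of magnitude.
const queryUs = 40; // stable: well under the 50µs per-op bound
const baselineRunsUs = [0.01, 0.5, 1.2]; // near-zero baseline, jittering run to run

for (const baseUs of baselineRunsUs) {
  const ratio = queryUs / baseUs;
  console.log(`baseline ${baseUs}µs → ratio ${ratio.toFixed(0)}x`);
}
// The absolute check (queryUs < 50) passes on every run above;
// the ratio swings from 33x to 4000x on the same machine.
```

This is why dropping the ratio in favor of the absolute per-op bound makes the check deterministic.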
05 — What this page doesn't measure
The numbers above are real. Here's what they aren't.
Per the VOICE.md disclaimer rule: every claim on this page carries a caveat about what the figure does and does not cover. Four gaps, listed plainly, because what a page omits shapes its credibility as much as what it shows.
- Gap · 01 · scope · by design
End-to-end query time against a live database
Every µs on this page is engine overhead. Network latency, connection setup, and the database's actual execution time are excluded. If you want end-to-end numbers, the local benchmark setup in §06 will produce them.
- Gap · 02 · in progress
Comparable benchmarks against Prisma, Drizzle, TypeORM
We don't have sourced competitor figures yet. Until we do, the marketing copy uses one consistent hedge: "codegen ORMs at this scale spend minutes regenerating." Specific µs comparisons will land when we can run a matched workload on each.
- Gap · 03 · planned
Long-running memory churn
The 3.4MB figure is the steady-state graph footprint. We haven't yet measured what happens to memory under continuous schema reloads, sustained query throughput, or multi-hour runs under leak detection.
- Gap · 04 · downstream concern
Performance on hot paths inside the database
The compiler's choice of joins, projections, and parameter shapes affects how the database plans the query. We measure what we emit, not how the planner reacts. The /examples page shows the SQL we generate; production tuning lives downstream.
Reproducible benchmarks
~/zanith/engine $ cd packages/benchmarks
~/zanith/engine/benchmarks $ npm install
# Run the 1000-model massive graph benchmark
~/zanith/engine/benchmarks $ npx tsx run.ts --type massive
✓ Compiling runtime graph...
> Zanith Core Initialized in 59.12ms
> Peak RSS Memory: 14.3MB
"Don't trust this page. Run it yourself."