Observability
Four plugins that ship with the engine. All built on the same plugin surface. None are wired up by default — register the ones you want.
The four
| Plugin | Use it for |
|---|---|
| consoleLogger | Local dev. One line per query to stdout. |
| slowQueryLogger | Production alerting. Fires a callback only when a query exceeds a latency threshold. |
| structuredLogger | Production log shipping. Emits a LogRecord per query for pino / winston / Datadog. |
| openTelemetryPlugin | Distributed tracing. Wraps every query in an OTel span. |
consoleLogger — dev
```ts
import { consoleLogger } from 'zanith';

await createZanith({
  adapter,
  models,
  plugins: [consoleLogger({ slowMs: 200 })],
});

// · 4ms rows=10 SELECT id, email, name, created_at FROM users WHERE …
// ⚠ slow 312ms rows=1000 SELECT … FROM orders ORDER BY created_at DESC
// ✗ 12ms INSERT INTO users … — duplicate key value violates unique constraint
```

| Option | Default | Effect |
|---|---|---|
| slowMs | 200 | Queries above this latency get a ⚠ slow tag. |
| truncate | 240 | Truncate SQL longer than this in the printed line. |
slowQueryLogger — alerting
Healthy queries don't spam your logs. Only queries above thresholdMs fire the callback.
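One caveat: the onSlow callback fires for every query over the threshold, so a sustained slowdown on a hot endpoint can flood the alerting channel. A rate-limiting wrapper is one way to cap that; this is a sketch under our own assumptions — `rateLimited` is not a zanith export, and the `info` shape mirrors the fields slowQueryLogger passes (sql, rowCount, elapsedMs):

```typescript
// Sketch: wrap an onSlow-style callback so it fires at most once per window.
interface SlowQueryInfo {
  sql: string;
  rowCount?: number;
  elapsedMs: number;
}

function rateLimited(
  onSlow: (info: SlowQueryInfo) => void,
  windowMs = 60_000,
): (info: SlowQueryInfo) => void {
  let lastFired = -Infinity;
  return (info) => {
    const now = Date.now();
    if (now - lastFired < windowMs) return; // drop alerts inside the window
    lastFired = now;
    onSlow(info);
  };
}
```

You would then pass `onSlow: rateLimited(shipToSentry)` instead of the raw callback.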
```ts
import { slowQueryLogger } from 'zanith';
import * as Sentry from '@sentry/node';

plugins: [
  slowQueryLogger({
    thresholdMs: 500,
    onSlow: (info) => {
      Sentry.captureMessage('slow query', {
        level: 'warning',
        extra: {
          sql: info.sql,
          rowCount: info.rowCount,
          elapsedMs: info.elapsedMs,
        },
      });
    },
  }),
],
```

structuredLogger — JSON line per query
Pass a sink and you get a LogRecord per query. Wrap your logger in the sink to inject request id, user id, or trace id from AsyncLocalStorage.
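The `requestContext` used in the example that follows is not part of zanith — it stands in for whatever per-request store your app keeps. A minimal AsyncLocalStorage-backed sketch:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Hypothetical per-request context; your HTTP framework's middleware
// would call run() once per request to bind it to the async call chain.
interface RequestContext {
  requestId: string;
  userId?: string;
}

const storage = new AsyncLocalStorage<RequestContext>();

export const requestContext = {
  // Bind a context to everything executed inside fn, including awaits.
  run<T>(ctx: RequestContext, fn: () => T): T {
    return storage.run(ctx, fn);
  },
  // Read it anywhere downstream — e.g. inside a structuredLogger sink.
  get(): RequestContext | undefined {
    return storage.getStore();
  },
};
```

In Express-style middleware that would look like `requestContext.run({ requestId: id }, next)`.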
```ts
import pino from 'pino';
import { structuredLogger } from 'zanith';

const log = pino();

plugins: [
  structuredLogger({
    sink: (record) => {
      const ctx = requestContext.get();
      log[record.level]({ ...record, requestId: ctx?.requestId });
    },
    warnAboveMs: 200,
  }),
],
```

LogRecord shape
```ts
interface LogRecord {
  level: 'info' | 'warn' | 'error';
  event: 'query' | 'query_error';
  sql: string;
  paramCount: number;
  rowCount?: number;
  elapsedMs: number;
  error?: { message: string; name: string };
  context?: Record<string, unknown>;
}
```

| Option | Default | Effect |
|---|---|---|
| sink | required | Receives one record per query. Sync or async. |
| warnAboveMs | 200 | Latency above this becomes level: 'warn'. |
| maxSqlLength | 1000 | Truncate SQL longer than this in records. |
openTelemetryPlugin — tracing
The engine doesn't depend on @opentelemetry/api — pass in the tracer (or a wrapper around it) so version pinning stays on the application side.
```ts
import { trace } from '@opentelemetry/api';
import { openTelemetryPlugin } from 'zanith';

plugins: [
  openTelemetryPlugin({
    tracer: trace.getTracer('zanith'),
    spanName: 'db.query',
    includeSql: true,
    maxSqlLength: 2000,
  }),
],
```

Span attributes
| Attribute | Value |
|---|---|
| db.system | 'postgres' / 'sqlite' / dialect name |
| db.operation_kind | 'execute' or 'executeRaw' |
| db.statement | Truncated SQL — disabled with includeSql: false |
| db.params.count | Parameter count (not values) |
| db.row_count | Set on the result hook |
| db.elapsed_ms | Set on the result / error hook |
Compose them
The plugins compose freely — register as many as you want. The engine runs them in registration order, and a slow sink adds its latency to the query itself, so prefer fire-and-forget sinks.
```ts
plugins: [
  // Local console output, only when developing.
  ...(process.env.NODE_ENV === 'development' ? [consoleLogger()] : []),

  // Always shipped to the log aggregator.
  structuredLogger({ sink: (r) => log.info(r) }),

  // Always traced.
  openTelemetryPlugin({ tracer: trace.getTracer('zanith') }),

  // Slow path goes to Sentry too.
  slowQueryLogger({ thresholdMs: 1000, onSlow: shipToSentry }),
],
```
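One way to keep a sink fire-and-forget is to buffer records in memory and ship them off the query path on a timer. A sketch under our own assumptions — `bufferedSink` and `ship` are hypothetical names, and the record type here is a pared-down stand-in for zanith's LogRecord:

```typescript
// Sketch: a sink that never blocks the query path.
interface SlimRecord {
  level: 'info' | 'warn' | 'error';
  sql: string;
  elapsedMs: number;
}

function bufferedSink(
  ship: (batch: SlimRecord[]) => Promise<void>, // e.g. HTTP POST to an aggregator
  flushEveryMs = 1000,
): (record: SlimRecord) => void {
  const buffer: SlimRecord[] = [];
  const timer = setInterval(() => {
    if (buffer.length === 0) return;
    const batch = buffer.splice(0, buffer.length);
    // Swallow shipping errors so a flaky aggregator never surfaces in queries.
    void ship(batch).catch(() => {});
  }, flushEveryMs);
  timer.unref?.(); // don't keep the process alive just for the flush timer

  // The sink itself is synchronous: push and return immediately.
  return (record) => {
    buffer.push(record);
  };
}
```

You could then register it as `structuredLogger({ sink: bufferedSink(postToAggregator) })`, trading at most one flush interval of loss on crash for zero I/O on the query path.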