# Observability

Four plugins ship with the engine, all built on the same plugin surface (sketched just after the table below). None are wired up by default; register the ones you want.

## The four

| Plugin | Use it for |
| --- | --- |
| `consoleLogger` | Local dev. One line per query to stdout. |
| `slowQueryLogger` | Production alerting. Fires a callback only when a query exceeds a latency threshold. |
| `structuredLogger` | Production log shipping. Emits a `LogRecord` per query for pino / winston / Datadog. |
| `openTelemetryPlugin` | Distributed tracing. Wraps every query in an OTel span. |
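
For a feel of that shared surface, here is a toy plugin. The hook names (`onResult`, `onError`) and the object shape are hypothetical, not taken from this page; the span-attributes table further down only confirms that result and error hooks exist, so check the actual plugin API before copying this.

```ts
// Toy plugin on the same surface the bundled loggers use.
// Hook names and shape are hypothetical illustrations.
const queryCounter = () => {
  let total = 0;
  return {
    name: 'query-counter',
    onResult: () => { total += 1; },
    onError: () => { total += 1; },
    stats: () => ({ total }),
  };
};
```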

## consoleLogger — dev

```ts
import { createZanith, consoleLogger } from 'zanith';

await createZanith({
  adapter, models,
  plugins: [consoleLogger({ slowMs: 200 })],
});

// · 4ms rows=10 SELECT id, email, name, created_at FROM users WHERE …
// ⚠ slow 312ms rows=1000 SELECT … FROM orders ORDER BY created_at DESC
// ✗ 12ms INSERT INTO users … — duplicate key value violates unique constraint
```
| Option | Default | Effect |
| --- | --- | --- |
| `slowMs` | `200` | Queries above this latency get a `⚠ slow` tag. |
| `truncate` | `240` | Truncate SQL longer than this in the printed line. |

## slowQueryLogger — alerting

Healthy queries don't spam your logs. Only queries above `thresholdMs` fire the callback.

```ts
import { slowQueryLogger } from 'zanith';
import * as Sentry from '@sentry/node';

plugins: [
  slowQueryLogger({
    thresholdMs: 500,
    onSlow: (info) => {
      Sentry.captureMessage('slow query', {
        level: 'warning',
        extra: {
          sql: info.sql,
          rowCount: info.rowCount,
          elapsedMs: info.elapsedMs,
        },
      });
    },
  }),
],
```
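
One operational note: a hot endpoint with one slow query can fire `onSlow` on every request. A cheap throttle in the callback (plain closure state, nothing from zanith itself) keeps alert volume bounded:

```ts
// At most one Sentry event per 30-second window.
let lastSent = 0;

const onSlow = (info: { sql: string; rowCount?: number; elapsedMs: number }) => {
  const now = Date.now();
  if (now - lastSent < 30_000) return; // drop the flood, keep the signal
  lastSent = now;
  Sentry.captureMessage('slow query', {
    level: 'warning',
    extra: { sql: info.sql, rowCount: info.rowCount, elapsedMs: info.elapsedMs },
  });
};
```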

## structuredLogger — JSON line per query

Pass a `sink` and you get a `LogRecord` per query. Wrap your logger in the sink to inject a request id, user id, or trace id from `AsyncLocalStorage` (a sketch of such a context helper follows the snippet).

```ts
import pino from 'pino';
import { structuredLogger } from 'zanith';

const log = pino();

plugins: [
  structuredLogger({
    sink: (record) => {
      const ctx = requestContext.get();
      log[record.level]({ ...record, requestId: ctx?.requestId });
    },
    warnAboveMs: 200,
  }),
],
```
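
`requestContext` in the snippet isn't part of zanith; it stands in for whatever per-request storage your app already has. A minimal sketch on Node's `AsyncLocalStorage` (names hypothetical):

```ts
import { AsyncLocalStorage } from 'node:async_hooks';

// Hypothetical app-level helper: per-request data the sink can read.
type RequestCtx = { requestId: string };
const als = new AsyncLocalStorage<RequestCtx>();

export const requestContext = {
  get: () => als.getStore(),
  // Call once per request, e.g. from HTTP middleware.
  run: <T>(ctx: RequestCtx, fn: () => T): T => als.run(ctx, fn),
};
```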

### LogRecord shape

```ts
interface LogRecord {
  level: 'info' | 'warn' | 'error';
  event: 'query' | 'query_error';
  sql: string;
  paramCount: number;
  rowCount?: number;
  elapsedMs: number;
  error?: { message: string; name: string };
  context?: Record<string, unknown>;
}
```
| Option | Default | Effect |
| --- | --- | --- |
| `sink` | required | Receives one record per query. Sync or async. |
| `warnAboveMs` | `200` | Latency above this becomes `level: 'warn'`. |
| `maxSqlLength` | `1000` | Truncate SQL longer than this in records. |
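
Because `event` separates successes from failures, one sink can fan records out. A sketch; `reportError` is a hypothetical stand-in for your own error tracker:

```ts
structuredLogger({
  sink: (record) => {
    if (record.event === 'query_error') {
      // record.error carries { message, name } per the shape above.
      reportError(record.error, { sql: record.sql }); // hypothetical helper
      log.error(record);
      return;
    }
    log[record.level](record);
  },
}),
```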

## openTelemetryPlugin — tracing

The engine doesn't depend on `@opentelemetry/api` — pass in the tracer (or a wrapper around it; see the sketch after the attributes table) so version pinning stays on the application side.

```ts
import { trace } from '@opentelemetry/api';
import { openTelemetryPlugin } from 'zanith';

plugins: [
  openTelemetryPlugin({
    tracer: trace.getTracer('zanith'),
    spanName: 'db.query',
    includeSql: true,
    maxSqlLength: 2000,
  }),
],
```
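
The plugin only creates spans; exporting them is the application's job. A minimal provider setup with the OTel Node SDK (1.x API; `addSpanProcessor` moved to a constructor option in 2.x):

```ts
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { ConsoleSpanExporter, SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';

// Register a global provider so trace.getTracer('zanith') hands back a real tracer.
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();
```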

### Span attributes

| Attribute | Value |
| --- | --- |
| `db.system` | `'postgres'` / `'sqlite'` / dialect name |
| `db.operation_kind` | `'execute'` or `'executeRaw'` |
| `db.statement` | Truncated SQL — disabled with `includeSql: false` |
| `db.params.count` | Parameter count (not values) |
| `db.row_count` | Set on the result hook |
| `db.elapsed_ms` | Set on the result / error hook |
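
The "wrapper" escape hatch mentioned above: to stamp extra attributes on every span, hand the plugin an object that delegates to a real tracer. This sketch assumes the plugin only ever calls `startSpan`; verify that before relying on it.

```ts
import { trace, type Tracer } from '@opentelemetry/api';

const base = trace.getTracer('zanith');

// Cast is deliberate: only startSpan is implemented, on the assumption
// that it is the only Tracer method the plugin touches.
const taggedTracer = {
  startSpan: (...args: Parameters<Tracer['startSpan']>) => {
    const span = base.startSpan(...args);
    span.setAttribute('service.team', 'payments'); // example attribute
    return span;
  },
} as Tracer;

plugins: [openTelemetryPlugin({ tracer: taggedTracer })],
```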

## Compose them

Loggers compose freely; register every one you want. The engine runs them in registration order, and a slow plugin adds its own latency to every query, so prefer fire-and-forget sinks (a buffered sketch follows the example).

```ts
plugins: [
  // Local console output, only when developing.
  ...(process.env.NODE_ENV === 'development' ? [consoleLogger()] : []),
  // Always shipped to the log aggregator.
  structuredLogger({ sink: (r) => log.info(r) }),
  // Always traced.
  openTelemetryPlugin({ tracer: trace.getTracer('zanith') }),
  // Slow path goes to Sentry too.
  slowQueryLogger({ thresholdMs: 1000, onSlow: shipToSentry }),
],
```
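
If a sink has to do network I/O, buffer and flush off the hot path instead of awaiting per query; `shipBatch` below is a hypothetical transport of your own:

```ts
// Records pile up in memory; the query path only pays for an array push.
const buffer: unknown[] = [];
const bufferedSink = (record: unknown) => {
  buffer.push(record);
};

// Flush every 5s; unref() keeps the timer from holding the process open.
setInterval(() => {
  if (buffer.length === 0) return;
  void shipBatch(buffer.splice(0, buffer.length)); // hypothetical async transport
}, 5_000).unref();

plugins: [structuredLogger({ sink: bufferedSink })],
```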