Core Concepts

Performance

evlog adds ~7µs per request. Faster than pino, consola, and winston in most scenarios while emitting richer, more useful events.

That ~7µs works out to 0.007ms, orders of magnitude below any HTTP framework or database call. Performance is tracked on every pull request via CodSpeed.

evlog vs alternatives

All benchmarks run with JSON output to no-op destinations. pino writes to /dev/null (sync), winston writes to a no-op stream, consola uses a no-op reporter, evlog uses silent mode.

Results

| Scenario | evlog | pino | consola | winston |
| --- | --- | --- | --- | --- |
| Simple string log | 1.02M ops/s | 472.8K | 689.7K | 373.3K |
| Structured (5 fields) | 818.5K ops/s | 283.4K | 476.5K | 131.9K |
| Deep nested log | 854.9K ops/s | 171.3K | 287.5K | 62.2K |
| Burst (100 logs) | 9.0K ops/s | 4.6K | 8.9K | 2.2K |
| Logger creation | 7.60M ops/s | 2.41M | 121.5K | 1.76M |
| Wide event lifecycle | 86.2K ops/s | 88.4K | 34.9K | n/a |

evlog wins 5 out of 6 head-to-head comparisons. The only scenario where pino edges ahead is the wide event lifecycle — but the difference is within noise (1.03x), and evlog emits 1 correlated event where pino emits 4 separate log lines.

Why this matters: in production, evlog sends 75% less data to your log drain while giving you a single, queryable event per request instead of 4 disconnected lines to correlate.

What is the "wide event lifecycle"?

This benchmark simulates a real API request:

```ts
const log = createLogger({ method: 'POST', path: '/api/checkout', requestId: 'req_abc' })
log.set({ user: { id: 'usr_123', plan: 'pro' } })
log.set({ cart: { items: 3, total: 9999 } })
log.set({ payment: { method: 'card', last4: '4242' } })
log.emit({ status: 200 })
```

Same CPU cost, but evlog gives you everything in one place.

Why is evlog faster?

The numbers above aren't magic — they come from deliberate architectural choices:

In-place mutations, not copies. log.set() writes directly into the context object via a recursive mergeInto function. Other loggers clone objects on every call (object spread, Object.assign). evlog never allocates intermediate objects during context accumulation.
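A minimal sketch of what such a recursive in-place merge can look like (`mergeInto` is the internal name the docs mention; this body is an illustration, not evlog's actual source):

```ts
// Illustrative recursive in-place merge. Nested plain objects are merged
// field by field into the existing target; everything else (primitives,
// arrays) simply overwrites the previous value.
function mergeInto(target: Record<string, unknown>, source: Record<string, unknown>): void {
  for (const key of Object.keys(source)) {
    const next = source[key]
    const prev = target[key]
    if (
      next !== null && typeof next === 'object' && !Array.isArray(next) &&
      prev !== null && typeof prev === 'object' && !Array.isArray(prev)
    ) {
      // Recurse instead of replacing, so earlier context under this key survives
      mergeInto(prev as Record<string, unknown>, next as Record<string, unknown>)
    } else {
      // The target is mutated directly; no intermediate object is allocated
      target[key] = next
    }
  }
}

const ctx: Record<string, unknown> = { user: { id: 'usr_123' } }
mergeInto(ctx, { user: { plan: 'pro' }, status: 200 })
// ctx is now { user: { id: 'usr_123', plan: 'pro' }, status: 200 }
```

Note that `ctx` itself is never replaced, so a long chain of `set()` calls produces zero throwaway allocations.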

No serialization until drain. Context stays as plain JavaScript objects throughout the request lifecycle. JSON.stringify runs exactly once, at emit time. Traditional loggers serialize on every .info() call — that's 4x serialization for 4 log lines.
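The difference can be sketched in a few lines (illustrative shapes, not evlog's or any other logger's real API):

```ts
// A per-line logger serializes on every call:
const lines: string[] = []
function infoLine(fields: Record<string, unknown>): void {
  lines.push(JSON.stringify(fields)) // one stringify per log line
}

// A wide-event logger accumulates plain objects and serializes once:
const context: Record<string, unknown> = {}
function set(fields: Record<string, unknown>): void {
  Object.assign(context, fields) // no serialization here
}
function emit(final: Record<string, unknown>): string {
  return JSON.stringify({ ...context, ...final }) // exactly one stringify per request
}

set({ user: 'usr_123' })
set({ cart: 3 })
const event = emit({ status: 200 })
// event: '{"user":"usr_123","cart":3,"status":200}'
```

Four `infoLine()` calls mean four `JSON.stringify` passes; the wide-event path pays for serialization exactly once no matter how much context accumulates.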

Lazy allocation. Timestamps, sampling context, and override objects are only created when actually needed. If tail sampling is disabled (the common case), its context object is never allocated. The Date instance used for ISO timestamps is reused across calls.
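The reused-Date trick can be sketched like this (an illustration of the technique, not evlog's source):

```ts
// One shared Date instance is mutated on each call instead of allocating
// a fresh Date for every timestamp
const sharedDate = new Date()

function isoNow(): string {
  sharedDate.setTime(Date.now()) // mutate in place, no allocation
  return sharedDate.toISOString() // e.g. '2025-01-01T12:00:00.000Z'
}
```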

One event, not N lines. For a typical request, pino emits 4+ JSON lines that all need serializing, transporting, and indexing. evlog emits one. That's 75% less work for your log drain, fewer bytes on the wire, and one row to query instead of four.

RegExp caching. Glob patterns (used in sampling and route matching) are compiled once and cached. Repeated evaluations hit the cache instead of recompiling.
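A sketch of the pattern-cache idea (`globToRegExp` here is a hypothetical helper for illustration, not evlog's API):

```ts
// Compiled patterns are kept in a Map keyed by the glob source string
const patternCache = new Map<string, RegExp>()

function globToRegExp(glob: string): RegExp {
  // Escape regex metacharacters, then turn '*' into '.*'
  const escaped = glob.replace(/[.+?^${}()|[\]\\]/g, '\\$&').replace(/\*/g, '.*')
  return new RegExp(`^${escaped}$`)
}

function matchGlob(glob: string, value: string): boolean {
  let re = patternCache.get(glob)
  if (!re) {
    re = globToRegExp(glob) // compiled once per distinct pattern
    patternCache.set(glob, re)
  }
  return re.test(value) // every later evaluation hits the cache
}

matchGlob('/api/*', '/api/checkout') // → true
```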

Real-world overhead

For a typical API request:

| Component | Cost |
| --- | --- |
| Logger creation | 134ns |
| 3× `set()` calls | 361ns |
| `emit()` | 950ns |
| Sampling | 69ns |
| Enricher pipeline | 5.20µs |
| Total | ~6.7µs |

For context, a database query takes 1-50ms and an HTTP call takes 10-500ms; at that scale, evlog's overhead is invisible.
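The quoted total is simply the sum of the components above:

```ts
// Component costs from the table, all in nanoseconds
const creation = 134
const sets = 361
const emitCost = 950
const sampling = 69
const enrichers = 5200

const totalNs = creation + sets + emitCost + sampling + enrichers
// totalNs === 6714, i.e. ~6.7µs
```

The enricher pipeline dominates; a setup without enrichers stays around 1.5µs per request.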

Bundle size

Every entry point is tree-shakeable. You only pay for what you import.

| Entry | Gzip |
| --- | --- |
| logger | 3.70 kB |
| utils | 1.41 kB |
| error | 1.21 kB |
| enrichers | 1.92 kB |
| pipeline | 1.35 kB |
| browser | 1.21 kB |

A typical Nuxt setup loads logger + utils — about 5.1 kB gzip. Bundle size is tracked on every PR and compared against the main baseline.

Detailed benchmarks

Logger creation

| Operation | ops/sec | Mean |
| --- | --- | --- |
| `createLogger()` (no context) | 7.28M | 137ns |
| `createLogger()` (shallow context) | 7.47M | 134ns |
| `createLogger()` (nested context) | 6.93M | 144ns |
| `createRequestLogger()` | 7.44M | 134ns |

Context accumulation (log.set())

| Operation | ops/sec | Mean |
| --- | --- | --- |
| Shallow merge (3 fields) | 3.56M | 281ns |
| Shallow merge (10 fields) | 2.10M | 476ns |
| Deep nested merge | 2.91M | 343ns |
| 4 sequential calls | 2.77M | 361ns |

Event emission (log.emit())

| Operation | ops/sec | Mean |
| --- | --- | --- |
| Emit minimal event | 1.05M | 950ns |
| Emit with context | 806.8K | 1.24µs |
| Full lifecycle (create + 3 sets + emit) | 773.2K | 1.29µs |
| Emit with error | 24.1K | 41.47µs |

Emit with error is slower because `Error.captureStackTrace()` is an expensive V8 operation (~40µs). This cost only applies when errors are thrown.
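What that V8 call does can be seen in isolation (a micro-sketch, not evlog's error path; the ~40µs figure is machine-dependent):

```ts
// Error.captureStackTrace walks the current call frames and attaches a
// formatted stack string to the target object. Collecting those frames is
// what makes the error path so much slower than a plain emit.
const holder: { stack?: string } = {}
Error.captureStackTrace(holder)
// holder.stack is now a formatted stack trace string
```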

Payload scaling

| Payload | ops/sec | Mean |
| --- | --- | --- |
| Small (2 fields) | 787.8K | 1.27µs |
| Medium (50 fields) | 265.2K | 3.77µs |
| Large (200 nested fields) | 48.5K | 20.64µs |

Sampling

| Operation | ops/sec | Mean |
| --- | --- | --- |
| Tail sampling (`shouldKeep`) | 14.5M | 69ns |
| Full emit with head + tail | 1.01M | 988ns |

Enrichers

| Enricher | ops/sec | Mean |
| --- | --- | --- |
| User Agent (Chrome) | 922.1K | 1.08µs |
| Geo (Vercel) | 1.88M | 531ns |
| Request Size | 8.46M | 118ns |
| Trace Context | 3.12M | 321ns |
| All combined | 192.4K | 5.20µs |

Error handling

| Operation | ops/sec | Mean |
| --- | --- | --- |
| `createError()` | 109.5K | 9.14µs |
| `parseError()` | 14.71M | 68ns |
| Round-trip (create + parse) | 109.1K | 9.17µs |

Methodology & trust

Can you trust these numbers?

Every benchmark on this page is open source and reproducible. The benchmark files live in packages/evlog/bench/ — you can read the exact code, run it on your machine, and verify the results.

All libraries are tested under the same conditions:

  • Same output mode: JSON to a no-op destination (no disk or network I/O measured)
  • Same warmup: each benchmark runs for 500ms after JIT stabilization
  • Same tooling: Vitest bench powered by tinybench
  • Same machine: when comparing libraries, all benchmarks run in the same process on the same hardware

CI regression tracking

Performance regressions are tracked on every pull request via two systems:

  • CodSpeed runs all benchmarks using CPU instruction counting (not wall-clock timing). This eliminates noise from shared CI runners and produces deterministic, reproducible results. Regressions are flagged directly on the PR.
  • Bundle size comparison measures all entry points against the main baseline and posts a size delta report as a PR comment.

Run it yourself

```sh
cd packages/evlog

bun run bench                          # all benchmarks
bunx vitest bench bench/comparison/    # vs alternatives only
bun bench/scripts/size.ts              # bundle size
```