Benchmarks

Comparison of Loggily against pino, winston, and debug.

Test environment: Bun 1.3.9, macOS arm64 (Apple Silicon), 10M iterations per test.

Methodology: All "enabled" benchmarks write to in-process noop sinks (no I/O syscalls) for a fair apples-to-apples comparison of formatting and serialization throughput:

  • Loggily: `addWriter(noop)` + `setSuppressConsole(true)` + `setOutputMode("writers-only")`
  • pino: `pino(opts, noopWritableStream)`
  • winston: Stream transport with a noop `Writable`
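A noop sink of the kind described above can be sketched as a `Writable` whose write callback acknowledges immediately. This is an illustrative sketch of the setup, not the benchmark harness's actual code; `makeNoopStream` is a name assumed here:

```typescript
import { Writable } from "node:stream";

// A Writable that discards every chunk: the logger still does all of
// its formatting and serialization work, but no I/O syscall is issued.
function makeNoopStream(): Writable {
  return new Writable({
    write(_chunk, _encoding, callback) {
      callback(); // acknowledge immediately; write nothing
    },
  });
}

// Assumed usage with pino: pino({ level: "info" }, makeNoopStream())
```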

Disabled Debug — Cheap Argument

When debug logging is disabled and arguments are cheap (string literals):

| Library         | ops/s | ns/op | Relative |
| --------------- | ----- | ----- | -------- |
| noop (baseline) | 3B    | 0.4   | 1.0x     |
| pino            | 2B    | 0.5   | 1.3x     |
| Loggily         | 383M  | 2.6   | 6.5x     |
| debug           | 43M   | 23.4  | 59x      |
| winston         | 3M    | 391.2 | 978x     |

Pino wins here — its level check is a simple integer comparison without Proxy overhead. Loggily's Proxy-based `?.` pattern adds ~2ns of overhead for cheap arguments.
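To make the source of that overhead concrete, here is a minimal Proxy-gated logger. This is an assumed sketch, not Loggily's actual source: every property access runs a `get` trap, which is slightly more work than pino's single integer comparison.

```typescript
const LEVELS = { debug: 10, info: 20, warn: 30 } as const;
type Level = keyof typeof LEVELS;
type LogFn = (msg: string) => void;

// Disabled levels resolve to undefined from the trap, so callers can
// write `log.debug?.(...)` and have the whole call short-circuit.
function proxyLogger(min: Level): Partial<Record<Level, LogFn>> {
  return new Proxy({} as Partial<Record<Level, LogFn>>, {
    get(_target, prop) {
      if (typeof prop !== "string" || !(prop in LEVELS)) return undefined;
      if (LEVELS[prop as Level] < LEVELS[min]) return undefined; // disabled level
      return (msg: string) => void msg; // stand-in for a real sink write
    },
  });
}
```

The trap runs on every `log.debug` access, even when the result is `undefined` — that per-access cost is the ~2ns gap in the cheap-argument table above.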

Disabled Debug — Expensive Argument (the real story)

When debug logging is disabled but arguments require evaluation (e.g. `JSON.stringify`):

| Library         | ops/s | ns/op | Relative |
| --------------- | ----- | ----- | -------- |
| noop (baseline) | 414M  | 2.4   | 1.0x     |
| Loggily         | 248M  | 4.0   | 1.7x     |
| pino            | 8M    | 133.1 | 55x      |
| debug           | 7M    | 153.3 | 64x      |
| winston         | 1M    | 774.6 | 323x     |

Loggily is 31x faster than pino for disabled calls with expensive arguments. The `?.` pattern skips argument evaluation entirely: ``log.debug?.(`state: ${expensiveArg()}`)`` never calls `expensiveArg()` when debug is disabled.

This is the key insight: real-world logging often involves string interpolation, `JSON.stringify`, or computed values. The `?.` pattern eliminates this cost entirely.
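The short-circuit is a language guarantee: in an optional call `f?.(args)`, the arguments are never evaluated when `f` is nullish. A self-contained demonstration (the logger shape here is assumed for illustration):

```typescript
type LogFn = (msg: string) => void;

// debug is undefined when the level is disabled (shape assumed).
const log: { debug?: LogFn; info: LogFn } = {
  info: (msg) => void msg,
};

let evaluated = false;
const expensiveArg = () => {
  evaluated = true;
  return JSON.stringify({ big: "payload" });
};

// Because log.debug is undefined, the optional call short-circuits and
// the template literal (including expensiveArg()) is never evaluated.
log.debug?.(`state: ${expensiveArg()}`);
```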

Enabled Info — Cheap Argument

When info logging is enabled, all loggers writing to noop sinks (fair comparison):

| Library | ops/s | ns/op | Relative |
| ------- | ----- | ----- | -------- |
| Loggily | 3M    | 371.4 | 1.0x     |
| pino    | 2M    | 471.7 | 1.3x     |
| winston | 1M    | 748.3 | 2.0x     |

With a fair noop-sink comparison, Loggily is the fastest for enabled string logging -- ~1.3x faster than pino and ~2x faster than winston.

Enabled Info — Structured Data

Logging with structured data (`{ key: "value", count: 42 }`), all to noop sinks:

| Library | ops/s | ns/op   | Relative |
| ------- | ----- | ------- | -------- |
| Loggily | 1M    | 668.9   | 1.0x     |
| pino    | 1M    | 738.2   | 1.1x     |
| winston | 587K  | 1,703.6 | 2.5x     |

Loggily and pino are neck-and-neck for structured data. Both are roughly 2.5x faster than winston.

Enabled Warn — Error Object

Logging with an Error object, all to noop sinks:

| Library | ops/s | ns/op   | Relative |
| ------- | ----- | ------- | -------- |
| Loggily | 1M    | 990.9   | 1.0x     |
| winston | 839K  | 1,191.4 | 1.2x     |
| pino    | 541K  | 1,848.4 | 1.9x     |

Loggily handles Error objects fastest, nearly 2x faster than pino. Pino's Error serialization is heavier due to its structured JSON pipeline.
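One reason costs diverge here: `message` and `stack` are non-enumerable on `Error`, so a structured logger has to copy them out explicitly before JSON encoding, and the (usually long, multiline) stack string tends to dominate. An illustrative sketch, not any of these libraries' actual serializers:

```typescript
// A plain JSON.stringify(new Error("x")) yields "{}" because Error's
// own properties are non-enumerable; a serializer lifts them by hand.
function serializeError(err: Error): string {
  return JSON.stringify({
    name: err.name,
    message: err.message,
    stack: err.stack, // typically the dominant serialization cost
  });
}
```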

Span Creation

Span create + dispose (no output):

| Library | ops/s | ns/op |
| ------- | ----- | ----- |
| Loggily | 2M    | 544.1 |

~544ns per span lifecycle including ID generation, timing, and disposal. No competitor offers built-in span support for comparison.
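What that ~544ns covers can be sketched with an illustrative span object; Loggily's real span API is not shown in this document, so the names below are assumptions:

```typescript
// Illustrative span: ID generation + start timestamp on create,
// duration computed on dispose. Not Loggily's actual implementation.
function createSpan(name: string) {
  const id = Math.random().toString(36).slice(2, 10); // cheap ID generation
  const start = performance.now(); // timing
  return {
    id,
    name,
    dispose(): number {
      return performance.now() - start; // duration in ms
    },
  };
}
```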

Key Takeaways

  1. Disabled + expensive args: Loggily's `?.` pattern is 31x faster than pino, 194x faster than winston. This is the main differentiator -- the big win is specifically for disabled logging with expensive argument construction (string interpolation, JSON serialization, computed values), not universal logger throughput.
  2. Disabled + cheap args: Pino is faster due to no Proxy overhead. Both are sub-microsecond -- the difference is negligible in practice.
  3. Enabled + cheap args: Loggily is ~1.3x faster than pino when both write to the same kind of noop sink.
  4. Enabled + structured data: Loggily and pino are comparable; both are roughly 2.5x faster than winston.
  5. Enabled + Error objects: Loggily is fastest, ~1.9x faster than pino.
  6. The `?.` advantage grows with argument cost: the more expensive your log arguments, the bigger the win.

Note: Pino is optimized for high-throughput enabled JSON logging with transport pipelines. Loggily's biggest advantage is skipping work when logs are disabled. For max-throughput production logging with custom transports, Pino may be a better fit.

Reproducing

```bash
# Install benchmark dependencies
bun add -d pino winston debug @types/debug

# Run benchmarks
bun vendor/loggily/benchmarks/overhead.ts
```

Released under the MIT License.