The Guide
Clarity without the clutter. Ergonomic unified logs, spans, and debugs for modern TypeScript.
Your first app uses console.log. That's enough for a script, a prototype, a small server. Then your app grows. You need structured logs for production, the debug package for conditional verbose output, a tracing library for timings, maybe OpenTelemetry for distributed traces — and suddenly you're juggling three tools with three APIs, three configuration schemes, and three output formats.
Loggily is one library where structured logging, debug-style conditional output, and timed spans all share the same namespace tree, the same output pipeline, and the same ?. pattern for near-zero cost disabled logging. You adopt each capability when you need it. Nothing is wasted, nothing conflicts, nothing clutters your code.
Level 1: Just Log
You need structured logging with levels. One import, one function.
import { createLogger } from "loggily"
const log = createLogger("myapp")
log.info?.("server started", { port: 3000 })
log.warn?.("disk space low", { free: "2GB" })
log.error?.(new Error("connection failed"))
Notice the ?. -- if a log level is disabled, the entire call is skipped, including argument evaluation. For trivial arguments the overhead difference is negligible, but for real-world logging with string interpolation and serialization, this is typically 10x+ faster because it skips the work entirely.
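The skipping behavior falls out of standard optional-call semantics, so it can be sketched without the library at all (a minimal illustration, not Loggily's internals -- the assumption is only that a disabled level is `undefined`):

```typescript
// A disabled level is simply `undefined`, so `?.()` short-circuits
// before the arguments are ever evaluated.
type LogFn = ((msg: string, data?: unknown) => void) | undefined

const info: LogFn = (msg, data) => console.log("INFO", msg, data ?? "")
const debug: LogFn = undefined // disabled level

let calls = 0
const expensive = () => {
  calls++ // stands in for costly interpolation/serialization
  return { big: "payload" }
}

info?.("started", expensive())  // argument IS evaluated
debug?.("details", expensive()) // call AND argument are skipped

console.log(calls) // 1 -- the disabled call did no work
```

This is why `log.debug?.(...)` costs almost nothing when debug output is off: the JavaScript engine never reaches the argument list.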
Colorized in your terminal, with source locations:
14:32:15 INFO myapp server started {port: 3000}
14:32:15 WARN myapp disk space low {free: "2GB"}
14:32:15 ERROR myapp connection failed
Error: connection failed
at server.ts:42
Set LOG_FORMAT=json or NODE_ENV=production and the same calls produce structured JSON -- same data, machine-parseable, ready for Datadog, Elastic, or whatever your ops team uses:
{ "time": "2024-01-15T14:32:15.123Z", "level": "info", "name": "myapp", "msg": "server started", "port": 3000 }
You never choose between human-readable and machine-parseable. You get both from the same call.
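The idea of one emit path with two formats can be sketched in a few lines (a standalone illustration assuming the LOG_FORMAT/NODE_ENV switch described above; Loggily's actual formatter will differ in detail):

```typescript
// Choose the format once from the environment, then every call goes
// through the same function -- human-readable in dev, JSON in production.
const jsonMode =
  process.env.LOG_FORMAT === "json" || process.env.NODE_ENV === "production"

function format(
  level: string,
  name: string,
  msg: string,
  data: Record<string, unknown>,
): string {
  if (jsonMode) {
    return JSON.stringify({
      time: new Date().toISOString(),
      level,
      name,
      msg,
      ...data,
    })
  }
  const time = new Date().toTimeString().slice(0, 8) // HH:MM:SS
  return `${time} ${level.toUpperCase()} ${name} ${msg} ${JSON.stringify(data)}`
}

console.log(format("info", "myapp", "server started", { port: 3000 }))
```

The call sites never change; only the serialization at the very end does.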
The wall: Your app has 20 modules. You need verbose output from the database layer but not from the HTTP layer. LOG_LEVEL=debug turns on everything.
Level 2: Namespaces
Loggers form a tree. Child loggers inherit their parent's namespace and props:
const log = createLogger("myapp")
const db = log.logger("db") // myapp:db
const http = log.logger("http") // myapp:http
const query = db.logger("query") // myapp:db:query
db.debug?.("connecting") // myapp:db
query.debug?.("SELECT * FROM...") // myapp:db:query
Now you can target output. DEBUG auto-lowers the log level to debug and restricts all output to matching namespaces:
DEBUG=myapp:db bun run app # Only myapp:db namespace (all levels)
DEBUG='myapp:*,-myapp:http' bun run app # Everything except HTTP
LOG_LEVEL=debug bun run app # Debug level globally, all namespaces
DEBUG is a namespace visibility filter inspired by the debug package -- same patterns, same muscle memory -- but as part of a full logging system with levels, structured data, and JSON output. Use LOG_LEVEL when you want to change the verbosity floor without restricting namespaces.
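Since the text says the patterns follow the debug package's conventions (comma-separated globs, `-` for exclusion), the matching can be sketched like this -- a hypothetical helper, not Loggily's internal matcher:

```typescript
// Compile a debug-style pattern into a namespace predicate.
// `*` matches any run of characters; a leading `-` excludes.
function makeMatcher(pattern: string): (ns: string) => boolean {
  const allow: RegExp[] = []
  const deny: RegExp[] = []
  for (const raw of pattern.split(",")) {
    const part = raw.trim()
    if (!part) continue
    const negated = part.startsWith("-")
    const body = (negated ? part.slice(1) : part)
      .replace(/[.+?^${}()|[\]\\]/g, "\\$&") // escape regex metachars
      .replace(/\*/g, ".*") // translate the glob
    ;(negated ? deny : allow).push(new RegExp(`^${body}$`))
  }
  return (ns) =>
    !deny.some((re) => re.test(ns)) && allow.some((re) => re.test(ns))
}

const enabled = makeMatcher("myapp:*,-myapp:http")
console.log(enabled("myapp:db"))   // true
console.log(enabled("myapp:http")) // false
```

Exclusions win over inclusions, which is what makes `myapp:*,-myapp:http` read naturally as "everything except HTTP".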
The wall: A request takes 3 seconds. You know it's slow, but you don't know which part.
Level 3: Spans
A span is a logger with a timer. It measures how long a block takes, and every log inside it inherits its context:
{
using span = log.span("import", { file: "data.csv" })
span.info?.("parsing rows")
span.spanData.count = 42
}
// -> SPAN myapp:import (1234ms) {count: 42, file: "data.csv"}
The using keyword (TC39 Explicit Resource Management) automatically calls span[Symbol.dispose]() at block exit. The span measures its duration and reports it along with any attributes you set. No try/finally, no manual timing, no separate tracing SDK.
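If `using` is new to you, here is roughly what it desugars to, sketched with a minimal timed object rather than Loggily's span type (Symbol.dispose may be absent on older runtimes, so this sketch falls back to a local symbol):

```typescript
// A hand-rolled stand-in for a span: start a timer, report on dispose.
const DISPOSE: symbol = (Symbol as any).dispose ?? Symbol("Symbol.dispose")

function startSpan(name: string) {
  const start = Date.now()
  return {
    name,
    durationMs: -1,
    [DISPOSE]() {
      this.durationMs = Date.now() - start
      console.log(`SPAN ${this.name} (${this.durationMs}ms)`)
    },
  }
}

// `using span = startSpan("import")` is roughly equivalent to:
const span = startSpan("import")
try {
  // ...timed work...
} finally {
  ;(span as any)[DISPOSE]() // always runs, even if the block throws
}
```

`using` collapses that try/finally into one declaration, which is the whole ergonomic win: the timing can never be forgotten on an early return or a thrown error.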
Spans nest. Each span gets a unique ID and shares its parent's trace ID, so you can correlate events across a request:
{
using req = log.span("request", { path: "/api/users" })
{
using db = req.span("db-query")
// db.spanData.traceId === req.spanData.traceId
// db.spanData.parentId === req.spanData.id
}
}
Control span output independently from logs:
TRACE=1 bun run app # All spans
TRACE=myapp:db bun run app # Only database spans
TRACE=myapp:db,myapp:cache bun run app # Database + cache spans
The wall: Now you need logs sent elsewhere — a file, Datadog, your tracing backend — not just the console.
Level 4: Writers
The writer system is a simple function interface. Write once, send anywhere:
import { addWriter, createFileWriter } from "loggily"
// File writer with buffered auto-flush
const file = createFileWriter("/var/log/app.log")
addWriter((formatted, level) => file.write(formatted))
// Send to an HTTP endpoint
addWriter((formatted, level) => {
if (level === "error") fetch("/api/alerts", { method: "POST", body: formatted })
})
// Send spans to your tracing backend
addWriter((formatted, level) => {
if (level === "span") sendToJaeger(JSON.parse(formatted))
})
You can attach multiple writers — each one receives every log and span. The logger doesn't care where the output goes; it just produces structured data. You decide where to send it.
Output modes let you control the default output:
import { setOutputMode } from "loggily"
setOutputMode("writers-only") // Only writers, no console
setOutputMode("stderr") // Bypass Ink/React console capture
setOutputMode("console") // Default: console.log/warn/error
The wall: You spawn worker threads for heavy processing, but their logs vanish from the main output.
Level 5: Workers
Worker threads get their own loggers that forward to the main thread:
// worker.ts
import { createWorkerLogger } from "loggily/worker"
const log = createWorkerLogger(postMessage, "myapp:worker")
log.info?.("processing chunk", { size: 1000 })
{
using span = log.span("process")
// ...
}

// main.ts
import { createWorkerLogHandler } from "loggily/worker"
const handler = createWorkerLogHandler()
worker.on("message", (msg) => handler(msg))
Logs and spans from workers appear in the same output stream with the same formatting. No interleaving, no lost messages.
The wall: You need child loggers that carry request context through async call chains without passing the logger everywhere.
Level 6: Context
Child loggers carry structured context through async call chains. Create one at the request boundary, and every downstream log inherits its fields:
const reqLog = log.child({ requestId: "abc-123", userId: 42 })
reqLog.info?.("handling request")
// -> 14:32:15 INFO myapp handling request {requestId: "abc-123", userId: 42}
// Pass reqLog to downstream functions -- context propagates
await handleAuth(reqLog)
await handleQuery(reqLog)
Every log from reqLog and its descendants carries requestId and userId without manual field-passing. In JSON mode, these become top-level fields — perfect for filtering in your log aggregator.
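The merging behavior can be sketched in a few lines (an assumption-laden illustration, not Loggily's implementation: it assumes props shallow-merge parent-to-child, with per-call fields layered on top, matching the output shown above):

```typescript
// Context merging: each child copies its parent's props and adds its own.
type Props = Record<string, unknown>

interface MiniLogger {
  props: Props
  child(extra: Props): MiniLogger
  info(msg: string, fields?: Props): void
}

function makeLogger(props: Props = {}): MiniLogger {
  return {
    props,
    child(extra) {
      return makeLogger({ ...props, ...extra }) // child wins on collisions
    },
    info(msg, fields = {}) {
      console.log("INFO", msg, { ...props, ...fields })
    },
  }
}

const root = makeLogger({ service: "myapp" })
const reqLog2 = root.child({ requestId: "abc-123", userId: 42 })
const authLog = reqLog2.child({ step: "auth" })
authLog.info("token verified")
// carries service, requestId, userId, and step without passing them explicitly
```

Because the merge happens once at `child()` time, logging itself stays a cheap spread-and-print; nothing walks the parent chain per call.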
What You Have
Normally, you'd pull in one library for logs, another for debug prints, a tracing SDK for spans — and struggle to tie them together. With Loggily, these aren't separate concerns. They're modes of the same tool.
At this point you've replaced that patchwork with a single library:
- Structured logging with levels, namespaces, colorized dev output, JSON production output, and source locations
- Debug output with DEBUG=namespace:* filtering — the debug package's power, integrated
- Span timing with the using keyword, nested traces, and independent TRACE= control
- Flexible output via writers — file, HTTP, tracing backends, anything
- Worker thread support with automatic forwarding
- Context propagation via child loggers
All sharing one namespace tree. All respecting the same log levels. All using the same ?. pattern — disabled calls are skipped entirely, including argument evaluation. There when you need it, invisible when you don't.
~3KB. Zero dependencies. Modern TypeScript.