Node.js Logging Best Practices with Winston
A production-focused guide to Node.js logging with Winston, covering structured JSON logging, Express middleware, log rotation, sensitive data redaction, correlation IDs, and log aggregation patterns.
Overview
Logging is one of those things that separates production-grade Node.js applications from side projects. When your API returns a 500 at 2 AM, structured logs with request context, correlation IDs, and proper severity levels are the difference between a fifteen-minute fix and a four-hour guessing game. This article covers everything you need to build a professional logging system with Winston -- from initial setup through log rotation, sensitive data redaction, and shipping logs to aggregation services.
Prerequisites
- Node.js 18+ installed
- Working knowledge of Express.js routing and middleware
- Basic understanding of JSON and structured data
- Familiarity with npm package management
Why console.log Is Not Enough
Every Node.js developer starts with console.log. It works. It prints things. And it is completely inadequate for production.
Here is what console.log gives you:
User logged in
Order created
Something went wrong
Database error
Here is what you actually need at 2 AM when your monitoring alert fires:
{
"level": "error",
"message": "Database query failed",
"timestamp": "2026-02-08T03:14:22.847Z",
"service": "order-api",
"requestId": "req-a7f3b2c1",
"userId": "usr_4821",
"method": "POST",
"url": "/api/orders",
"duration": 5023,
"error": "ECONNREFUSED",
"query": "INSERT INTO orders (user_id, total) VALUES ($1, $2)",
"pid": 28491,
"hostname": "prod-api-3"
}
The first example tells you nothing. The second tells you exactly what happened, to whom, on which server, how long it took, and what query failed. You can search for it, filter by request ID, correlate it with other services, and alert on patterns.
The problems with console.log in production:
- No log levels. You cannot distinguish between debug noise and critical errors. You cannot filter production logs to show only errors without grepping through thousands of info messages.
- No structure. Free-text logs are nearly impossible to parse, search, or aggregate at scale. Try finding all errors from a specific user across 50 servers with console.log output.
- No timestamps. console.log does not include timestamps by default. You are relying on your process manager or container runtime to add them, and their format may not match what your log aggregation tool expects.
- No transports. Everything goes to stdout. You cannot write errors to a file, send critical alerts to Slack, or ship structured data to Elasticsearch without building it yourself.
- No metadata. There is no built-in way to attach request IDs, user context, or service names to log entries.
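Without a library, fixing even the first two problems means hand-rolling level filtering and JSON serialization yourself. Here is a minimal sketch of that boilerplate (the level names and default threshold are illustrative, not Winston's):

```javascript
// Hand-rolled leveled logging: numeric priorities, an environment-driven
// threshold, and manual JSON serialization on every call.
var LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };
var threshold = LEVELS[process.env.LOG_LEVEL] !== undefined
  ? LEVELS[process.env.LOG_LEVEL]
  : LEVELS.info;

function log(level, message, meta) {
  if (LEVELS[level] > threshold) return; // drop entries below the threshold
  console.log(JSON.stringify(Object.assign({
    level: level,
    message: message,
    timestamp: new Date().toISOString()
  }, meta)));
}

log('info', 'Server started', { port: 3000 }); // emitted at the default threshold
log('debug', 'Cache lookup', { hit: true });   // dropped at the default threshold
```

Winston gives you all of this -- plus transports, formats, and metadata -- out of the box.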
Choosing a Logging Library
The three serious contenders in the Node.js ecosystem are Winston, Pino, and Bunyan. I have used all three in production.
Winston is the most popular and most flexible. It has a rich transport ecosystem, supports custom formats, and handles almost every logging scenario you will encounter. It is not the fastest, but it is fast enough for the vast majority of applications.
Pino is the performance champion. It is significantly faster than Winston because it logs to stdout as newline-delimited JSON and defers formatting to a separate process. If you are building a high-throughput service that logs thousands of entries per second, Pino is worth considering.
Bunyan was ahead of its time with structured JSON logging but has seen minimal maintenance since 2019. I would not start a new project with it.
My recommendation: use Winston. Unless you are logging at extreme volumes (tens of thousands of requests per second), the performance difference is irrelevant. Winston's transport ecosystem, format pipeline, and community support make it the most practical choice. You can always switch to Pino later if benchmarks show logging is a bottleneck -- but in over a decade of building Node.js services, I have never seen that happen.
npm install winston
Winston Setup and Configuration
Here is a minimal Winston setup that already beats console.log:
var winston = require('winston');
var logger = winston.createLogger({
level: 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
),
defaultMeta: { service: 'my-api' },
transports: [
new winston.transports.Console()
]
});
logger.info('Server started', { port: 3000 });
logger.error('Database connection failed', { host: 'db.example.com', error: 'ECONNREFUSED' });
Output:
{"level":"info","message":"Server started","port":3000,"service":"my-api","timestamp":"2026-02-08T14:30:01.234Z"}
{"level":"error","message":"Database connection failed","host":"db.example.com","error":"ECONNREFUSED","service":"my-api","timestamp":"2026-02-08T14:30:01.456Z"}
Every log entry is a valid JSON object with a timestamp, level, service name, and your custom metadata. This is already searchable, parseable, and aggregatable.
Log Levels and When to Use Each
Winston supports the following log levels by default, in order of severity (most severe first):
| Level | Priority | When to Use |
|---|---|---|
| error | 0 | Something broke. A request failed, a database query errored, an external service is down. Always include the error object and stack trace. |
| warn | 1 | Something unexpected happened but the system recovered. Rate limit approaching, deprecated API used, fallback to cache, retry succeeded. |
| info | 2 | Normal operational events. Server started, request completed, user logged in, order created. The heartbeat of your application. |
| http | 3 | HTTP request/response details. Method, URL, status code, duration. Useful for request logging middleware. |
| verbose | 4 | Detailed operational information. More than info but less than debug. Configuration loaded, connection pool stats, cache hit ratios. |
| debug | 5 | Developer-level detail for troubleshooting. Variable values, function entry/exit, intermediate computation results. Never enable in production unless actively debugging. |
| silly | 6 | Everything. Extremely verbose. Only useful during development of the logging system itself. |
The rule I follow: set the log level to info in production, debug in development, and http in staging. You can change the level at runtime without restarting the process:
// Change log level at runtime (e.g., via an admin endpoint)
logger.level = 'debug';
// Or via environment variable at startup
var logger = winston.createLogger({
level: process.env.LOG_LEVEL || 'info'
});
Use levels correctly:
// GOOD: Appropriate level usage
logger.error('Payment processing failed', { orderId: 'ord_123', error: err.message, stack: err.stack });
logger.warn('Rate limit threshold at 80%', { current: 80, limit: 100, ip: req.ip });
logger.info('Order created', { orderId: 'ord_123', userId: 'usr_456', total: 99.99 });
logger.http('Request completed', { method: 'POST', url: '/api/orders', status: 201, duration: 142 });
logger.debug('Cache lookup', { key: 'user:usr_456', hit: true, ttl: 3600 });
// BAD: Wrong levels
logger.error('User logged in'); // Not an error
logger.info('Database connection failed'); // This IS an error
logger.debug('Server started on port 3000'); // This should be info
Structured JSON Logging
Structured logging means every log entry is a machine-parseable data structure -- typically JSON -- rather than a human-readable text string. This is not optional for production systems.
Why structured beats text:
Text log:
[2026-02-08 14:30:01] ERROR: Failed to process order ord_123 for user usr_456 - ECONNREFUSED to payment-service:443
Structured log:
{
"level": "error",
"message": "Failed to process order",
"timestamp": "2026-02-08T14:30:01.234Z",
"orderId": "ord_123",
"userId": "usr_456",
"error": "ECONNREFUSED",
"service": "payment-service",
"port": 443
}
The text log requires regex to extract any field. The structured log lets you query directly: "show me all errors where service equals payment-service in the last hour." Every log aggregation tool -- ELK, Datadog, CloudWatch, Grafana Loki -- works better with structured JSON.
Winston makes this the default with its JSON format:
var winston = require('winston');
var logger = winston.createLogger({
format: winston.format.combine(
winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss.SSS' }),
winston.format.errors({ stack: true }),
winston.format.json()
),
transports: [
new winston.transports.Console()
]
});
For development, you might want human-readable output. Use a conditional format:
var isProduction = process.env.NODE_ENV === 'production';
var devFormat = winston.format.combine(
winston.format.colorize(),
winston.format.timestamp({ format: 'HH:mm:ss' }),
winston.format.printf(function(info) {
var meta = Object.assign({}, info);
delete meta.level;
delete meta.message;
delete meta.timestamp;
var metaStr = Object.keys(meta).length ? ' ' + JSON.stringify(meta) : '';
return info.timestamp + ' ' + info.level + ': ' + info.message + metaStr;
})
);
var prodFormat = winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
);
var logger = winston.createLogger({
level: isProduction ? 'info' : 'debug',
format: isProduction ? prodFormat : devFormat,
transports: [
new winston.transports.Console()
]
});
Development output:
14:30:01 info: Server started {"port":3000}
14:30:02 debug: Cache initialized {"size":0,"maxSize":1000}
14:30:05 error: Database query failed {"query":"SELECT * FROM users","error":"ECONNREFUSED"}
Production output:
{"level":"info","message":"Server started","port":3000,"timestamp":"2026-02-08T14:30:01.234Z"}
Creating a Reusable Logger Module
Do not configure Winston in every file. Create a single logger module and import it everywhere:
// lib/logger.js
var winston = require('winston');
var path = require('path');
var isProduction = process.env.NODE_ENV === 'production';
var devFormat = winston.format.combine(
winston.format.colorize(),
winston.format.timestamp({ format: 'HH:mm:ss.SSS' }),
winston.format.printf(function(info) {
var meta = Object.assign({}, info);
delete meta.level;
delete meta.message;
delete meta.timestamp;
delete meta.service;
var metaStr = Object.keys(meta).length ? ' ' + JSON.stringify(meta) : '';
return info.timestamp + ' ' + info.level + ': ' + info.message + metaStr;
})
);
var prodFormat = winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
);
var transports = [
new winston.transports.Console({
handleExceptions: true,
handleRejections: true
})
];
// Add file transport in production
if (isProduction) {
transports.push(
new winston.transports.File({
filename: path.join(__dirname, '..', 'logs', 'error.log'),
level: 'error',
maxsize: 10 * 1024 * 1024, // 10MB
maxFiles: 5
}),
new winston.transports.File({
filename: path.join(__dirname, '..', 'logs', 'combined.log'),
maxsize: 10 * 1024 * 1024,
maxFiles: 10
})
);
}
var logger = winston.createLogger({
level: process.env.LOG_LEVEL || (isProduction ? 'info' : 'debug'),
format: isProduction ? prodFormat : devFormat,
defaultMeta: { service: process.env.SERVICE_NAME || 'my-api' },
transports: transports,
exitOnError: false
});
module.exports = logger;
Usage in any file:
var logger = require('./lib/logger');
logger.info('User created', { userId: 'usr_123', email: '[email protected]' });
logger.error('Payment failed', { orderId: 'ord_456', error: err.message, stack: err.stack });
Express.js Request Logging Middleware
Every HTTP request should be logged with method, URL, status code, response time, and a request ID. Here is production-grade request logging middleware:
// middleware/requestLogger.js
var crypto = require('crypto');
var logger = require('../lib/logger');
function requestLogger(req, res, next) {
// Generate or propagate request ID
var requestId = req.headers['x-request-id'] || crypto.randomBytes(8).toString('hex');
req.requestId = requestId;
res.setHeader('X-Request-Id', requestId);
// Capture start time
var startTime = process.hrtime.bigint();
// Log when response finishes
res.on('finish', function() {
var duration = Number(process.hrtime.bigint() - startTime) / 1e6; // Convert to ms
var logData = {
requestId: requestId,
method: req.method,
url: req.originalUrl,
status: res.statusCode,
duration: Math.round(duration * 100) / 100,
contentLength: res.get('Content-Length') || 0,
userAgent: req.get('User-Agent'),
ip: req.ip || req.socket.remoteAddress // req.connection is deprecated; use req.socket
};
// Add user ID if authenticated
if (req.user && req.user.id) {
logData.userId = req.user.id;
}
// Choose level based on status code
if (res.statusCode >= 500) {
logger.error('Request failed', logData);
} else if (res.statusCode >= 400) {
logger.warn('Client error', logData);
} else {
logger.http('Request completed', logData);
}
});
next();
}
module.exports = requestLogger;
Register it in your Express app:
var express = require('express');
var requestLogger = require('./middleware/requestLogger');
var app = express();
app.use(requestLogger);
Example output:
{"level":"http","message":"Request completed","requestId":"a3f1b2c4d5e6f7a8","method":"GET","url":"/api/users/123","status":200,"duration":12.45,"contentLength":"284","userAgent":"Mozilla/5.0","ip":"192.168.1.100","service":"my-api","timestamp":"2026-02-08T14:30:01.234Z"}
{"level":"warn","message":"Client error","requestId":"b4c2d3e5f6a7b8c9","method":"POST","url":"/api/users","status":400,"duration":3.21,"contentLength":"128","userAgent":"curl/7.88.1","ip":"192.168.1.100","service":"my-api","timestamp":"2026-02-08T14:30:02.456Z"}
{"level":"error","message":"Request failed","requestId":"c5d3e4f6a7b8c9d0","method":"GET","url":"/api/orders","status":500,"duration":5023.89,"contentLength":"64","userAgent":"axios/1.6.0","ip":"10.0.0.50","service":"my-api","timestamp":"2026-02-08T14:30:07.891Z"}
Logging Sensitive Data Safely
This is non-negotiable: passwords, tokens, API keys, credit card numbers, and PII must never appear in your logs. They end up in log aggregation systems, get indexed, get backed up, and eventually get breached. Build a redaction layer and apply it consistently.
// lib/redact.js
var SENSITIVE_KEYS = [
'password', 'passwd', 'secret', 'token', 'accessToken', 'refreshToken',
'authorization', 'apiKey', 'api_key', 'creditCard', 'credit_card',
'cardNumber', 'card_number', 'cvv', 'ssn', 'socialSecurity',
'private_key', 'privateKey'
];
var SENSITIVE_PATTERNS = [
{ pattern: /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, replacement: '[EMAIL_REDACTED]' },
{ pattern: /\b\d{4}[-\s]?\d{4}[-\s]?\d{4}[-\s]?\d{4}\b/g, replacement: '[CARD_REDACTED]' },
{ pattern: /\b\d{3}-\d{2}-\d{4}\b/g, replacement: '[SSN_REDACTED]' }
];
function redact(obj, depth) {
if (depth === undefined) depth = 0;
if (depth > 10) return obj; // Prevent infinite recursion
if (obj === null || obj === undefined) return obj;
if (typeof obj === 'string') return redactString(obj);
if (typeof obj !== 'object') return obj;
if (Array.isArray(obj)) {
return obj.map(function(item) { return redact(item, depth + 1); });
}
var result = {};
Object.keys(obj).forEach(function(key) {
var lowerKey = key.toLowerCase();
var isSensitive = SENSITIVE_KEYS.some(function(sensitiveKey) {
return lowerKey === sensitiveKey.toLowerCase();
});
if (isSensitive) {
result[key] = '[REDACTED]';
} else {
result[key] = redact(obj[key], depth + 1);
}
});
return result;
}
function redactString(str) {
var result = str;
SENSITIVE_PATTERNS.forEach(function(p) {
result = result.replace(p.pattern, p.replacement);
});
return result;
}
module.exports = { redact: redact };
Use it in your logger by creating a custom Winston format:
var redactor = require('./redact');
var redactFormat = winston.format(function(info) {
// Redact a shallow copy, then merge the result back into info. Returning a
// brand-new object would drop Winston's internal Symbol properties (level,
// message), which transports rely on for level filtering.
return Object.assign(info, redactor.redact(Object.assign({}, info)));
});
var logger = winston.createLogger({
format: winston.format.combine(
redactFormat(),
winston.format.timestamp(),
winston.format.json()
),
transports: [new winston.transports.Console()]
});
// Test it
logger.info('User login attempt', {
email: '[email protected]',
password: 'hunter2',
token: 'eyJhbGciOiJIUzI1NiJ9.secret'
});
Output:
{"level":"info","message":"User login attempt","email":"[EMAIL_REDACTED]","password":"[REDACTED]","token":"[REDACTED]","timestamp":"2026-02-08T14:30:01.234Z"}
A simpler approach if you just want to redact request bodies in your middleware:
function sanitizeBody(body) {
if (!body) return undefined;
var clean = Object.assign({}, body);
var sensitiveFields = ['password', 'token', 'secret', 'apiKey', 'creditCard', 'ssn', 'cvv'];
sensitiveFields.forEach(function(field) {
if (clean[field] !== undefined) {
clean[field] = '[REDACTED]';
}
});
return clean;
}
Multiple Transports
Winston's transport system lets you send different log levels to different destinations. This is one of its strongest features.
var winston = require('winston');
var path = require('path');
var logger = winston.createLogger({
level: 'debug',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
),
defaultMeta: { service: 'order-api' },
transports: [
// Console: all levels in dev, info+ in prod
new winston.transports.Console({
level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
format: process.env.NODE_ENV === 'production'
? winston.format.json()
: winston.format.combine(
winston.format.colorize(),
winston.format.simple()
)
}),
// Error log file: only errors
new winston.transports.File({
filename: path.join(__dirname, 'logs', 'error.log'),
level: 'error',
maxsize: 10 * 1024 * 1024,
maxFiles: 5,
tailable: true
}),
// Combined log file: everything at info level and above
new winston.transports.File({
filename: path.join(__dirname, 'logs', 'combined.log'),
level: 'info',
maxsize: 50 * 1024 * 1024,
maxFiles: 10,
tailable: true
})
]
});
For external services, Winston has community transports for nearly everything:
npm install winston-transport-sentry-node # Sentry
npm install winston-elasticsearch # Elasticsearch
npm install winston-cloudwatch # AWS CloudWatch
npm install datadog-winston # Datadog
Example adding a Sentry transport for errors only:
var SentryTransport = require('winston-transport-sentry-node').default;
logger.add(new SentryTransport({
sentry: {
dsn: process.env.SENTRY_DSN
},
level: 'error'
}));
Log Rotation with winston-daily-rotate-file
In production, log files grow without bound unless you rotate them. The winston-daily-rotate-file transport handles this automatically:
npm install winston-daily-rotate-file
var winston = require('winston');
var DailyRotateFile = require('winston-daily-rotate-file');
var rotateTransport = new DailyRotateFile({
filename: 'logs/app-%DATE%.log',
datePattern: 'YYYY-MM-DD',
zippedArchive: true, // Compress rotated files
maxSize: '50m', // Rotate when file exceeds 50MB
maxFiles: '30d', // Keep logs for 30 days
level: 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
)
});
var errorRotateTransport = new DailyRotateFile({
filename: 'logs/error-%DATE%.log',
datePattern: 'YYYY-MM-DD',
zippedArchive: true,
maxSize: '20m',
maxFiles: '90d', // Keep error logs longer
level: 'error'
});
// Listen for rotation events
rotateTransport.on('rotate', function(oldFilename, newFilename) {
logger.info('Log file rotated', { oldFile: oldFilename, newFile: newFilename });
});
rotateTransport.on('archive', function(zipFilename) {
logger.info('Log file archived', { file: zipFilename });
});
var logger = winston.createLogger({
format: winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
),
defaultMeta: { service: 'my-api' },
transports: [
new winston.transports.Console(),
rotateTransport,
errorRotateTransport
]
});
module.exports = logger;
Your logs directory will look like this:
logs/
app-2026-02-06.log.gz
app-2026-02-07.log.gz
app-2026-02-08.log (current)
error-2026-02-06.log.gz
error-2026-02-07.log.gz
error-2026-02-08.log (current)
Error Logging with Stack Traces and Context
When an error occurs, the stack trace alone is not enough. You need the request context, the user, the input data, and ideally the state of the system when the error happened.
function logError(err, context) {
var logData = {
error: {
message: err.message,
name: err.name,
code: err.code || err.errorCode,
stack: err.stack
}
};
// Merge context
if (context) {
Object.keys(context).forEach(function(key) {
logData[key] = context[key];
});
}
// Add system state
logData.memory = {
heapUsed: Math.round(process.memoryUsage().heapUsed / 1024 / 1024) + 'MB',
rss: Math.round(process.memoryUsage().rss / 1024 / 1024) + 'MB'
};
logData.uptime = Math.round(process.uptime()) + 's';
logger.error(err.message, logData);
}
// Usage in Express error middleware
app.use(function(err, req, res, next) {
logError(err, {
requestId: req.requestId,
method: req.method,
url: req.originalUrl,
userId: req.user ? req.user.id : null,
body: sanitizeBody(req.body),
query: req.query,
ip: req.ip
});
var statusCode = err.statusCode || 500;
res.status(statusCode).json({
error: {
message: statusCode === 500 ? 'Internal server error' : err.message,
requestId: req.requestId
}
});
});
Output for a database error:
{
"level": "error",
"message": "Connection terminated unexpectedly",
"error": {
"message": "Connection terminated unexpectedly",
"name": "Error",
"code": "ECONNRESET",
"stack": "Error: Connection terminated unexpectedly\n at Connection.con.once (/app/node_modules/pg/lib/client.js:132:73)\n at Object.onceWrapper (node:events:628:26)\n at Connection.emit (node:events:513:28)"
},
"requestId": "a3f1b2c4d5e6f7a8",
"method": "GET",
"url": "/api/orders/ord_789",
"userId": "usr_456",
"ip": "192.168.1.100",
"memory": { "heapUsed": "87MB", "rss": "124MB" },
"uptime": "43200s",
"service": "order-api",
"timestamp": "2026-02-08T03:14:22.847Z"
}
Child Loggers for Request-Scoped Context
Instead of passing request metadata to every log call manually, use Winston's child logger feature to create a logger that automatically includes request context:
// middleware/requestContext.js
var logger = require('../lib/logger');
var crypto = require('crypto');
function requestContext(req, res, next) {
var requestId = req.headers['x-request-id'] || crypto.randomBytes(8).toString('hex');
req.requestId = requestId;
res.setHeader('X-Request-Id', requestId);
// Create a child logger with request context baked in
req.log = logger.child({
requestId: requestId,
method: req.method,
url: req.originalUrl
});
next();
}
module.exports = requestContext;
Now every log call through req.log automatically includes the request ID, method, and URL:
app.get('/api/orders/:id', asyncHandler(async function(req, res) {
req.log.info('Fetching order', { orderId: req.params.id });
var order = await db.getOrder(req.params.id);
if (!order) {
req.log.warn('Order not found', { orderId: req.params.id });
return res.status(404).json({ error: 'Order not found' });
}
req.log.info('Order retrieved', { orderId: order.id, total: order.total });
res.json({ data: order });
}));
Every log entry from this request automatically includes requestId, method, and url without you specifying them:
{"level":"info","message":"Fetching order","orderId":"ord_789","requestId":"a3f1b2c4","method":"GET","url":"/api/orders/ord_789","service":"my-api","timestamp":"2026-02-08T14:30:01.234Z"}
{"level":"info","message":"Order retrieved","orderId":"ord_789","total":149.99,"requestId":"a3f1b2c4","method":"GET","url":"/api/orders/ord_789","service":"my-api","timestamp":"2026-02-08T14:30:01.248Z"}
Correlation IDs Across Microservices
In a microservice architecture, a single user action might touch five different services. A correlation ID (also called a trace ID) ties all those log entries together across service boundaries.
The pattern is simple: propagate the request ID through HTTP headers when calling downstream services.
var axios = require('axios');
var logger = require('./lib/logger');
function createServiceClient(serviceName, baseURL) {
var client = axios.create({
baseURL: baseURL,
timeout: 10000
});
// Propagate correlation ID on every outgoing request
client.interceptors.request.use(function(config) {
// Attach correlation ID from the current request context
if (config.correlationId) {
config.headers['X-Request-Id'] = config.correlationId;
}
return config;
});
// Log outgoing requests
client.interceptors.response.use(
function(response) {
logger.http('Downstream call succeeded', {
service: serviceName,
method: response.config.method.toUpperCase(),
url: response.config.url,
status: response.status,
correlationId: response.config.headers['X-Request-Id']
});
return response;
},
function(err) {
logger.error('Downstream call failed', {
service: serviceName,
method: err.config ? err.config.method.toUpperCase() : 'UNKNOWN',
url: err.config ? err.config.url : 'UNKNOWN',
error: err.message,
status: err.response ? err.response.status : null,
correlationId: err.config ? err.config.headers['X-Request-Id'] : null
});
throw err;
}
);
return client;
}
// Usage
var paymentService = createServiceClient('payment-service', 'http://payment-api:3001');
var inventoryService = createServiceClient('inventory-service', 'http://inventory-api:3002');
app.post('/api/orders', asyncHandler(async function(req, res) {
req.log.info('Creating order');
// Both downstream calls carry the same correlation ID
var inventory = await inventoryService.get('/check/' + req.body.productId, {
correlationId: req.requestId
});
var payment = await paymentService.post('/charge', {
amount: req.body.total,
method: req.body.paymentMethod
}, {
correlationId: req.requestId
});
req.log.info('Order created', { orderId: 'ord_new', paymentId: payment.data.id });
res.status(201).json({ orderId: 'ord_new' });
}));
Now you can search for requestId: "a3f1b2c4" in your log aggregation tool and see the complete trace across all three services:
[order-api] info: Creating order requestId=a3f1b2c4
[order-api] http: Downstream call succeeded service=inventory-service requestId=a3f1b2c4
[inventory-api] info: Stock check requestId=a3f1b2c4 product=prod_123 available=true
[order-api] http: Downstream call succeeded service=payment-service requestId=a3f1b2c4
[payment-api] info: Payment processed requestId=a3f1b2c4 amount=49.99 status=success
[order-api] info: Order created requestId=a3f1b2c4 orderId=ord_new
Performance Considerations
Logging is I/O. Done carelessly, it becomes a bottleneck. Here are the things that actually matter.
Use asynchronous transports. Winston's console and file transports are asynchronous by default -- logger.info() returns immediately and the write happens in the background. Do not use synchronous logging in request-handling paths.
Stream writes to disk. The file transport writes through a Node write stream, which batches entries at the stream and OS level rather than flushing each one individually. Open the file in append mode and cap its size:
var transport = new winston.transports.File({
filename: 'logs/combined.log',
options: { flags: 'a' }, // Append mode
maxsize: 50 * 1024 * 1024,
maxFiles: 10
});
Avoid logging large objects. Serializing a 10KB object to JSON on every request adds up. Log only the fields you need:
// BAD: Logging entire request and response objects
logger.info('Request', { req: req, res: res }); // Enormous payload of internals; circular references serialize as noise at best
// GOOD: Log only what you need
logger.info('Request', { method: req.method, url: req.url, status: res.statusCode });
Control log volume with levels. If you are generating 100GB of logs per day, you are probably logging too much at the wrong level. Keep debug disabled in production. Use info for operational events and error for failures.
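A back-of-envelope calculation makes that volume concrete. The entry size and rate below are illustrative assumptions, not measurements:

```javascript
// Estimate daily log volume from average entry size and emission rate.
var bytesPerEntry = 500;      // assumed average structured entry with metadata
var entriesPerSecond = 2000;  // assumed: e.g. 1000 req/s emitting 2 lines each
var secondsPerDay = 86400;

var gbPerDay = (bytesPerEntry * entriesPerSecond * secondsPerDay) / 1e9;
// 500 * 2000 * 86400 / 1e9 = 86.4 GB/day -- at this rate, http-level
// logging likely needs sampling or a shorter retention tier
```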
Sample verbose logging. If you need debug-level visibility in production but cannot afford the volume, sample:
function shouldSample(rate) {
return Math.random() < rate;
}
app.use(function(req, res, next) {
// Log detailed request info for 1% of requests
if (shouldSample(0.01)) {
logger.debug('Sampled request detail', {
headers: req.headers,
query: req.query,
body: sanitizeBody(req.body)
});
}
next();
});
Log Aggregation Patterns
Running tail -f on a log file works for one server. When you have multiple instances, containers, or microservices, you need log aggregation.
ELK Stack (Elasticsearch + Logstash + Kibana):
Ship JSON logs to Logstash via Filebeat or directly via the Winston Elasticsearch transport:
npm install winston-elasticsearch
var ElasticsearchTransport = require('winston-elasticsearch').ElasticsearchTransport;
var esTransport = new ElasticsearchTransport({
level: 'info',
clientOpts: {
node: process.env.ELASTICSEARCH_URL || 'http://localhost:9200'
},
indexPrefix: 'app-logs',
indexSuffixPattern: 'YYYY-MM-DD',
transformer: function(logData) {
return {
'@timestamp': logData.timestamp,
severity: logData.level,
message: logData.message,
service: logData.meta.service,
fields: logData.meta
};
}
});
logger.add(esTransport);
Datadog:
npm install datadog-winston
var DatadogTransport = require('datadog-winston');
logger.add(new DatadogTransport({
apiKey: process.env.DATADOG_API_KEY,
hostname: require('os').hostname(),
service: 'my-api',
ddsource: 'nodejs'
}));
AWS CloudWatch:
npm install winston-cloudwatch
var CloudWatchTransport = require('winston-cloudwatch');
logger.add(new CloudWatchTransport({
logGroupName: '/app/my-api',
logStreamName: require('os').hostname() + '-' + process.pid,
awsRegion: process.env.AWS_REGION || 'us-east-1',
jsonMessage: true,
retentionInDays: 30
}));
Container-based (Docker/Kubernetes): The simplest approach is to log JSON to stdout and let the container runtime handle aggregation. Most container platforms (ECS, Kubernetes, DigitalOcean App Platform) capture stdout/stderr and route it to their logging infrastructure. In this pattern, you only need the Console transport:
// For containerized deployments, stdout is all you need
var logger = winston.createLogger({
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
),
transports: [
new winston.transports.Console()
]
});
What to Log and What NOT to Log
Always log:
- Application startup and shutdown events
- Every HTTP request (method, URL, status, duration, request ID)
- Authentication events (login, logout, failed attempts)
- Business events (order created, payment processed, user registered)
- Errors with full stack traces and request context
- External service calls (URL, status, duration)
- Configuration changes and deployments
Never log:
- Passwords, secrets, API keys, tokens
- Credit card numbers, CVVs, SSNs
- Full JWT tokens (log a short hash of the token if you need to correlate requests)
- Health check requests (they create enormous noise -- filter them out)
- Request/response bodies by default (log them only at debug level, redacted)
- Large binary data or file contents
// Filter out health check noise
app.use(function(req, res, next) {
if (req.path === '/health' || req.path === '/ready') {
return next(); // Skip logging for health checks
}
// Your request logging middleware here
requestLogger(req, res, next);
});
Complete Working Example
Here is a production Express.js application with a complete Winston logging setup: console + file transports, request logging with timing, error logging with context, log rotation, and environment-specific configuration.
// app.js -- Production Logging Example with Winston
var express = require('express');
var http = require('http');
var crypto = require('crypto');
var path = require('path');
var winston = require('winston');
var DailyRotateFile = require('winston-daily-rotate-file');
// ============================================================
// Logger Configuration
// ============================================================
var isProduction = process.env.NODE_ENV === 'production';
var devFormat = winston.format.combine(
winston.format.colorize(),
winston.format.timestamp({ format: 'HH:mm:ss.SSS' }),
winston.format.printf(function(info) {
var meta = Object.assign({}, info);
delete meta.level;
delete meta.message;
delete meta.timestamp;
delete meta.service;
var keys = Object.keys(meta);
var metaStr = keys.length > 0 ? ' ' + JSON.stringify(meta) : '';
return info.timestamp + ' ' + info.level + ': ' + info.message + metaStr;
})
);
var prodFormat = winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
);
var transports = [
new winston.transports.Console({
level: isProduction ? 'info' : 'debug',
format: isProduction ? prodFormat : devFormat,
handleExceptions: true,
handleRejections: true
})
];
if (isProduction) {
transports.push(
new DailyRotateFile({
filename: path.join(__dirname, 'logs', 'app-%DATE%.log'),
datePattern: 'YYYY-MM-DD',
zippedArchive: true,
maxSize: '50m',
maxFiles: '30d',
level: 'info',
format: prodFormat
}),
new DailyRotateFile({
filename: path.join(__dirname, 'logs', 'error-%DATE%.log'),
datePattern: 'YYYY-MM-DD',
zippedArchive: true,
maxSize: '20m',
maxFiles: '90d',
level: 'error',
format: prodFormat
})
);
}
var logger = winston.createLogger({
level: process.env.LOG_LEVEL || (isProduction ? 'info' : 'debug'),
format: prodFormat,
defaultMeta: { service: process.env.SERVICE_NAME || 'my-api' },
transports: transports,
exitOnError: false
});
// ============================================================
// Sensitive Data Redaction
// ============================================================
var SENSITIVE_KEYS = ['password', 'token', 'secret', 'apiKey', 'api_key',
'authorization', 'creditCard', 'cardNumber', 'cvv', 'ssn'];
function sanitizeBody(body) {
if (!body) return undefined;
var clean = Object.assign({}, body);
Object.keys(clean).forEach(function(key) {
var lowerKey = key.toLowerCase();
var isSensitive = SENSITIVE_KEYS.some(function(sk) {
return lowerKey === sk.toLowerCase();
});
if (isSensitive) {
clean[key] = '[REDACTED]';
}
});
return clean;
}
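// NOTE (sketch): sanitizeBody only redacts top-level keys, so a nested
// payload like { user: { password: '...' } } slips through. A recursive
// variant (hypothetical name sanitizeDeep, reusing SENSITIVE_KEYS above)
// covers nested objects and arrays. JSON-parsed request bodies cannot
// contain circular references, so unbounded recursion is not a concern.
function sanitizeDeep(value) {
  if (Array.isArray(value)) return value.map(sanitizeDeep);
  if (value && typeof value === 'object') {
    var clean = {};
    Object.keys(value).forEach(function(key) {
      var isSensitive = SENSITIVE_KEYS.some(function(sk) {
        return key.toLowerCase() === sk.toLowerCase();
      });
      clean[key] = isSensitive ? '[REDACTED]' : sanitizeDeep(value[key]);
    });
    return clean;
  }
  return value;
}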
// ============================================================
// Express App
// ============================================================
var app = express();
app.use(express.json({ limit: '10kb' }));
// Request ID middleware
app.use(function(req, res, next) {
var requestId = req.headers['x-request-id'] || crypto.randomBytes(8).toString('hex');
req.requestId = requestId;
res.setHeader('X-Request-Id', requestId);
// Create child logger with request context
req.log = logger.child({
requestId: requestId
});
next();
});
// Request logging middleware (skip health checks)
app.use(function(req, res, next) {
if (req.path === '/health') return next();
var startTime = process.hrtime.bigint();
res.on('finish', function() {
var duration = Number(process.hrtime.bigint() - startTime) / 1e6;
var logData = {
method: req.method,
url: req.originalUrl,
status: res.statusCode,
duration: Math.round(duration * 100) / 100,
ip: req.ip || req.socket.remoteAddress, // req.connection is deprecated; use req.socket
userAgent: req.get('User-Agent')
};
if (res.statusCode >= 500) {
logger.error('Request failed', Object.assign(logData, { requestId: req.requestId }));
} else if (res.statusCode >= 400) {
logger.warn('Client error', Object.assign(logData, { requestId: req.requestId }));
} else {
logger.http('Request completed', Object.assign(logData, { requestId: req.requestId })); // note: 'http' ranks below 'info', so these lines are filtered out by the production transports configured above
}
});
next();
});
// ============================================================
// Async Handler Wrapper
// ============================================================
function asyncHandler(fn) {
return function(req, res, next) {
Promise.resolve(fn(req, res, next)).catch(next);
};
}
// ============================================================
// Simulated Database
// ============================================================
var users = {
'usr_1': { id: 'usr_1', name: 'Alice Chen', email: '[email protected]', role: 'admin' },
'usr_2': { id: 'usr_2', name: 'Bob Park', email: '[email protected]', role: 'user' }
};
var orders = {
'ord_1': { id: 'ord_1', userId: 'usr_1', total: 149.99, status: 'completed' },
'ord_2': { id: 'ord_2', userId: 'usr_2', total: 29.99, status: 'pending' }
};
// ============================================================
// Routes
// ============================================================
app.get('/health', function(req, res) {
res.json({ status: 'healthy', uptime: Math.round(process.uptime()) });
});
app.get('/api/users/:id', asyncHandler(async function(req, res) {
req.log.info('Fetching user', { userId: req.params.id });
var user = users[req.params.id];
if (!user) {
req.log.warn('User not found', { userId: req.params.id });
return res.status(404).json({ error: 'User not found' });
}
req.log.debug('User data retrieved', { userId: user.id, role: user.role });
res.json({ data: user });
}));
app.post('/api/users', asyncHandler(async function(req, res) {
req.log.info('Creating user', { email: req.body.email });
if (!req.body.name || !req.body.email) {
req.log.warn('Validation failed for user creation', {
hasName: !!req.body.name,
hasEmail: !!req.body.email
});
return res.status(400).json({
error: 'Validation failed',
details: {
name: req.body.name ? null : 'Name is required',
email: req.body.email ? null : 'Email is required'
}
});
}
var id = 'usr_' + Date.now();
var user = { id: id, name: req.body.name, email: req.body.email, role: 'user' };
users[id] = user;
req.log.info('User created', { userId: id });
res.status(201).json({ data: user });
}));
app.get('/api/orders/:id', asyncHandler(async function(req, res) {
req.log.info('Fetching order', { orderId: req.params.id });
var order = orders[req.params.id];
if (!order) {
req.log.warn('Order not found', { orderId: req.params.id });
return res.status(404).json({ error: 'Order not found' });
}
res.json({ data: order });
}));
// Simulate an error
app.get('/api/error', asyncHandler(async function(req, res) {
req.log.info('About to trigger an error');
throw new Error('Simulated database connection failure');
}));
// ============================================================
// 404 Handler
// ============================================================
app.use(function(req, res, next) {
res.status(404).json({
error: 'Not Found',
path: req.originalUrl,
requestId: req.requestId
});
});
// ============================================================
// Error Middleware
// ============================================================
app.use(function(err, req, res, next) {
var statusCode = err.statusCode || 500;
// Log with full context
var logMeta = {
requestId: req.requestId,
method: req.method,
url: req.originalUrl,
ip: req.ip,
error: {
message: err.message,
name: err.name,
stack: err.stack,
code: err.code
},
body: sanitizeBody(req.body)
};
if (statusCode >= 500) {
logMeta.memory = {
heapUsed: Math.round(process.memoryUsage().heapUsed / 1024 / 1024) + 'MB'
};
logger.error('Unhandled error', logMeta);
} else {
logger.warn('Client error', logMeta);
}
res.status(statusCode).json({
error: statusCode === 500 && isProduction
? 'Internal server error'
: err.message,
requestId: req.requestId
});
});
// ============================================================
// Server Startup
// ============================================================
var PORT = process.env.PORT || 3000;
var server = http.createServer(app);
server.listen(PORT, function() {
logger.info('Server started', {
port: PORT,
environment: process.env.NODE_ENV || 'development',
nodeVersion: process.version,
pid: process.pid
});
});
// ============================================================
// Graceful Shutdown
// ============================================================
var connections = new Set();
var isShuttingDown = false;
server.on('connection', function(conn) {
connections.add(conn);
conn.on('close', function() { connections.delete(conn); });
});
function gracefulShutdown(signal) {
if (isShuttingDown) return;
isShuttingDown = true;
logger.info('Shutting down', { signal: signal, openConnections: connections.size });
server.close(function() {
logger.info('Server closed. Exiting.');
process.exit(0);
});
connections.forEach(function(conn) { conn.end(); });
setTimeout(function() {
logger.error('Shutdown timed out. Forcing exit.');
process.exit(1);
}, 15000);
}
process.on('SIGTERM', function() { gracefulShutdown('SIGTERM'); });
process.on('SIGINT', function() { gracefulShutdown('SIGINT'); });
Run it:
npm install express winston winston-daily-rotate-file
node app.js
Test the endpoints:
# Normal request
curl http://localhost:3000/api/users/usr_1
# Not found
curl http://localhost:3000/api/users/usr_999
# Create a user
curl -X POST http://localhost:3000/api/users \
-H "Content-Type: application/json" \
-d '{"name": "Charlie", "email": "[email protected]"}'
# Validation error
curl -X POST http://localhost:3000/api/users \
-H "Content-Type: application/json" \
-d '{}'
# Trigger an error
curl http://localhost:3000/api/error
# With a custom request ID
curl -H "X-Request-Id: my-trace-123" http://localhost:3000/api/orders/ord_1
Development console output:
14:30:01.234 info: Server started {"port":3000,"environment":"development","nodeVersion":"v20.17.0","pid":12345}
14:30:05.100 info: Fetching user {"userId":"usr_1","requestId":"a3f1b2c4"}
14:30:05.101 debug: User data retrieved {"userId":"usr_1","role":"admin","requestId":"a3f1b2c4"}
14:30:05.102 http: Request completed {"method":"GET","url":"/api/users/usr_1","status":200,"duration":2.34,"requestId":"a3f1b2c4"}
14:30:08.200 info: Fetching user {"userId":"usr_999","requestId":"b4c2d3e5"}
14:30:08.201 warn: User not found {"userId":"usr_999","requestId":"b4c2d3e5"}
14:30:08.202 warn: Client error {"method":"GET","url":"/api/users/usr_999","status":404,"duration":1.89,"requestId":"b4c2d3e5"}
14:30:12.300 error: Unhandled error {"requestId":"c5d3e4f6","method":"GET","url":"/api/error","error":{"message":"Simulated database connection failure","name":"Error","stack":"Error: Simulated..."}}
14:30:12.301 error: Request failed {"method":"GET","url":"/api/error","status":500,"duration":0.98,"requestId":"c5d3e4f6"}
Common Issues and Troubleshooting
1. "TypeError: winston.createLogger is not a function"
TypeError: winston.createLogger is not a function
at Object.<anonymous> (/app/lib/logger.js:5:24)
You are calling the Winston v3 API against a Winston v2 installation. Winston v3 changed the API significantly, and createLogger was only introduced in v3. If you see this, check your installed version:
npm ls winston
If you are on v2, upgrade:
npm install winston@latest
If you are stuck on v2, use new winston.Logger() instead of winston.createLogger().
2. "ENOENT: no such file or directory, open 'logs/app-2026-02-08.log'"
Error: ENOENT: no such file or directory, open '/app/logs/app-2026-02-08.log'
at Object.openSync (node:fs:601:3)
Winston's file transport does not create directories automatically. You need to create the logs/ directory before starting the application:
mkdir -p logs
Or create it programmatically at startup:
var fs = require('fs');
var logDir = path.join(__dirname, 'logs');
if (!fs.existsSync(logDir)) {
fs.mkdirSync(logDir, { recursive: true });
}
3. "Maximum call stack size exceeded" when logging objects with circular references.
RangeError: Maximum call stack size exceeded
at JSON.stringify (<anonymous>)
at /app/node_modules/winston/lib/winston/transports/console.js:45:15
Express req and res objects are deeply nested and contain circular references. If you accidentally log them directly, serialization can blow the call stack or crash outright:
// THIS WILL CRASH
logger.info('Request received', { req: req });
// FIX: Log only the fields you need
logger.info('Request received', { method: req.method, url: req.url, ip: req.ip });
If you must log complex objects that might have circular references, add a safe serializer format:
var safeFormat = winston.format(function(info) {
try {
JSON.stringify(info);
return info;
} catch (err) {
return Object.assign({}, info, {
_serializationError: 'Object contained circular references'
});
}
});
// Apply it first in the combine chain, e.g.:
// winston.format.combine(safeFormat(), winston.format.timestamp(), winston.format.json())
4. "Transport already attached" or duplicate log entries appearing.
warn: Transport already attached: console, assign a unique 'name' to transports
This happens when you call logger.add() multiple times with the same transport type, or when your logger module is initialized more than once (CommonJS caches modules per resolved path, so the same file required via different paths -- a symlink, or inconsistent casing on a case-insensitive filesystem -- loads twice). Ensure your logger is a singleton:
// BAD: Creates a new transport every time the module loads
logger.add(new winston.transports.Console());
// GOOD: Only add transports in the initial configuration
var logger = winston.createLogger({
transports: [new winston.transports.Console()]
});
// Do not call logger.add() elsewhere
If you see duplicate log lines, also check that you have not accidentally created two loggers or added the same transport twice in conditional logic.
5. Log files growing without bound despite configuring maxsize.
Winston's built-in File transport maxsize rotates the file once it exceeds the limit, but does not delete old files unless maxFiles is also set. And it only creates numbered suffixes (.1, .2, etc.), not date-based filenames. Use winston-daily-rotate-file instead for proper date-based rotation with automatic cleanup:
// Built-in File transport: limited rotation
new winston.transports.File({
filename: 'app.log',
maxsize: 10 * 1024 * 1024, // Rotates to app.log.1, app.log.2...
maxFiles: 5 // Keeps only 5 rotated files
});
// Better: winston-daily-rotate-file
new DailyRotateFile({
filename: 'logs/app-%DATE%.log',
maxSize: '50m',
maxFiles: '30d' // Auto-delete files older than 30 days
});
Best Practices
Use structured JSON logging from day one. Switching from text to structured logging after you have 100 routes is painful. Start with JSON, and every tool in your observability stack will thank you. The time you invest upfront pays for itself many times over when you are debugging production issues.
Create a single logger module and import it everywhere. Do not scatter winston.createLogger() calls across your codebase. One module, one configuration, one place to change log levels or add transports. Your logger module is infrastructure -- treat it that way.
Always attach a request ID to every log entry. When a user reports an error, the request ID is the thread that connects their report to your logs. Use child loggers to make this automatic rather than manual. Propagate the ID to downstream services for end-to-end tracing.
Never log sensitive data. Build redaction into your logging pipeline as a format transform, not as something individual developers have to remember. Passwords, tokens, API keys, and PII in your logs are a breach waiting to happen.
Use log levels correctly and consistently. Errors are for things that broke. Warnings are for things that might break. Info is for normal operations. Debug is for developers. If everything is logged at info, you have no way to filter noise from signal during an incident.
Log to stdout in containerized environments. If you are deploying to Docker, Kubernetes, or any managed platform, write JSON to stdout and let the platform handle log collection and routing. Adding file transports inside containers adds complexity with no benefit.
Filter out health check requests. Your load balancer hits /health every 10 seconds. That is 8,640 log entries per day per instance of pure noise. Filter them out of your request logging middleware.
Set up log rotation for file-based logging. Use winston-daily-rotate-file with maxSize and maxFiles configured. Unrotated log files have filled disks and caused outages more times than I can count. This is a five-minute configuration that prevents a category of production incidents.
Test your logging in your test suite. Create a test transport that captures log entries in memory, and assert that your error handlers log the right level with the right metadata. If you refactor your error handling and break your logging, your tests should catch it.
Monitor your log volume. If your application suddenly starts generating 10x the normal log volume, that is often a symptom of an error loop or a debug level accidentally enabled in production. Set up alerts on log volume, not just log content.
