Debugging Memory Leaks in Node.js
A hands-on guide to finding and fixing memory leaks in Node.js applications, covering V8 garbage collection, heap snapshots, Chrome DevTools debugging, and production monitoring strategies.
Memory leaks in Node.js are insidious. Your application works fine in development, passes all its tests, and then three days into production the process is consuming 1.8 GB of RAM and response times have cratered. Unlike a crash, a memory leak degrades slowly -- by the time you notice, your users have been suffering for hours. This guide covers everything you need to find and fix memory leaks in Node.js, from understanding how V8 manages memory to capturing heap snapshots in production and interpreting the results.
Prerequisites
- Node.js v18+ installed (examples tested on v20.17.0)
- Familiarity with Express.js and basic Node.js development
- Chrome or Chromium browser (for DevTools heap analysis)
- Basic understanding of how HTTP servers handle concurrent requests
Install the packages used in the examples (clinic and autocannon are installed globally in their own sections below):
npm install express lru-cache heapdump
How V8 Garbage Collection Works
Before you can fix a memory leak, you need to understand how V8 decides what to keep and what to throw away. V8 uses a generational garbage collector, meaning it divides the heap into regions based on object age.
New Space (Young Generation)
New Space is where objects are born. It is small -- typically 1-8 MB per semi-space -- and is collected frequently using a Scavenge algorithm. Scavenge is a copying collector: it divides New Space into two semi-spaces (From and To), copies surviving objects from From to To, and then swaps the roles. This is fast because most objects die young. If an object survives two scavenge cycles, it gets promoted to Old Space.
Old Space (Old Generation)
Old Space is where long-lived objects end up. It is collected less frequently using Mark-Sweep and Mark-Compact algorithms. Mark-Sweep walks the object graph starting from root references (global scope, stack variables, handles), marks everything reachable, and sweeps away everything else. Mark-Compact additionally defragments the heap by moving surviving objects together, which reduces memory fragmentation but is more expensive.
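You can watch both collectors at work by launching your app with V8's --trace-gc flag; Scavenge lines correspond to New Space collections and Mark-sweep lines to Old Space collections, each showing heap size before and after the pause (you will see this output format again in the troubleshooting section):
node --trace-gc app.js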
What This Means for Memory Leaks
A memory leak in V8 terms is simple: an object that your code no longer needs but that V8 can still reach from a root reference. Because V8 only collects unreachable objects, any reference chain from a root to your leaked object keeps it alive indefinitely. The leak is not in V8 -- it is in your code maintaining a reference it should have released.
// This is not a leak - V8 will collect it after the function returns
function processRequest(data) {
  var result = heavyComputation(data);
  return result.summary;
}

// This IS a leak - the cache grows without bound
var cache = {};
function processRequest(data) {
  var key = data.id;
  if (!cache[key]) {
    cache[key] = heavyComputation(data);
  }
  return cache[key].summary;
}
Common Memory Leak Patterns
After debugging memory leaks across dozens of production Node.js applications, I have seen the same patterns repeat. Here are the most common ones.
1. Unbounded Caches
The most frequent leak I encounter. A developer adds an in-memory cache to speed up lookups but never adds eviction logic. The cache grows linearly with unique inputs until the process runs out of memory.
var userCache = {};

function getUser(userId, callback) {
  if (userCache[userId]) {
    return callback(null, userCache[userId]);
  }
  db.users.findOne({ _id: userId }, function(err, user) {
    if (err) return callback(err);
    userCache[userId] = user; // Never evicted
    callback(null, user);
  });
}
Fix: Use an LRU cache with a maximum size, or use WeakRef (covered later).
var LRU = require("lru-cache");
var userCache = new LRU({ max: 500, ttl: 1000 * 60 * 5 });

function getUser(userId, callback) {
  var cached = userCache.get(userId);
  if (cached) {
    return callback(null, cached);
  }
  db.users.findOne({ _id: userId }, function(err, user) {
    if (err) return callback(err);
    userCache.set(userId, user);
    callback(null, user);
  });
}
2. Event Listener Accumulation
Every call to emitter.on() adds a listener. If you register listeners in a request handler or a loop without removing them, you get a leak. Node.js warns you at 11 listeners by default, but many developers suppress this warning instead of fixing the root cause.
var EventEmitter = require("events");
var bus = new EventEmitter();

// LEAK: Every request adds a listener that is never removed
app.get("/stream", function(req, res) {
  bus.on("data", function(chunk) {
    res.write(chunk);
  });
  req.on("close", function() {
    // Forgot to remove the listener from bus
    res.end();
  });
});
Fix: Store the listener reference and remove it on cleanup.
app.get("/stream", function(req, res) {
var handler = function(chunk) {
res.write(chunk);
};
bus.on("data", handler);
req.on("close", function() {
bus.removeListener("data", handler);
res.end();
});
});
3. Closures Holding Outer Scope
Closures capture variables from their enclosing scope. If a closure outlives the scope it was created in and references large objects from that scope, those objects cannot be collected.
function createHandler(bigDataBuffer) {
  // bigDataBuffer is a 50MB Buffer
  var id = bigDataBuffer.slice(0, 16).toString("hex");
  // This closure captures the entire scope, including bigDataBuffer
  return function handler(req, res) {
    res.json({ id: id });
  };
}
Even though handler only uses id, V8 may retain bigDataBuffer in the closure context depending on the engine's optimization decisions. The fix is to null out large references explicitly or restructure the code so the closure does not share scope with large objects.
function createHandler(bigDataBuffer) {
  var id = bigDataBuffer.slice(0, 16).toString("hex");
  bigDataBuffer = null; // Release the reference
  return function handler(req, res) {
    res.json({ id: id });
  };
}
4. Forgotten Timers and Intervals
setInterval callbacks hold references to their closure scope. If you create intervals without clearing them, the callback and everything it references stays alive.
function monitorConnection(socket) {
  var interval = setInterval(function() {
    if (socket.connected) {
      socket.ping();
    }
    // If socket disconnects, this interval keeps running
    // and keeps `socket` in memory
  }, 5000);
}
Fix: Clear the interval when the resource is cleaned up.
function monitorConnection(socket) {
  var interval = setInterval(function() {
    if (socket.connected) {
      socket.ping();
    }
  }, 5000);
  socket.on("close", function() {
    clearInterval(interval);
  });
}
5. Detached DOM-Like Structures (Server-Side)
This pattern is less obvious. You build a tree structure -- a parsed document, a dependency graph, a routing table -- and keep a reference to a child node after the parent is discarded. The child retains a back-reference to the parent, keeping the entire tree alive.
var lastParsedNode = null;

function parseDocument(xml) {
  var doc = xmlParser.parse(xml); // 10MB parsed tree
  lastParsedNode = doc.body.firstChild; // Holds reference to entire tree
  return lastParsedNode.textContent;
}
Detecting Leaks with process.memoryUsage()
The simplest starting point is process.memoryUsage(). On modern Node.js it returns five fields (including arrayBuffers, covered in the troubleshooting section); the four key metrics are:
var usage = process.memoryUsage();
console.log({
  rss: (usage.rss / 1024 / 1024).toFixed(2) + " MB",             // Resident Set Size
  heapTotal: (usage.heapTotal / 1024 / 1024).toFixed(2) + " MB", // V8 heap allocated
  heapUsed: (usage.heapUsed / 1024 / 1024).toFixed(2) + " MB",   // V8 heap used
  external: (usage.external / 1024 / 1024).toFixed(2) + " MB"    // C++ objects (Buffers)
});
Output on a healthy Express.js app after 10,000 requests:
{ rss: '72.34 MB', heapTotal: '42.18 MB', heapUsed: '38.52 MB', external: '1.86 MB' }
Output on a leaking Express.js app after 10,000 requests:
{ rss: '487.12 MB', heapTotal: '453.67 MB', heapUsed: '441.93 MB', external: '1.92 MB' }
The key indicator is heapUsed growing continuously over time without returning to a baseline after garbage collection. Set up a periodic log to track this:
setInterval(function() {
  var mem = process.memoryUsage();
  console.log(
    "MEMORY",
    new Date().toISOString(),
    "heap_used_mb=" + (mem.heapUsed / 1024 / 1024).toFixed(2),
    "rss_mb=" + (mem.rss / 1024 / 1024).toFixed(2)
  );
}, 30000);
If heapUsed climbs 5-10 MB every 30 seconds under steady load, you have a leak.
Using v8.getHeapStatistics() for Deeper Insight
The built-in v8 module provides more granular heap information than process.memoryUsage():
var v8 = require("v8");

function logHeapStats() {
  var stats = v8.getHeapStatistics();
  console.log({
    total_heap_size_mb: (stats.total_heap_size / 1024 / 1024).toFixed(2),
    used_heap_size_mb: (stats.used_heap_size / 1024 / 1024).toFixed(2),
    heap_size_limit_mb: (stats.heap_size_limit / 1024 / 1024).toFixed(2),
    total_available_size_mb: (stats.total_available_size / 1024 / 1024).toFixed(2),
    number_of_native_contexts: stats.number_of_native_contexts,
    number_of_detached_contexts: stats.number_of_detached_contexts
  });
}
The number_of_detached_contexts field is particularly useful. A detached context is a V8 execution context (like a vm.createContext() or a detached iframe in Electron) that is no longer reachable but has not been garbage collected. If this number keeps growing, you have a context leak.
The heap_size_limit tells you how much memory V8 will allocate before crashing with a fatal "CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory" error. On older Node.js versions this defaulted to roughly 1.5 GB on 64-bit systems; Node.js 12 and later derive the limit from available system memory, so it is commonly 2-4 GB. You can raise it with --max-old-space-size:
node --max-old-space-size=4096 app.js
But increasing the limit only buys you time -- it does not fix the leak.
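To confirm the flag took effect, read the limit back through v8.getHeapStatistics(); the reported value sits slightly above the old-space size you set because it also includes the young generation:
node --max-old-space-size=4096 -e "console.log((require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024).toFixed(0) + ' MB')"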
Heap Snapshots with Chrome DevTools
Heap snapshots are the single most powerful tool for diagnosing memory leaks. A heap snapshot captures every object on the V8 heap, its size, and the reference chains (retainers) keeping it alive.
Connecting with --inspect
Start your Node.js process with the --inspect flag:
node --inspect app.js
Output:
Debugger listening on ws://127.0.0.1:9229/a1b2c3d4-e5f6-7890-abcd-ef1234567890
For help, see: https://nodejs.org/en/docs/inspector
Server listening on port 8080
Open Chrome and navigate to chrome://inspect. Your Node.js process should appear under "Remote Target". Click "inspect" to open DevTools.
Capturing Snapshots
The three-snapshot technique is the gold standard for finding leaks:
- Snapshot 1: Baseline after the app has warmed up. Run a few requests first so initial allocations are out of the way.
- Perform the suspected leaking action (e.g., send 1,000 requests to the endpoint you suspect).
- Force garbage collection: In the DevTools Memory tab, click the trash can icon to force a GC.
- Snapshot 2: Capture after the action.
- Repeat the action (another 1,000 requests).
- Force GC and Snapshot 3.
Now compare Snapshot 2 and Snapshot 3. Objects that appear in "Objects allocated between Snapshot 2 and Snapshot 3" and were not collected are your likely leaks.
Understanding Retainers
When you select an object in the heap snapshot, the bottom panel shows its retainers -- the chain of references keeping it alive. Read retainers bottom-to-top: the bottom entry is the root (usually (GC roots) or Window / global), and each entry above it is a property or variable holding a reference.
Key terminology:
- Shallow Size: Memory consumed by the object itself (its properties, internal fields).
- Retained Size: Memory that would be freed if this object were collected -- including everything it is the sole retainer of. If an object has a retained size of 50 MB but a shallow size of 64 bytes, it is holding a reference to something enormous.
Look for objects where retained size is significantly larger than shallow size. These are your leak anchors.
Programmatic Heap Dumps with heapdump
For production environments where you cannot attach Chrome DevTools, use the heapdump module to write snapshot files that you can analyze offline:
var heapdump = require("heapdump");

// Write a snapshot on SIGUSR2 (Linux/macOS)
process.on("SIGUSR2", function() {
  var filename = "/tmp/heapdump-" + Date.now() + ".heapsnapshot";
  heapdump.writeSnapshot(filename, function(err, filepath) {
    if (err) {
      console.error("Heap dump failed:", err);
    } else {
      console.log("Heap dump written to", filepath);
    }
  });
});

// Or expose via an admin endpoint (protect this in production!)
app.get("/admin/heapdump", function(req, res) {
  var filename = "/tmp/heapdump-" + Date.now() + ".heapsnapshot";
  heapdump.writeSnapshot(filename, function(err, filepath) {
    if (err) {
      res.status(500).json({ error: err.message });
    } else {
      res.json({ file: filepath, size: require("fs").statSync(filepath).size });
    }
  });
});
Trigger from the command line:
kill -USR2 $(pgrep -f "node app.js")
The snapshot file can be 200-500 MB for a process using 1 GB of heap. Download it and load it into Chrome DevTools (Memory tab > Load).
Using clinic.js for Automated Analysis
clinic.js is a suite of tools from NearForm that automates much of the profiling workflow. The clinic heapprofiler tool is particularly useful for memory leaks.
# Install globally
npm install -g clinic
# Profile your app under load
clinic heapprofiler -- node app.js
In a separate terminal, generate load against your application:
# Using autocannon (install with: npm install -g autocannon)
autocannon -c 50 -d 60 http://localhost:8080/api/users
After you stop the app (Ctrl+C), clinic generates an HTML report that shows heap allocation over time, broken down by object type and allocation site. It highlights the allocation points that contribute most to heap growth -- these are your leak candidates.
The output looks something like:
Analysing data
Generated HTML file is 98279.clinic-heapprofiler/98279.clinic-heapprofiler.html
Open the HTML file in a browser. The flamechart view shows where allocations are happening, and the timeline view shows how the heap grows over time.
Finding Leaks in Express.js Middleware Chains
Express.js middleware is a common source of leaks because middleware functions execute on every request and closures can capture request-scoped data in long-lived structures.
The Pattern
var requestLog = [];

// LEAK: Middleware that stores request data in a module-level array
app.use(function(req, res, next) {
  requestLog.push({
    method: req.method,
    url: req.url,
    timestamp: Date.now(),
    headers: req.headers, // Headers object is large
    body: req.body        // Could be very large
  });
  next();
});
This is obvious when isolated, but in a real codebase, it might be buried in a logging middleware, an analytics tracker, or a debugging tool that was supposed to be temporary.
Middleware-Specific Debugging Strategy
- Add memory logging middleware at the start and end of your middleware chain:
app.use(function(req, res, next) {
  req._memBefore = process.memoryUsage().heapUsed;
  next();
});

// ... all your other middleware ...

app.use(function(req, res, next) {
  var delta = process.memoryUsage().heapUsed - req._memBefore;
  if (delta > 1024 * 1024) { // More than 1 MB growth per request
    console.log("MEMORY_SPIKE", req.method, req.url, (delta / 1024 / 1024).toFixed(2) + " MB");
  }
  next();
});
- Use binary search -- disable half your middleware and check if the leak persists. Narrow down until you find the culprit.
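One way to make that binary search repeatable is to gate each middleware behind an environment variable so you can disable half the chain per run without editing code. A sketch -- the DISABLE_MW variable and the middleware names are assumptions for illustration:
function gated(name, mw) {
  var disabled = (process.env.DISABLE_MW || "").split(",");
  if (disabled.indexOf(name) !== -1) {
    return function(req, res, next) { next(); }; // No-op stand-in
  }
  return mw;
}

app.use(gated("requestLog", requestLogMiddleware));
app.use(gated("analytics", analyticsMiddleware));
// Run with: DISABLE_MW=requestLog,analytics node app.js, then repeat the load test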
WeakRef and FinalizationRegistry for Cache Patterns
Node.js 14+ supports WeakRef and FinalizationRegistry, which let you build caches that do not prevent garbage collection. This is the right tool when you want a cache that helps performance but should not cause memory pressure.
var cache = new Map();

var registry = new FinalizationRegistry(function(key) {
  // Called when the cached value is garbage collected
  var ref = cache.get(key);
  if (ref && ref.deref() === undefined) {
    cache.delete(key);
    console.log("Cache entry collected:", key);
  }
});

function getCachedUser(userId) {
  var ref = cache.get(userId);
  if (ref) {
    var value = ref.deref();
    if (value !== undefined) {
      return value; // Cache hit
    }
  }
  return null; // Cache miss
}

function setCachedUser(userId, user) {
  var ref = new WeakRef(user);
  cache.set(userId, ref);
  registry.register(user, userId);
}

// Usage
app.get("/users/:id", function(req, res) {
  var userId = req.params.id;
  var user = getCachedUser(userId);
  if (user) {
    return res.json(user);
  }
  db.users.findOne({ _id: userId }, function(err, user) {
    if (err) return res.status(500).json({ error: err.message });
    if (user) {
      setCachedUser(userId, user);
    }
    res.json(user);
  });
});
Important caveats with WeakRef:
- WeakRef only works with objects, not primitives. You cannot weakly reference a string or number.
- V8 makes no guarantees about when a weakly-referenced object will be collected. Do not rely on FinalizationRegistry for critical cleanup logic.
- For most production caches, an LRU cache with explicit TTL and max size is more predictable and easier to reason about.
Production Monitoring with Custom Metrics
In production, you need continuous visibility into memory behavior. Here is a monitoring setup that works with any metrics backend (Prometheus, StatsD, Datadog):
var v8 = require("v8");

function collectMemoryMetrics() {
  var mem = process.memoryUsage();
  var heap = v8.getHeapStatistics();
  return {
    heap_used_bytes: mem.heapUsed,
    heap_total_bytes: mem.heapTotal,
    rss_bytes: mem.rss,
    external_bytes: mem.external,
    heap_size_limit_bytes: heap.heap_size_limit,
    total_available_bytes: heap.total_available_size,
    gc_pressure_pct: ((1 - heap.total_available_size / heap.heap_size_limit) * 100).toFixed(1),
    detached_contexts: heap.number_of_detached_contexts
  };
}

// Expose as a Prometheus-compatible endpoint
app.get("/metrics", function(req, res) {
  var m = collectMemoryMetrics();
  var output = "";
  output += "# HELP nodejs_heap_used_bytes V8 heap used bytes\n";
  output += "# TYPE nodejs_heap_used_bytes gauge\n";
  output += "nodejs_heap_used_bytes " + m.heap_used_bytes + "\n";
  output += "# HELP nodejs_heap_total_bytes V8 total heap bytes\n";
  output += "# TYPE nodejs_heap_total_bytes gauge\n";
  output += "nodejs_heap_total_bytes " + m.heap_total_bytes + "\n";
  output += "# HELP nodejs_rss_bytes Resident set size bytes\n";
  output += "# TYPE nodejs_rss_bytes gauge\n";
  output += "nodejs_rss_bytes " + m.rss_bytes + "\n";
  output += "# HELP nodejs_gc_pressure_pct Percentage of heap limit used\n";
  output += "# TYPE nodejs_gc_pressure_pct gauge\n";
  output += "nodejs_gc_pressure_pct " + m.gc_pressure_pct + "\n";
  res.set("Content-Type", "text/plain");
  res.send(output);
});
Set alerts on:
- heap_used_bytes crossing 70% of heap_size_limit_bytes (warning) and 85% (critical)
- heap_used_bytes increasing consistently over 6+ hours (trend-based alert)
- detached_contexts greater than 0 for more than 10 minutes
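If your metrics backend cannot express trend alerts, a crude in-process check is better than nothing. This sketch (the 30-second interval and 20-sample window are arbitrary choices) warns when heapUsed has grown monotonically across an entire window:
var samples = [];
var WINDOW = 20; // 20 samples x 30 s = 10 minutes of monotonic growth

setInterval(function() {
  samples.push(process.memoryUsage().heapUsed);
  if (samples.length > WINDOW) samples.shift();
  var rising = samples.length === WINDOW && samples.every(function(v, i) {
    return i === 0 || v > samples[i - 1];
  });
  if (rising) {
    console.warn("HEAP_TREND_WARNING heapUsed rose for " + WINDOW + " consecutive samples");
  }
}, 30000);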
Complete Working Example: A Leaky Express.js Application
Let us build an Express.js application with three deliberate memory leaks, walk through the debugging process, and then fix each one.
The Leaky Application
// leaky-server.js
var express = require("express");
var EventEmitter = require("events");

var app = express();
var bus = new EventEmitter();

// ---- LEAK 1: Unbounded request history ----
var requestHistory = [];

app.use(function(req, res, next) {
  requestHistory.push({
    method: req.method,
    url: req.url,
    timestamp: Date.now(),
    headers: JSON.parse(JSON.stringify(req.headers)),
    ip: req.ip
  });
  next();
});

// ---- LEAK 2: Event listeners never removed ----
app.get("/events", function(req, res) {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive"
  });
  var handler = function(data) {
    res.write("data: " + JSON.stringify(data) + "\n\n");
  };
  bus.on("update", handler);
  // BUG: If the client disconnects, the handler is never removed
  // req.on("close", ...) is missing
});

// ---- LEAK 3: Closure retaining large buffer ----
var processors = {};

app.post("/process/:id", function(req, res) {
  var id = req.params.id;
  var largeBuffer = Buffer.alloc(1024 * 1024, "x"); // 1 MB buffer
  // Simulate processing
  var result = { id: id, checksum: largeBuffer.slice(0, 32).toString("hex") };
  // Closure captures entire scope, including largeBuffer
  processors[id] = function() {
    return result;
  };
  res.json(result);
});

// Emit updates periodically
setInterval(function() {
  bus.emit("update", { time: Date.now(), value: Math.random() });
}, 1000);

// Memory monitoring endpoint
app.get("/debug/memory", function(req, res) {
  var mem = process.memoryUsage();
  res.json({
    heapUsed: (mem.heapUsed / 1024 / 1024).toFixed(2) + " MB",
    heapTotal: (mem.heapTotal / 1024 / 1024).toFixed(2) + " MB",
    rss: (mem.rss / 1024 / 1024).toFixed(2) + " MB",
    requestHistoryLength: requestHistory.length,
    eventListenerCount: bus.listenerCount("update"),
    processorCount: Object.keys(processors).length
  });
});

app.listen(3000, function() {
  console.log("Leaky server running on port 3000");
});
Generating Load
# Install autocannon if you don't have it
npm install -g autocannon
# Hit the main endpoint 10,000 times
autocannon -c 20 -a 10000 http://localhost:3000/events
# Hit the process endpoint with unique IDs
for i in $(seq 1 5000); do
  curl -s -X POST http://localhost:3000/process/$i > /dev/null
done
# Check memory
curl http://localhost:3000/debug/memory
Expected output after load test:
{
  "heapUsed": "892.34 MB",
  "heapTotal": "921.47 MB",
  "rss": "958.22 MB",
  "requestHistoryLength": 15000,
  "eventListenerCount": 10000,
  "processorCount": 5000
}
The Debugging Process
Step 1: Confirm the leak exists.
Start the server with --inspect and monitor memory:
node --inspect leaky-server.js
Open chrome://inspect in Chrome, click "inspect," and go to the Memory tab.
Step 2: Capture baseline snapshot.
Take Snapshot 1 before any load. Note the heap size (approximately 8 MB for a fresh Express app).
Step 3: Generate load and capture second snapshot.
Run the load test above, force GC in DevTools (trash can icon), and take Snapshot 2. Heap is now around 450 MB.
Step 4: Analyze the delta.
In Snapshot 2, change the view dropdown from "Summary" to "Comparison" and select Snapshot 1 as the baseline. Sort by "Size Delta" descending.
You will see:
- (string) -- thousands of new strings (from request headers stored in requestHistory)
- (object) -- thousands of plain objects
- (closure) -- thousands of closures (the handler functions and processor functions)
- (array) -- the requestHistory array itself, plus internal arrays for the EventEmitter listener list
Step 5: Follow the retainer chain.
Click on any of the leaked closures. In the Retainers panel, you will see something like:
handler in function() @123456
[14523] in system / Array @789012
_events.update in EventEmitter @345678
bus in Object @901234
This tells you: handler is stored in an array at index 14523, that array is the _events.update property on the EventEmitter bus, which is a module-level variable. Now you know exactly where the leak is and why.
The Fixed Application
// fixed-server.js
var express = require("express");
var EventEmitter = require("events");
var LRU = require("lru-cache");

var app = express();
var bus = new EventEmitter();

// ---- FIX 1: Bounded circular buffer for request history ----
var REQUEST_HISTORY_MAX = 1000;
var requestHistory = [];

app.use(function(req, res, next) {
  if (requestHistory.length >= REQUEST_HISTORY_MAX) {
    requestHistory.shift();
  }
  requestHistory.push({
    method: req.method,
    url: req.url,
    timestamp: Date.now()
    // Removed headers and IP - only store what you actually need
  });
  next();
});

// ---- FIX 2: Remove event listeners on disconnect ----
app.get("/events", function(req, res) {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive"
  });
  var handler = function(data) {
    res.write("data: " + JSON.stringify(data) + "\n\n");
  };
  bus.on("update", handler);
  req.on("close", function() {
    bus.removeListener("update", handler);
    res.end();
  });
});

// ---- FIX 3: LRU cache with bounded size, no closure over large buffers ----
var processorCache = new LRU({ max: 200, ttl: 1000 * 60 * 10 });

app.post("/process/:id", function(req, res) {
  var id = req.params.id;
  var largeBuffer = Buffer.alloc(1024 * 1024, "x");
  var result = { id: id, checksum: largeBuffer.slice(0, 32).toString("hex") };
  // largeBuffer goes out of scope here and can be collected
  processorCache.set(id, result); // Store the result, not a closure
  res.json(result);
});

setInterval(function() {
  bus.emit("update", { time: Date.now(), value: Math.random() });
}, 1000);

app.get("/debug/memory", function(req, res) {
  var mem = process.memoryUsage();
  res.json({
    heapUsed: (mem.heapUsed / 1024 / 1024).toFixed(2) + " MB",
    heapTotal: (mem.heapTotal / 1024 / 1024).toFixed(2) + " MB",
    rss: (mem.rss / 1024 / 1024).toFixed(2) + " MB",
    requestHistoryLength: requestHistory.length,
    eventListenerCount: bus.listenerCount("update"),
    processorCacheSize: processorCache.size
  });
});

app.listen(3000, function() {
  console.log("Fixed server running on port 3000");
});
After the same load test on the fixed server:
{
  "heapUsed": "42.18 MB",
  "heapTotal": "55.93 MB",
  "rss": "78.44 MB",
  "requestHistoryLength": 1000,
  "eventListenerCount": 0,
  "processorCacheSize": 200
}
Memory stays flat at roughly 42 MB instead of climbing to 900+ MB. The three fixes reduced peak memory consumption by over 95%.
Common Issues and Troubleshooting
1. "FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory"
<--- Last few GCs --->
[12345:0x5629a40] 184234 ms: Mark-sweep 1496.3 (1520.1) -> 1495.8 (1520.1) MB, 1892.5 / 0.0 ms
[12345:0x5629a40] 186541 ms: Mark-sweep 1497.1 (1520.1) -> 1496.9 (1520.1) MB, 2298.1 / 0.0 ms
<--- JS stacktrace --->
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
This means V8 hit its heap limit and cannot allocate more memory. The GC lines above the error show Mark-sweep barely reclaiming any memory (1496.3 -> 1495.8 MB), confirming a leak. Increasing --max-old-space-size is a temporary bandage. You need to find and fix the leak.
Immediate action: Restart the process and capture a heap snapshot before it runs out of memory again. Add the memory monitoring endpoint shown earlier so you can see the growth rate.
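To be sure you get that snapshot before the next crash, you can arm a one-shot watchdog that writes a snapshot when heap usage crosses a threshold. A sketch -- the 85% threshold and 15-second interval are arbitrary choices, and note issue 3 below: the write itself needs memory headroom:
var v8 = require("v8");

var snapshotTaken = false;
setInterval(function() {
  var used = process.memoryUsage().heapUsed;
  var limit = v8.getHeapStatistics().heap_size_limit;
  if (!snapshotTaken && used / limit > 0.85) {
    snapshotTaken = true; // Write at most once - snapshots are expensive
    var file = v8.writeHeapSnapshot();
    console.error("Heap threshold crossed, snapshot written to", file);
  }
}, 15000);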
2. "MaxListenersExceededWarning: Possible EventEmitter memory leak detected"
(node:12345) MaxListenersExceededWarning: Possible EventEmitter memory leak detected.
11 update listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
This warning fires when more than 10 listeners are registered for a single event. Do NOT suppress it with emitter.setMaxListeners(0) unless you have verified the listeners are intentional. In most cases, this warning is telling you about a real bug -- you are adding listeners without removing them.
Debug it: Add logging to see where listeners are being added:
var originalOn = bus.on.bind(bus);
bus.on = function(event, listener) {
  console.log(
    "LISTENER_ADDED",
    event,
    "count=" + (bus.listenerCount(event) + 1),
    new Error().stack.split("\n")[2].trim()
  );
  return originalOn(event, listener);
};
3. Heap snapshot file is 0 bytes or truncated
$ ls -la /tmp/heapdump-*.heapsnapshot
-rw-r--r-- 1 node node 0 Feb 8 14:23 /tmp/heapdump-1707401023456.heapsnapshot
This happens when the process runs out of memory while writing the heap snapshot. Writing a snapshot requires additional memory (roughly 30-50% of the current heap size). If your process is already near its limit, the snapshot write will fail silently or produce a truncated file.
Fix: Either increase --max-old-space-size temporarily to give the snapshot writer room, or capture the snapshot earlier before memory pressure is critical. Alternatively, use v8.writeHeapSnapshot() (built into Node.js 12+), which is more robust:
var v8 = require("v8");
var filename = v8.writeHeapSnapshot();
console.log("Heap snapshot written to", filename);
4. Memory usage stays high after fixing the leak
Before fix: heapUsed = 800 MB
After fix + restart: heapUsed = 45 MB
After fix + NO restart: heapUsed = 750 MB (why??)
V8 does not return memory to the operating system aggressively. Even after a major GC, the heap total remains high because V8 keeps the allocated pages for future use. The heapUsed should drop, but heapTotal and rss may stay elevated. This is normal.
If you need V8 to release memory back to the OS, you can try:
node --max-old-space-size=512 --gc-global app.js
Or trigger manual GC programmatically (requires --expose-gc):
node --expose-gc app.js
if (global.gc) {
  global.gc();
  console.log("Manual GC triggered, heapUsed:", (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(2) + " MB");
}
Note: Never use --expose-gc in production unless you have a specific reason. Manual GC calls are stop-the-world events that will spike your latency.
5. External memory leak (Buffers, native addons)
If process.memoryUsage().external keeps growing but heapUsed is stable, you have a leak in native/C++ memory -- typically Buffers or native addon allocations that are not being freed.
// Check for Buffer leaks
setInterval(function() {
var mem = process.memoryUsage();
console.log("external_mb=" + (mem.external / 1024 / 1024).toFixed(2),
"arrayBuffers_mb=" + (mem.arrayBuffers / 1024 / 1024).toFixed(2));
}, 10000);
Common causes: streams that are not being consumed or destroyed, database drivers holding connection buffers, or image processing libraries not releasing intermediate buffers. Heap snapshots will not show these -- you need OS-level tools like valgrind (Linux) or the native memory profiler in your addon.
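For the stream case specifically, stream.pipeline (built into Node.js 10+) destroys every stream in the chain on error or premature close, avoiding the orphaned buffers that hand-rolled .pipe() chains leave behind. A sketch -- the route and file path are hypothetical:
var fs = require("fs");
var zlib = require("zlib");
var pipeline = require("stream").pipeline;

app.get("/export", function(req, res) {
  pipeline(
    fs.createReadStream("/tmp/large-export.csv"), // Hypothetical path
    zlib.createGzip(),
    res,
    function(err) {
      if (err) {
        // pipeline has already destroyed every stream in the chain,
        // so no buffers are left dangling
        console.error("Export stream failed:", err.message);
      }
    }
  );
});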
Best Practices
- Set heap size limits explicitly. Do not rely on the default. Use --max-old-space-size appropriate for your container's memory. For a container with 1 GB of RAM, set the heap limit to 512-700 MB, leaving room for the OS, native allocations, and the stack.
- Bound every cache. Every in-memory data structure that grows with input needs a maximum size. Use lru-cache, a circular buffer, or WeakRef. If you cannot determine a reasonable maximum, the data belongs in Redis or a database, not in-process memory.
- Always remove event listeners. For every .on() call, there should be a corresponding .removeListener() or .off() call in a cleanup path. Use the once() method for listeners that should only fire once. In request handlers, always clean up in the close event.
- Monitor memory in production from day one. Do not wait until you have a leak to add monitoring. Export process.memoryUsage() metrics to your monitoring system (Prometheus, Datadog, CloudWatch) and set up trend-based alerts. A slow leak that grows 1 MB per hour will take weeks to notice without automated monitoring.
- Avoid storing request data in module-level variables. This is the most common Express.js memory leak pattern. If you need request logging, write to a file or send to an external service. If you need an in-memory request log for debugging, use a circular buffer with a hard cap.
- Use streams for large payloads. Do not Buffer.concat() an entire request or response body into memory. Use pipe() to stream data through transforms. A single 100 MB file upload that gets buffered into memory is not a leak per se, but 50 concurrent uploads consuming 5 GB certainly behaves like one.
- Test for leaks in CI. Run your test suite with a memory budget. After all tests complete, check that process.memoryUsage().heapUsed is within an expected range. If your tests allocate 200 MB of heap, something is wrong. (A sketch follows this list.)
- Be careful with closures in hot paths. Closures in request handlers, middleware, and event listeners are created on every invocation. If they capture large objects from their enclosing scope, those objects cannot be collected until the closure itself is collected. Null out references you no longer need or restructure the code to avoid the capture.
- Profile before optimizing. Do not guess where the leak is. Take heap snapshots, use comparison views, follow retainer chains. The actual source of a leak is almost never where you first suspect it.
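To make the CI memory-budget check concrete, here is a minimal sketch as a Mocha-style after hook; the hook name and the 200 MB budget are assumptions to adapt to your test runner and suite:
// Run the test process with --expose-gc so global.gc is available
after(function() {
  if (global.gc) global.gc(); // Settle the heap before measuring
  var usedMb = process.memoryUsage().heapUsed / 1024 / 1024;
  var BUDGET_MB = 200; // Example budget - tune to your suite
  if (usedMb > BUDGET_MB) {
    throw new Error("Heap budget exceeded: " + usedMb.toFixed(1) + " MB > " + BUDGET_MB + " MB");
  }
});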
