Connection Pooling in Node.js: PostgreSQL and MongoDB
A practical guide to connection pooling in Node.js for PostgreSQL and MongoDB, covering pg.Pool, PgBouncer, Mongoose pooling, pool sizing formulas, monitoring, and graceful shutdown patterns.
Connection pooling is the single most impactful optimization you can make to a database-backed Node.js application. Without it, every query opens a new TCP connection, authenticates, executes, and tears down — a process that can take 20-50ms of pure overhead before your query even runs. This article covers how to properly configure, monitor, and troubleshoot connection pools for both PostgreSQL and MongoDB in Node.js applications.
Prerequisites
- Node.js v16+ installed
- Basic familiarity with Express.js
- A running PostgreSQL instance (v12+)
- A running MongoDB instance (v4.4+)
- npm packages: pg, mongoose, express

npm install pg mongoose express
Why Connection Pooling Matters
Every time your application connects to a database without pooling, this happens:
- TCP three-way handshake — SYN, SYN-ACK, ACK. That is roughly 0.5-1ms within the same data center, 20-80ms across regions.
- TLS negotiation — If you are using SSL (and you should be), add another 1-2 round trips. That is another 1-5ms locally, 40-160ms across regions.
- Authentication — PostgreSQL runs SCRAM-SHA-256 by default. MongoDB uses SCRAM-SHA-1 or SCRAM-SHA-256. Each requires multiple round trips. Another 2-10ms.
- Connection setup — PostgreSQL forks a new backend process per connection. MongoDB allocates a dedicated thread per connection. Memory and CPU are consumed on the server side.
- Teardown — FIN, ACK, FIN, ACK. Graceful TCP close is another round trip.
Add it up and you are spending 25-250ms per query just on connection overhead. With pooling, that overhead is paid once, and subsequent queries reuse existing connections in under 0.1ms.
Here is a quick benchmark to demonstrate the difference:
var { Pool, Client } = require("pg");
// Without pooling - new connection per query
async function withoutPool() {
var start = Date.now();
for (var i = 0; i < 100; i++) {
var client = new Client({
host: "localhost",
database: "testdb",
user: "appuser",
password: "secret"
});
await client.connect();
await client.query("SELECT 1");
await client.end();
}
console.log("Without pool: " + (Date.now() - start) + "ms");
}
// With pooling - reuse connections
async function withPool() {
var pool = new Pool({
host: "localhost",
database: "testdb",
user: "appuser",
password: "secret",
max: 10
});
var start = Date.now();
for (var i = 0; i < 100; i++) {
var client = await pool.connect();
await client.query("SELECT 1");
client.release();
}
console.log("With pool: " + (Date.now() - start) + "ms");
await pool.end();
}
withoutPool().then(function () {
return withPool();
});
Typical output on a local PostgreSQL instance:
Without pool: 4820ms
With pool: 48ms
That is a 100x improvement. On a remote database, the difference is even more dramatic.
Connection Limits Are Real
PostgreSQL defaults to max_connections = 100. MongoDB defaults to roughly 65,536 connections but practically falls over well before that. In production, you often have multiple application instances, background workers, and monitoring tools all competing for connections. Running out of connections means your application throws errors and drops requests.
A pool caps the maximum number of open connections per process, queues requests when all connections are busy, and recycles connections that have been idle too long. It is the gatekeeper between your application and your database.
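To see how much headroom you have before hitting the server's cap, you can compare pg_stat_activity against max_connections. This is a sketch: checkConnectionHeadroom is an illustrative helper name, not a pg API, and it works with any pg.Pool you pass in.

```javascript
// Compare current server-wide connection usage against max_connections.
// pg_stat_activity and pg_settings are standard PostgreSQL catalog views.
function checkConnectionHeadroom(pool) {
  return pool.query(
    "SELECT (SELECT count(*) FROM pg_stat_activity) AS used, " +
    "(SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS max"
  ).then(function (result) {
    var row = result.rows[0];
    var used = Number(row.used); // count(*) comes back as a string
    var max = Number(row.max);
    return { used: used, max: max, headroom: max - used };
  });
}

// checkConnectionHeadroom(pool).then(function (h) {
//   if (h.headroom < 10) {
//     console.warn("Only " + h.headroom + " of " + h.max + " connections left");
//   }
// });
```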
PostgreSQL Pooling with pg.Pool
The pg library for Node.js ships with a built-in pool implementation that is production-ready. Here is how to configure it properly:
var { Pool } = require("pg");
var pool = new Pool({
host: process.env.PG_HOST || "localhost",
port: parseInt(process.env.PG_PORT, 10) || 5432,
database: process.env.PG_DATABASE || "myapp",
user: process.env.PG_USER || "appuser",
password: process.env.PG_PASSWORD,
// Pool configuration
max: 20, // Maximum connections in the pool
min: 2, // Minimum idle connections to maintain
idleTimeoutMillis: 30000, // Close idle connections after 30 seconds
connectionTimeoutMillis: 5000, // Fail if can't connect within 5 seconds
maxUses: 7500, // Close connection after 7500 queries (prevents memory leaks)
allowExitOnIdle: false // Keep the pool alive even when idle
// SSL configuration for production
// ssl: {
// rejectUnauthorized: true,
// ca: fs.readFileSync("/path/to/ca-cert.pem").toString()
// }
});
Configuration Parameters Explained
max — The most critical parameter. This is the maximum number of connections the pool will create. Set it too low and queries queue up. Set it too high and you exhaust database server resources. Start with 20 per Node.js process and adjust based on monitoring.
min — Minimum number of idle connections to keep open. This eliminates cold-start latency when traffic picks up. Set this to 2-5 for most applications. Setting it to 0 means the pool will fully drain during low traffic, and the first request after a quiet period pays the connection cost.
idleTimeoutMillis — How long a connection can sit idle before the pool closes it. The default of 10000 (10 seconds) is aggressive. I recommend 30000 (30 seconds) for most applications. Set it higher if your traffic is bursty.
connectionTimeoutMillis — How long to wait for a connection before throwing an error. The default of 0 (wait forever) is dangerous in production. Always set this. 5000ms is a reasonable value.
maxUses — Added in the pg 8.x line. Closes a connection after a set number of queries. This is valuable for preventing slow memory leaks in long-running PostgreSQL backend processes. Set it to 5000-10000.
Using the Pool
There are two patterns for using the pool. The first is the checkout pattern, where you explicitly acquire and release connections:
var express = require("express");
var router = express.Router();
// Checkout pattern - explicit acquire and release
router.get("/users/:id", function (req, res, next) {
var client;
pool.connect()
.then(function (c) {
client = c;
return client.query("SELECT * FROM users WHERE id = $1", [req.params.id]);
})
.then(function (result) {
client.release();
if (result.rows.length === 0) {
return res.status(404).json({ error: "User not found" });
}
res.json(result.rows[0]);
})
.catch(function (err) {
if (client) {
client.release(true); // Pass true to destroy the connection on error
}
next(err);
});
});
The second pattern uses pool.query() directly, which handles checkout and release automatically:
// Convenience pattern - pool manages checkout/release
router.get("/users/:id", function (req, res, next) {
pool.query("SELECT * FROM users WHERE id = $1", [req.params.id])
.then(function (result) {
if (result.rows.length === 0) {
return res.status(404).json({ error: "User not found" });
}
res.json(result.rows[0]);
})
.catch(next);
});
Use pool.query() for single queries. Use the checkout pattern when you need transactions:
// Transaction pattern - checkout is required
async function transferFunds(fromId, toId, amount) {
var client = await pool.connect();
try {
await client.query("BEGIN");
await client.query(
"UPDATE accounts SET balance = balance - $1 WHERE id = $2",
[amount, fromId]
);
await client.query(
"UPDATE accounts SET balance = balance + $1 WHERE id = $2",
[amount, toId]
);
await client.query("COMMIT");
} catch (err) {
await client.query("ROLLBACK");
throw err;
} finally {
client.release();
}
}
Critical rule: Always release connections in a finally block or equivalent. A leaked connection is permanently removed from the pool. Leak enough of them and your pool is exhausted.
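One way to enforce that rule is a small wrapper that owns the checkout lifecycle, so callers can never forget to release. A minimal sketch, where withClient is an illustrative name rather than part of the pg API:

```javascript
// Acquire a client, run the callback, and guarantee release on every path.
// On error, release(true) destroys the connection instead of reusing it.
function withClient(pool, fn) {
  return pool.connect().then(function (client) {
    return Promise.resolve()
      .then(function () { return fn(client); })
      .then(function (result) {
        client.release();
        return result;
      })
      .catch(function (err) {
        client.release(true); // destroy the connection on error
        throw err;
      });
  });
}

// Usage:
// withClient(pool, function (client) {
//   return client.query("SELECT * FROM users WHERE id = $1", [42]);
// });
```

Because the wrapper starts from Promise.resolve(), even a synchronous throw inside the callback is routed through the catch branch and the connection is still released.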
External Pooling with PgBouncer
When you have multiple Node.js processes (cluster mode, multiple containers, multiple servers), each running its own pg.Pool, the total connection count multiplies fast. Ten processes with max: 20 means 200 connections hitting PostgreSQL. PgBouncer sits between your application and PostgreSQL, multiplexing many client connections over fewer server connections.
PgBouncer Configuration
; /etc/pgbouncer/pgbouncer.ini
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; Pool mode
pool_mode = transaction
; Pool sizing
default_pool_size = 25
min_pool_size = 5
max_client_conn = 1000
max_db_connections = 50
; Timeouts
server_idle_timeout = 600
client_idle_timeout = 0
query_timeout = 30
; Logging
log_connections = 1
log_disconnections = 1
log_pooler_errors = 1
Transaction Mode vs Session Mode
Transaction mode (pool_mode = transaction) — The server connection is returned to the pool after each transaction completes. This gives you the best connection utilization. Use this for stateless applications (which is most Node.js apps). However, you cannot use prepared statements, SET commands, LISTEN/NOTIFY, or advisory locks across queries.
Session mode (pool_mode = session) — The server connection is held for the entire client session. This is essentially pass-through pooling. It is less efficient but supports all PostgreSQL features. Use this only if you need prepared statements or session-level features.
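In the pg library, a query config that includes a name property creates a named prepared statement bound to one server session, which is exactly what transaction mode breaks. If you adopt transaction pooling with existing code, one hedge is to strip the name so queries go out as plain parameterized statements. The unnamed helper below is illustrative, not a pg API:

```javascript
// Remove the "name" property from a pg query config so the driver sends
// an unnamed parameterized query instead of a session-bound prepared one.
function unnamed(queryConfig) {
  var copy = {};
  Object.keys(queryConfig).forEach(function (key) {
    if (key !== "name") {
      copy[key] = queryConfig[key];
    }
  });
  return copy;
}

// Risky behind transaction-mode PgBouncer:
// pool.query({ name: "fetch-user", text: "SELECT * FROM users WHERE id = $1", values: [id] });
// Safe:
// pool.query(unnamed({ name: "fetch-user", text: "SELECT * FROM users WHERE id = $1", values: [id] }));
```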
When using PgBouncer in transaction mode, configure your Node.js pool with smaller sizes since PgBouncer handles the multiplexing:
var pool = new Pool({
host: "localhost",
port: 6432, // PgBouncer port, not PostgreSQL
database: "myapp",
user: "appuser",
password: process.env.PG_PASSWORD,
max: 5, // Smaller pool size - PgBouncer handles multiplexing
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 5000
});
Docker Compose with PgBouncer
version: "3.8"
services:
postgres:
image: postgres:16
environment:
POSTGRES_DB: myapp
POSTGRES_USER: appuser
POSTGRES_PASSWORD: secret
volumes:
- pgdata:/var/lib/postgresql/data
pgbouncer:
image: edoburu/pgbouncer:1.22.0
environment:
DATABASE_URL: postgres://appuser:secret@postgres:5432/myapp
POOL_MODE: transaction
DEFAULT_POOL_SIZE: 25
MAX_CLIENT_CONN: 500
MAX_DB_CONNECTIONS: 50
ports:
      - "6432:5432" # host 6432 maps to the container's default listen port 5432
depends_on:
- postgres
app:
build: .
environment:
PG_HOST: pgbouncer
PG_PORT: 5432
PG_DATABASE: myapp
PG_USER: appuser
PG_PASSWORD: secret
depends_on:
- pgbouncer
volumes:
pgdata:
MongoDB Connection Pooling with Mongoose
Mongoose manages connection pooling for you through the underlying MongoDB Node.js driver. The pool is created when you call mongoose.connect() and shared across all models.
var mongoose = require("mongoose");
var mongoOptions = {
// Pool configuration
maxPoolSize: 10, // Maximum connections in the pool (default: 100)
minPoolSize: 2, // Minimum connections to maintain (default: 0)
// Timeouts
serverSelectionTimeoutMS: 5000, // How long to try selecting a server
socketTimeoutMS: 45000, // How long a socket can be idle before closing
connectTimeoutMS: 10000, // How long to wait for initial connection
maxIdleTimeMS: 30000, // How long a connection can be idle
// Write concern
w: "majority",
wtimeoutMS: 2500,
// Retry
retryWrites: true,
retryReads: true
};
mongoose.connect(process.env.MONGODB_URI, mongoOptions)
.then(function () {
console.log("MongoDB connected with pool size: " + mongoOptions.maxPoolSize);
})
.catch(function (err) {
console.error("MongoDB connection failed:", err.message);
process.exit(1);
});
Key Mongoose/MongoDB Pool Parameters
maxPoolSize — The default changed from 5 (in older driver versions) to 100 in the v4.0+ driver. A default of 100 is way too high for most Node.js applications. I recommend starting with 10 per process and adjusting upward based on monitoring.
minPoolSize — Defaults to 0. Setting this to 2-5 means the driver maintains warm connections ready for immediate use. Worth doing if your application handles steady traffic.
serverSelectionTimeoutMS — How long the driver tries to find a suitable server (primary, secondary, etc.) before failing. The default of 30000 (30 seconds) is too long. A request hanging for 30 seconds is useless to the user. Set it to 5000.
maxIdleTimeMS — Introduced in driver v4.0. Closes connections that have been idle for this duration. Set to 30000-60000 for most applications. This is particularly important in environments where the database limits the total number of connections.
MongoDB Native Driver Pool Configuration
If you are using the native MongoDB driver without Mongoose, pool configuration works the same way via MongoClient options:
var { MongoClient } = require("mongodb");
var client = new MongoClient(process.env.MONGODB_URI, {
maxPoolSize: 10,
minPoolSize: 2,
maxIdleTimeMS: 30000,
serverSelectionTimeoutMS: 5000,
connectTimeoutMS: 10000,
socketTimeoutMS: 45000,
retryWrites: true,
retryReads: true,
w: "majority"
});
client.connect()
.then(function () {
var db = client.db("myapp");
console.log("Connected to MongoDB");
})
.catch(function (err) {
console.error("Failed to connect:", err.message);
process.exit(1);
});
One important difference: the native driver creates a separate pool per mongos/replica set member. If you have a 3-member replica set and maxPoolSize: 10, the driver can open up to 30 connections total (10 per member). Keep this in mind when calculating total connection usage.
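A quick worked estimate of the worst case, under the assumption of roughly one monitoring connection per process per member (the exact overhead varies by driver version):

```javascript
// Worst-case estimate: each process can open up to maxPoolSize connections
// to EACH replica set member, plus a small monitoring allowance.
function estimateMaxConnections(processes, maxPoolSize, members) {
  var appConnections = processes * maxPoolSize * members;
  var monitoringConnections = processes * members; // assumed: one per process per member
  return appConnections + monitoringConnections;
}

// 4 processes, maxPoolSize 10, 3-member replica set:
// estimateMaxConnections(4, 10, 3) → 132
```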
Pool Sizing Formulas
Getting pool size right should be science, not guesswork. Here are the formulas I use:
PostgreSQL Pool Sizing
The general formula for a single-process Node.js application:
pool_size = (number_of_cpu_cores * 2) + number_of_disks
For a 4-core machine with SSDs, that gives you (4 * 2) + 1 = 9. Round up to 10.
But you also need to account for multiple processes:
max_connections_per_process = total_pg_max_connections / number_of_processes
If PostgreSQL has max_connections = 100 and you have 4 Node.js processes plus monitoring and migration tools, a safe allocation is:
Usable connections = 100 - 10 (reserved for admin/monitoring) = 90
Per process = 90 / 4 = 22 connections per process
Set max: 20 per pool to leave some headroom.
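The allocation above reduces to a one-line helper. The reserved count is an assumption to match against your own admin and monitoring needs:

```javascript
// Per-process pool size: subtract reserved admin/monitoring connections,
// then split what remains evenly across Node.js processes.
function poolSizePerProcess(pgMaxConnections, processes, reservedConnections) {
  var usable = pgMaxConnections - reservedConnections;
  return Math.floor(usable / processes);
}

// poolSizePerProcess(100, 4, 10) → 22
```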
MongoDB Pool Sizing
MongoDB connections are lighter weight than PostgreSQL connections, but the same principles apply:
max_pool_size = ceil(expected_concurrent_operations / number_of_processes)
If you expect 50 concurrent database operations and run 4 processes, that is ceil(50 / 4) = 13 connections per process. Round to 15.
For MongoDB Atlas, check your tier's connection limit. An M10 instance allows 1,500 connections. An M0 (free tier) allows only 500.
Queue Depth Considerations
When all connections in the pool are busy, new requests queue up. The queue length tells you if your pool is undersized:
// PostgreSQL pool queue monitoring
setInterval(function () {
console.log("PG Pool Stats:", {
total: pool.totalCount, // Total connections (idle + busy)
idle: pool.idleCount, // Connections available for use
waiting: pool.waitingCount // Requests queued waiting for a connection
});
}, 5000);
If waitingCount is consistently above 0, your pool is too small or your queries are too slow. If idleCount is consistently equal to totalCount, your pool is oversized and wasting database resources.
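When the queue does back up, failing fast often beats queueing indefinitely. Here is a sketch of an Express middleware that sheds load above a threshold; shedWhenSaturated and the threshold value are illustrative, to be tuned against your own waitingCount history:

```javascript
// Return 503 when the pool's wait queue exceeds maxWaiting, instead of
// letting requests pile up behind a saturated pool.
function shedWhenSaturated(pool, maxWaiting) {
  return function (req, res, next) {
    if (pool.waitingCount > maxWaiting) {
      res.status(503).json({ error: "Database busy, try again shortly" });
      return;
    }
    next();
  };
}

// app.use(shedWhenSaturated(pgPool, 10));
```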
Monitoring Pool Health
Production applications must monitor pool health. Here is a comprehensive monitoring module:
// poolMonitor.js
var EventEmitter = require("events");
function PoolMonitor(pgPool, mongoConnection, options) {
  EventEmitter.call(this); // initialize EventEmitter internals
  this.pgPool = pgPool;
  this.mongoConnection = mongoConnection;
  this.interval = (options && options.interval) || 10000;
  this.history = [];
  this.maxHistory = (options && options.maxHistory) || 360; // 1 hour at 10s intervals
  this.timer = null;
}
PoolMonitor.prototype = Object.create(EventEmitter.prototype);
PoolMonitor.prototype.constructor = PoolMonitor;
PoolMonitor.prototype.start = function () {
var self = this;
this.timer = setInterval(function () {
var stats = self.collect();
self.history.push(stats);
if (self.history.length > self.maxHistory) {
self.history.shift();
}
self.emit("stats", stats);
// Alert on concerning conditions
if (stats.pg.waiting > 5) {
self.emit("warning", {
type: "pg_queue_depth",
message: "PostgreSQL pool has " + stats.pg.waiting + " waiting requests",
stats: stats.pg
});
}
}, this.interval);
};
PoolMonitor.prototype.stop = function () {
if (this.timer) {
clearInterval(this.timer);
this.timer = null;
}
};
PoolMonitor.prototype.collect = function () {
var pgStats = {
total: this.pgPool.totalCount,
idle: this.pgPool.idleCount,
waiting: this.pgPool.waitingCount,
utilization: this.pgPool.totalCount > 0
? ((this.pgPool.totalCount - this.pgPool.idleCount) / this.pgPool.totalCount * 100).toFixed(1) + "%"
: "0%"
};
  var mongoStats = {};
  if (this.mongoConnection) {
    mongoStats = {
      readyState: this.mongoConnection.readyState, // 0=disconnected, 1=connected, 2=connecting, 3=disconnecting
      host: this.mongoConnection.host,
      port: this.mongoConnection.port
    };
  }
return {
timestamp: new Date().toISOString(),
pg: pgStats,
mongo: mongoStats
};
};
PoolMonitor.prototype.getStats = function () {
return {
current: this.collect(),
history: this.history.slice(-10) // Last 10 snapshots
};
};
module.exports = PoolMonitor;
Pool Events and Error Handling
Both pg.Pool and Mongoose emit events that you should listen to in production:
PostgreSQL Pool Events
var pool = new Pool({ /* config */ });
// Connection acquired from pool
pool.on("acquire", function (client) {
// Called each time a client is checked out
});
// New connection created
pool.on("connect", function (client) {
console.log("New PG connection created. Total:", pool.totalCount);
});
// Connection returned to pool
pool.on("release", function (err, client) {
if (err) {
console.error("Connection released with error:", err.message);
}
});
// Connection removed from pool
pool.on("remove", function (client) {
console.log("PG connection removed. Total:", pool.totalCount);
});
// Critical: handle pool errors
pool.on("error", function (err, client) {
console.error("Unexpected PG pool error:", err.message);
// Don't exit - the pool will handle reconnection
// But do alert your monitoring system
});
Mongoose Connection Events
var mongoose = require("mongoose");
mongoose.connection.on("connected", function () {
console.log("Mongoose connected to MongoDB");
});
mongoose.connection.on("disconnected", function () {
console.warn("Mongoose disconnected from MongoDB");
});
mongoose.connection.on("reconnected", function () {
console.log("Mongoose reconnected to MongoDB");
});
mongoose.connection.on("error", function (err) {
console.error("Mongoose connection error:", err.message);
});
// Monitor the underlying driver's connection pool events
mongoose.connection.on("open", function () {
var client = mongoose.connection.getClient();
client.on("connectionPoolCreated", function (event) {
console.log("MongoDB pool created:", event.address);
});
client.on("connectionPoolCleared", function (event) {
console.log("MongoDB pool cleared:", event.address);
});
client.on("connectionCheckedOut", function (event) {
// High frequency - use for debugging only
});
client.on("connectionCheckOutFailed", function (event) {
console.error("MongoDB checkout failed:", event.reason);
});
});
Graceful Shutdown and Connection Draining
When your application shuts down (deploy, scale down, crash recovery), you need to drain connections properly. Abruptly killing connections can leave transactions in a bad state and orphan server-side resources.
// gracefulShutdown.js
function setupGracefulShutdown(server, pgPool, mongooseConnection) {
var isShuttingDown = false;
function shutdown(signal) {
if (isShuttingDown) return;
isShuttingDown = true;
console.log("Received " + signal + ". Starting graceful shutdown...");
    // Step 1: Set a hard timeout so shutdown can never hang indefinitely
    var forceTimeout = setTimeout(function () {
      console.error("Graceful shutdown timed out. Forcing exit.");
      process.exit(1);
    }, 30000); // 30 second hard limit
    // Step 2: Stop accepting new connections. The callback fires only
    // after in-flight requests have completed.
    server.close(function () {
      console.log("HTTP server closed. No new requests accepted.");
      // Step 3: Drain database connections
      var pgDrain = pgPool.end().then(function () {
        console.log("PostgreSQL pool drained.");
      }).catch(function (err) {
        console.error("Error draining PG pool:", err.message);
      });
      var mongoDrain = mongooseConnection.close(false).then(function () {
        console.log("MongoDB connection closed.");
      }).catch(function (err) {
        console.error("Error closing MongoDB:", err.message);
      });
      Promise.all([pgDrain, mongoDrain])
        .then(function () {
          console.log("All connections closed. Exiting cleanly.");
          clearTimeout(forceTimeout);
          process.exit(0);
        })
        .catch(function (err) {
          console.error("Error during shutdown:", err.message);
          clearTimeout(forceTimeout);
          process.exit(1);
        });
    });
}
process.on("SIGTERM", function () { shutdown("SIGTERM"); });
process.on("SIGINT", function () { shutdown("SIGINT"); });
// Handle uncaught exceptions gracefully
process.on("uncaughtException", function (err) {
console.error("Uncaught exception:", err);
shutdown("uncaughtException");
});
}
module.exports = setupGracefulShutdown;
The order matters: stop accepting HTTP requests first, then wait for in-flight requests to complete, then drain database pools. The hard timeout prevents the process from hanging indefinitely if a connection refuses to close.
Connection Pooling in Serverless Environments
Serverless environments (AWS Lambda, Vercel Functions, DigitalOcean Functions) create unique challenges for connection pooling:
- Cold starts create new connections on every function invocation after idle timeout
- Concurrency scaling can spawn hundreds of function instances, each with its own pool
- Frozen containers hold connections open but idle, consuming database slots
PostgreSQL in Serverless
Use an external pooler like PgBouncer, Supabase connection pooler, or Neon's built-in pooler. Never let Lambda functions connect directly to PostgreSQL.
// Lambda handler with connection reuse
var { Pool } = require("pg");
var pool;
function getPool() {
if (!pool) {
pool = new Pool({
connectionString: process.env.DATABASE_URL, // PgBouncer URL
max: 1, // Single connection per Lambda instance
idleTimeoutMillis: 120000, // Match Lambda freeze duration
connectionTimeoutMillis: 5000
});
}
return pool;
}
exports.handler = async function (event) {
var p = getPool();
var result = await p.query("SELECT * FROM products WHERE active = true");
return {
statusCode: 200,
body: JSON.stringify(result.rows)
};
};
Key points: use max: 1 per Lambda instance, declare the pool outside the handler for container reuse, and always route through a connection pooler.
MongoDB in Serverless
MongoDB Atlas Serverless instances handle connection pooling automatically. For standard clusters, cache the connection outside the handler:
var mongoose = require("mongoose");
var cachedConnection = null;
async function connectToMongo() {
if (cachedConnection && cachedConnection.readyState === 1) {
return cachedConnection;
}
await mongoose.connect(process.env.MONGODB_URI, {
maxPoolSize: 1,
serverSelectionTimeoutMS: 5000,
bufferCommands: false // Fail fast if not connected
});
cachedConnection = mongoose.connection;
return cachedConnection;
}
exports.handler = async function (event) {
await connectToMongo();
var Product = mongoose.model("Product");
var products = await Product.find({ active: true }).lean();
return {
statusCode: 200,
body: JSON.stringify(products)
};
};
Multi-Tenant Pooling Strategies
If your application serves multiple tenants, each with their own database, you need a strategy for managing pools across tenants.
Pool-per-Tenant
// tenantPools.js
var { Pool } = require("pg");
var pools = {};
var MAX_POOLS = 50; // Cap total pools to prevent resource exhaustion
function getPoolForTenant(tenantId, connectionConfig) {
if (pools[tenantId]) {
pools[tenantId].lastAccess = Date.now();
return pools[tenantId].pool;
}
// Evict least recently used pool if at capacity
var poolKeys = Object.keys(pools);
if (poolKeys.length >= MAX_POOLS) {
var oldest = poolKeys.reduce(function (a, b) {
return pools[a].lastAccess < pools[b].lastAccess ? a : b;
});
console.log("Evicting pool for tenant:", oldest);
pools[oldest].pool.end();
delete pools[oldest];
}
var pool = new Pool({
host: connectionConfig.host,
database: connectionConfig.database,
user: connectionConfig.user,
password: connectionConfig.password,
max: 5, // Smaller per-tenant pool
idleTimeoutMillis: 60000,
connectionTimeoutMillis: 5000
});
pool.on("error", function (err) {
console.error("Pool error for tenant " + tenantId + ":", err.message);
});
pools[tenantId] = {
pool: pool,
lastAccess: Date.now()
};
return pool;
}
// Cleanup idle pools periodically
setInterval(function () {
var now = Date.now();
var tenantIds = Object.keys(pools);
tenantIds.forEach(function (tenantId) {
if (now - pools[tenantId].lastAccess > 300000) { // 5 minutes idle
console.log("Closing idle pool for tenant:", tenantId);
pools[tenantId].pool.end();
delete pools[tenantId];
}
});
}, 60000);
function closeAllPools() {
var tenantIds = Object.keys(pools);
return Promise.all(tenantIds.map(function (tenantId) {
return pools[tenantId].pool.end();
})).then(function () {
pools = {};
});
}
module.exports = { getPoolForTenant: getPoolForTenant, closeAllPools: closeAllPools };
Schema-per-Tenant (Single Pool)
A simpler approach if all tenants share one PostgreSQL instance: use a single pool and switch schemas per request.
var { Pool } = require("pg");
var pool = new Pool({
host: "localhost",
database: "multi_tenant_app",
user: "appuser",
password: process.env.PG_PASSWORD,
max: 20
});
function queryForTenant(tenantSchema, sql, params) {
return pool.connect().then(function (client) {
return client.query("SET search_path TO " + client.escapeIdentifier(tenantSchema) + ", public")
.then(function () {
return client.query(sql, params);
})
.then(function (result) {
client.release();
return result;
})
.catch(function (err) {
client.release(true); // Destroy connection to reset search_path
throw err;
});
});
}
Note that with PgBouncer in transaction mode, you must issue SET search_path and your query in the same transaction, since the connection may be reassigned between statements.
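Under that constraint, a safer variant wraps SET LOCAL search_path and the query in one explicit transaction. SET LOCAL also resets automatically at COMMIT or ROLLBACK, so the connection returns to the pool clean. A sketch, where queryForTenantTx is an illustrative name:

```javascript
// Run SET LOCAL search_path and the tenant query in a single transaction,
// guaranteeing both statements hit the same server connection even behind
// a transaction-mode pooler.
async function queryForTenantTx(pool, tenantSchema, sql, params) {
  var client = await pool.connect();
  try {
    await client.query("BEGIN");
    await client.query(
      "SET LOCAL search_path TO " + client.escapeIdentifier(tenantSchema) + ", public"
    );
    var result = await client.query(sql, params);
    await client.query("COMMIT");
    return result;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release(); // search_path already reset by COMMIT/ROLLBACK
  }
}
```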
Complete Working Example
Here is a full Express.js application with both PostgreSQL and MongoDB pools properly configured, monitored, and shut down gracefully:
// app.js
var express = require("express");
var { Pool } = require("pg");
var mongoose = require("mongoose");
var PoolMonitor = require("./poolMonitor");
var app = express();
app.use(express.json());
// ============================================================
// PostgreSQL Pool
// ============================================================
var pgPool = new Pool({
host: process.env.PG_HOST || "localhost",
port: parseInt(process.env.PG_PORT, 10) || 5432,
database: process.env.PG_DATABASE || "myapp",
user: process.env.PG_USER || "appuser",
password: process.env.PG_PASSWORD || "secret",
max: 20,
min: 2,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 5000,
maxUses: 7500
});
pgPool.on("error", function (err) {
console.error("PostgreSQL pool error:", err.message);
});
pgPool.on("connect", function () {
console.log("New PG connection. Total:", pgPool.totalCount);
});
// ============================================================
// MongoDB Connection
// ============================================================
mongoose.connect(process.env.MONGODB_URI || "mongodb://localhost:27017/myapp", {
maxPoolSize: 10,
minPoolSize: 2,
serverSelectionTimeoutMS: 5000,
socketTimeoutMS: 45000,
connectTimeoutMS: 10000,
maxIdleTimeMS: 30000,
retryWrites: true,
retryReads: true
}).then(function () {
console.log("MongoDB connected");
}).catch(function (err) {
console.error("MongoDB connection failed:", err.message);
process.exit(1);
});
mongoose.connection.on("disconnected", function () {
console.warn("MongoDB disconnected");
});
mongoose.connection.on("reconnected", function () {
console.log("MongoDB reconnected");
});
// ============================================================
// Pool Monitor
// ============================================================
var monitor = new PoolMonitor(pgPool, mongoose.connection, { interval: 10000 });
monitor.start();
monitor.on("warning", function (warning) {
console.warn("POOL WARNING:", warning.message);
});
// ============================================================
// Mongoose Model
// ============================================================
var logSchema = new mongoose.Schema({
action: String,
userId: Number,
timestamp: { type: Date, default: Date.now },
metadata: mongoose.Schema.Types.Mixed
});
var Log = mongoose.model("Log", logSchema);
// ============================================================
// Routes
// ============================================================
// Get user from PostgreSQL
app.get("/api/users/:id", function (req, res, next) {
pgPool.query("SELECT id, name, email, created_at FROM users WHERE id = $1", [req.params.id])
.then(function (result) {
if (result.rows.length === 0) {
return res.status(404).json({ error: "User not found" });
}
// Log access to MongoDB
var log = new Log({
action: "user_view",
userId: parseInt(req.params.id, 10),
metadata: { ip: req.ip }
});
log.save().catch(function (err) {
console.error("Failed to save log:", err.message);
});
res.json(result.rows[0]);
})
.catch(next);
});
// Search users in PostgreSQL
app.get("/api/users", function (req, res, next) {
var search = req.query.q || "";
var limit = Math.min(parseInt(req.query.limit, 10) || 20, 100);
var offset = parseInt(req.query.offset, 10) || 0;
pgPool.query(
"SELECT id, name, email, created_at FROM users WHERE name ILIKE $1 ORDER BY created_at DESC LIMIT $2 OFFSET $3",
["%" + search + "%", limit, offset]
)
.then(function (result) {
res.json({
users: result.rows,
count: result.rows.length,
offset: offset
});
})
.catch(next);
});
// Get recent logs from MongoDB
app.get("/api/logs", function (req, res, next) {
var limit = Math.min(parseInt(req.query.limit, 10) || 50, 200);
Log.find()
.sort({ timestamp: -1 })
.limit(limit)
.lean()
.then(function (logs) {
res.json(logs);
})
.catch(next);
});
// ============================================================
// Health Check Endpoint with Pool Statistics
// ============================================================
app.get("/health", function (req, res) {
  // pg.Pool has no synchronous "connected" flag; treat an empty wait queue
  // or available idle connections as healthy. For a definitive check,
  // issue a cheap query such as SELECT 1 instead.
  var pgHealthy = pgPool.waitingCount === 0 || pgPool.idleCount > 0;
var mongoHealthy = mongoose.connection.readyState === 1;
var health = {
status: (pgHealthy && mongoHealthy) ? "healthy" : "degraded",
timestamp: new Date().toISOString(),
uptime: process.uptime(),
pools: {
postgresql: {
status: pgHealthy ? "connected" : "disconnected",
totalConnections: pgPool.totalCount,
idleConnections: pgPool.idleCount,
waitingRequests: pgPool.waitingCount,
utilization: pgPool.totalCount > 0
? ((pgPool.totalCount - pgPool.idleCount) / pgPool.totalCount * 100).toFixed(1) + "%"
: "0%",
config: {
max: 20,
min: 2,
idleTimeoutMs: 30000
}
},
mongodb: {
status: ["disconnected", "connected", "connecting", "disconnecting"][mongoose.connection.readyState],
readyState: mongoose.connection.readyState,
host: mongoose.connection.host,
name: mongoose.connection.name,
config: {
maxPoolSize: 10,
minPoolSize: 2
}
}
},
memory: {
rss: (process.memoryUsage().rss / 1024 / 1024).toFixed(1) + " MB",
heapUsed: (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(1) + " MB"
}
};
var statusCode = (pgHealthy && mongoHealthy) ? 200 : 503;
res.status(statusCode).json(health);
});
// Pool statistics history
app.get("/health/pools/history", function (req, res) {
res.json(monitor.getStats());
});
// ============================================================
// Error Handling
// ============================================================
app.use(function (err, req, res, next) {
console.error("Request error:", err.message);
res.status(500).json({ error: "Internal server error" });
});
// ============================================================
// Start Server with Graceful Shutdown
// ============================================================
var PORT = parseInt(process.env.PORT, 10) || 3000;
var server = app.listen(PORT, function () {
console.log("Server running on port " + PORT);
});
// Graceful shutdown
var isShuttingDown = false;
function shutdown(signal) {
if (isShuttingDown) return;
isShuttingDown = true;
console.log("\nReceived " + signal + ". Shutting down gracefully...");
// Hard timeout: never hang forever waiting for connections to drain
var forceExit = setTimeout(function () {
console.error("Forced exit after timeout.");
process.exit(1);
}, 30000);
// Stop monitoring
monitor.stop();
// Stop accepting new requests; drain pools only after in-flight
// requests finish, so their queries can still use the connections
server.close(function () {
console.log("HTTP server closed.");
Promise.all([
pgPool.end().then(function () {
console.log("PostgreSQL pool drained. All connections closed.");
}),
mongoose.connection.close(false).then(function () {
console.log("MongoDB connection closed.");
})
]).then(function () {
clearTimeout(forceExit);
console.log("Clean shutdown complete.");
process.exit(0);
}).catch(function (err) {
console.error("Error during shutdown:", err.message);
clearTimeout(forceExit);
process.exit(1);
});
});
}
process.on("SIGTERM", function () { shutdown("SIGTERM"); });
process.on("SIGINT", function () { shutdown("SIGINT"); });
module.exports = app;
Example health check response:
{
"status": "healthy",
"timestamp": "2026-02-08T14:30:00.000Z",
"uptime": 3847.22,
"pools": {
"postgresql": {
"status": "connected",
"totalConnections": 8,
"idleConnections": 5,
"waitingRequests": 0,
"utilization": "37.5%",
"config": {
"max": 20,
"min": 2,
"idleTimeoutMs": 30000
}
},
"mongodb": {
"status": "connected",
"readyState": 1,
"host": "cluster0-shard-00-02.abc123.mongodb.net",
"name": "myapp",
"config": {
"maxPoolSize": 10,
"minPoolSize": 2
}
}
},
"memory": {
"rss": "82.4 MB",
"heapUsed": "34.2 MB"
}
}
Common Issues and Troubleshooting
1. "Connection terminated unexpectedly"
Error: Connection terminated unexpectedly
at Connection.con.once (/app/node_modules/pg/lib/client.js:132:73)
This happens when PostgreSQL kills a connection (idle timeout, pg_terminate_backend(), server restart). The pool should handle this automatically, but if you see it frequently, check:
- PostgreSQL's idle_in_transaction_session_timeout setting
- PgBouncer's server_idle_timeout
- Network load balancer idle timeouts (AWS ALB defaults to 60 seconds)
- Your pool's idleTimeoutMillis — set it lower than the server's timeout
Fix: Add keepalive to your pool configuration:
var pool = new Pool({
// ... other config
keepAlive: true,
keepAliveInitialDelayMillis: 10000
});
2. "sorry, too many clients already"
error: sorry, too many clients already
at Parser.parseErrorMessage (/app/node_modules/pg-protocol/src/parser.ts:369:69)
PostgreSQL has hit max_connections. This usually means you have too many processes with oversized pools, or connections are leaking.
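The fleet-level arithmetic is easy to get wrong: potential connection demand is instances × workers × pool max, and it has to stay below max_connections with headroom for superuser and admin sessions. A quick sanity check (all numbers illustrative):

```javascript
// Fleet-level connection budget check (illustrative numbers).
var instances = 8;          // app servers behind the load balancer
var workersPerInstance = 4; // e.g. Node cluster module workers
var poolMax = 10;           // pg.Pool "max" per worker process
var maxConnections = 100;   // PostgreSQL max_connections
var reserved = 10;          // superuser_reserved_connections + admin headroom

// Worst case: every worker fills its pool at once
var demand = instances * workersPerInstance * poolMax;

if (demand > maxConnections - reserved) {
  console.warn(
    "Pool oversubscription: " + demand + " potential connections vs " +
    (maxConnections - reserved) + " available. Shrink pools or add PgBouncer."
  );
}
```

With these numbers the worst case is 320 potential connections against 90 available slots, which is exactly the situation that produces this error under load.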
Diagnosis:
-- Check current connections
SELECT count(*), usename, application_name, state
FROM pg_stat_activity
GROUP BY usename, application_name, state
ORDER BY count DESC;
-- Check max connections setting
SHOW max_connections;
Fix: Reduce max in your pool config, add PgBouncer, or increase max_connections in postgresql.conf (requires restart). Also audit your code for connection leaks — any pool.connect() without a corresponding client.release().
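A reliable way to prevent the leak in the first place is to funnel every checkout through a small helper that guarantees release on both success and failure. withClient below is a hypothetical helper, not part of pg's API; it works with any pool exposing a promise-returning connect():

```javascript
// Hypothetical helper: checks out a client, runs `work`, and always
// releases the client, even when `work` throws or rejects.
function withClient(pool, work) {
  return pool.connect().then(function (client) {
    return Promise.resolve()
      .then(function () { return work(client); })
      .then(function (result) {
        client.release();     // healthy connection goes back to the pool
        return result;
      }, function (err) {
        client.release(true); // destroy a connection in an unknown state
        throw err;
      });
  });
}
```

Query sites then call withClient(pool, function (client) { return client.query("SELECT 1"); }) and can no longer forget the release.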
3. "MongoServerSelectionError: connection timed out"
MongoServerSelectionError: connect ETIMEDOUT 10.0.1.42:27017
at Timeout._onTimeout (/app/node_modules/mongodb/lib/sdam/topology.js:293:38)
The MongoDB driver cannot reach any server within serverSelectionTimeoutMS. Common causes:
- Network/firewall issues (security group not allowing port 27017)
- MongoDB Atlas IP whitelist does not include your application's IP
- DNS resolution failure for SRV connection strings
- All replica set members are down or unreachable
Fix: Verify network connectivity first. Then check your connection string. For Atlas, ensure your IP is whitelisted. If using mongodb+srv://, verify DNS resolution works:
nslookup -type=SRV _mongodb._tcp.cluster0.abc123.mongodb.net
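While debugging, it also helps to fail fast instead of waiting out the driver's 30-second default for server selection. A sketch of the relevant connect options (values are illustrative, not recommendations):

```javascript
// Mongoose/MongoDB driver timeout options (illustrative values).
var debugOptions = {
  serverSelectionTimeoutMS: 5000, // give up after 5s instead of the 30s default
  connectTimeoutMS: 10000,        // per-socket TCP connect timeout
  maxPoolSize: 10
};

// Usage sketch (URI comes from your environment):
// mongoose.connect(process.env.MONGODB_URI, debugOptions).catch(function (err) {
//   console.error("Initial connection failed:", err.message);
//   process.exit(1);
// });
```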
4. "timeout exceeded when trying to connect"
Error: timeout exceeded when trying to connect
at Timeout._onTimeout (/app/node_modules/pg-pool/index.js:301:11)
All connections in the pg.Pool are checked out and connectionTimeoutMillis has elapsed. The pool is exhausted.
Root causes:
- Long-running queries holding connections
- Connection leak (missing client.release())
- Pool max is too small for your load
- A slow query is blocking all connections
Diagnosis:
// Find the leak - log when connections are checked out but not released
var originalConnect = pool.connect.bind(pool);
pool.connect = function () {
var stack = new Error().stack;
return originalConnect().then(function (client) {
var originalRelease = client.release.bind(client);
var released = false;
// Warn if connection isn't released within 30 seconds
var timer = setTimeout(function () {
if (!released) {
console.error("POSSIBLE CONNECTION LEAK. Checked out at:", stack);
}
}, 30000);
client.release = function (destroy) {
released = true;
clearTimeout(timer);
return originalRelease(destroy);
};
return client;
});
};
5. "ECONNRESET" During Deployments
Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:217:20)
This occurs when the database server or a network intermediary (load balancer, PgBouncer) resets the TCP connection. During deployments, the old process's connections may be forcibly terminated.
Fix: Implement retry logic for transient errors. Only retry idempotent operations, since a write that actually committed before the connection dropped will run twice on retry:
function queryWithRetry(pool, sql, params, maxRetries) {
maxRetries = maxRetries || 3;
var attempt = 0;
function execute() {
attempt++;
return pool.query(sql, params).catch(function (err) {
var isTransient = (
err.code === "ECONNRESET" ||
err.code === "ECONNREFUSED" ||
err.code === "57P01" || // admin_shutdown
err.code === "57P03" // cannot_connect_now
);
if (isTransient && attempt < maxRetries) {
var delay = Math.pow(2, attempt) * 100; // Exponential backoff
console.warn("Retrying query (attempt " + (attempt + 1) + ") after " + delay + "ms");
return new Promise(function (resolve) {
setTimeout(resolve, delay);
}).then(execute);
}
throw err;
});
}
return execute();
}
Best Practices
- Always set connectionTimeoutMillis — The default of "wait forever" will cause your request to hang indefinitely when the pool is exhausted. Set it to 5000ms and let the request fail fast so the client can retry.
- Match pool idle timeouts to infrastructure — If your load balancer has a 60-second idle timeout, set your pool's idleTimeoutMillis to 50 seconds (slightly lower). If PgBouncer has server_idle_timeout = 600, set the pg.Pool idle timeout lower. Mismatched timeouts cause "connection terminated unexpectedly" errors.
- Use maxUses to prevent memory leaks — PostgreSQL backend processes can slowly leak memory over thousands of queries. Setting maxUses: 7500 recycles connections regularly, keeping memory stable. This is especially important for long-running server processes.
- Never share a pool connection across async operations — A connection checked out from the pool belongs to one request at a time. Do not store it in module-level state or pass it between unrelated middleware functions. Check it out, use it, release it. Keep the scope tight.
- Size pools based on measurement, not guesswork — Start with (cores * 2) + 1 for PostgreSQL and monitor waitingCount. If it is consistently 0 and most connections are idle, reduce the pool. If waitingCount spikes, investigate slow queries before increasing the pool. A bigger pool without faster queries just increases database load.
- Implement health check endpoints that expose pool stats — Your load balancer's health check should verify database connectivity, not just return 200. The /health endpoint in the example above returns pool utilization, waiting count, and connection state. Use this data in your monitoring dashboards and alerting.
- Separate read and write pools for PostgreSQL replicas — If you have read replicas, create separate pools for reads and writes. Route read-only queries to replicas to distribute load. Each pool can be sized independently based on its workload.
var writePool = new Pool({
host: process.env.PG_PRIMARY_HOST,
max: 10
});
var readPool = new Pool({
host: process.env.PG_REPLICA_HOST,
max: 20 // More capacity for reads
});
- Log pool events in production, but not pool checkouts — The connect, remove, and error events are low frequency and useful for debugging. The acquire and release events fire on every query and will flood your logs. Enable them only when debugging specific issues.
- Test connection failures in development — Use tools like toxiproxy, or simply stop your database, to verify your application handles pool exhaustion and reconnection gracefully. Your app should return 503 errors, not crash.
