DigitalOcean Load Balancers for High Availability
A practical guide to configuring DigitalOcean load balancers for high availability Node.js applications, covering SSL termination, health checks, sticky sessions, WebSocket support, and zero-downtime deployments.
Overview
A load balancer sits between your users and your backend servers, distributing incoming traffic so that no single server bears the entire load. DigitalOcean Load Balancers are managed, regional L4/L7 devices that handle SSL termination, health checks, sticky sessions, and traffic distribution with zero infrastructure maintenance on your part. If you are running anything in production that needs to survive a server failure or handle more than one machine's worth of traffic, a load balancer is not optional -- it is the first piece of infrastructure you provision.
Prerequisites
- A DigitalOcean account with billing configured
- At least two Droplets (or a DOKS cluster) running your Node.js application
- doctl CLI installed and authenticated (doctl auth init)
- A domain name with DNS managed through DigitalOcean (for SSL)
- Node.js v18+ installed on your backend servers
- Basic familiarity with Express.js
Load Balancer Fundamentals
L4 vs L7 Load Balancing
DigitalOcean Load Balancers operate at both Layer 4 (transport) and Layer 7 (application), depending on your forwarding rule configuration.
Layer 4 (TCP/UDP) forwards raw TCP connections without inspecting the content. The load balancer sees source IP, destination IP, and port numbers. It does not understand HTTP headers, cookies, or URL paths. Use L4 when you need to load balance non-HTTP protocols, raw TCP connections, or when you want the lowest possible latency overhead.
Layer 7 (HTTP/HTTPS) inspects the HTTP request itself. The load balancer can read headers, set cookies for sticky sessions, and make routing decisions based on the request. Use L7 for web applications, REST APIs, and anything that speaks HTTP.
For Node.js web applications, you almost always want L7. The overhead of HTTP inspection is negligible compared to the features you gain -- sticky sessions, proper health checks at the application layer, and HTTP/2 support.
Load Balancing Algorithms
DigitalOcean supports two algorithms:
Round Robin sends each new connection to the next server in the list. Simple, predictable, and works well when all backends have identical capacity. This is the default and the right choice 90% of the time.
Least Connections sends new connections to the server with the fewest active connections. Use this when your request durations vary significantly -- some endpoints return in 10ms, others take 5 seconds. Least connections prevents a slow endpoint from piling up on one server while others sit idle.
Algorithm Best For Caveat
────────────────────────────────────────────────────────────────────
Round Robin Uniform request durations Can overload slow backends
Least Connections Variable request durations Slightly more overhead
Creating Load Balancers
Using doctl CLI
The fastest way to create a load balancer is through doctl. This command creates a load balancer in the NYC1 region, forwarding HTTP traffic on port 80 to port 3000 on your backend Droplets:
doctl compute load-balancer create \
--name my-app-lb \
--region nyc1 \
--forwarding-rules "entry_protocol:http,entry_port:80,target_protocol:http,target_port:3000" \
--health-check "protocol:http,port:3000,path:/health,check_interval_seconds:10,response_timeout_seconds:5,unhealthy_threshold:3,healthy_threshold:5" \
--algorithm round_robin \
--droplet-ids 12345678,87654321
Output:
ID IP Name Status Created At
a1b2c3d4-e5f6-7890-abcd-ef1234567890 203.0.113.50 my-app-lb new 2026-02-08T10:00:00Z
The load balancer takes 1-3 minutes to provision and receive its public IP address.
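If a script needs to wait for that, you can poll the API until the status flips from new to active. Here is a minimal sketch (it assumes DIGITALOCEAN_TOKEN is exported and the load balancer ID is passed as an argument):
// wait-for-lb.js -- poll until the load balancer reports "active"
var https = require("https");
var token = process.env.DIGITALOCEAN_TOKEN;
var lbId = process.argv[2];
function checkStatus() {
  var options = {
    hostname: "api.digitalocean.com",
    path: "/v2/load_balancers/" + lbId,
    method: "GET",
    headers: { "Authorization": "Bearer " + token }
  };
  https.request(options, function(res) {
    var body = "";
    res.on("data", function(chunk) { body += chunk; });
    res.on("end", function() {
      var lb = JSON.parse(body).load_balancer;
      console.log("Status:", lb.status, lb.ip ? "(IP: " + lb.ip + ")" : "");
      if (lb.status !== "active") {
        setTimeout(checkStatus, 10000); // re-check every 10 seconds
      }
    });
  }).end();
}
checkStatus();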
Using the DigitalOcean API
For automation scripts and infrastructure-as-code, the API is more flexible. Here is a Node.js script that creates a load balancer programmatically:
var https = require("https");
var token = process.env.DIGITALOCEAN_TOKEN;
var payload = JSON.stringify({
name: "my-app-lb",
region: "nyc1",
algorithm: "round_robin",
forwarding_rules: [
{
entry_protocol: "http",
entry_port: 80,
target_protocol: "http",
target_port: 3000
},
{
entry_protocol: "https",
entry_port: 443,
target_protocol: "http",
target_port: 3000,
certificate_id: "your-cert-id",
tls_passthrough: false
}
],
health_check: {
protocol: "http",
port: 3000,
path: "/health",
check_interval_seconds: 10,
response_timeout_seconds: 5,
unhealthy_threshold: 3,
healthy_threshold: 5
},
sticky_sessions: {
type: "none"
},
droplet_ids: [12345678, 87654321]
});
var options = {
hostname: "api.digitalocean.com",
path: "/v2/load_balancers",
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": "Bearer " + token,
"Content-Length": Buffer.byteLength(payload)
}
};
var req = https.request(options, function(res) {
var body = "";
res.on("data", function(chunk) { body += chunk; });
res.on("end", function() {
var result = JSON.parse(body);
console.log("Load Balancer ID:", result.load_balancer.id);
console.log("Status:", result.load_balancer.status);
});
});
req.on("error", function(err) {
console.error("Failed to create load balancer:", err.message);
});
req.write(payload);
req.end();
Using Tag-Based Targeting
Instead of hardcoding Droplet IDs, you can target Droplets by tag. This is the right approach for any dynamic environment. When you spin up a new Droplet and tag it, the load balancer automatically includes it in the pool.
# Tag your Droplets
doctl compute droplet tag 12345678 --tag-name web-app
doctl compute droplet tag 87654321 --tag-name web-app
# Create LB targeting the tag
doctl compute load-balancer create \
--name my-app-lb \
--region nyc1 \
--tag-name web-app \
--forwarding-rules "entry_protocol:http,entry_port:80,target_protocol:http,target_port:3000" \
--health-check "protocol:http,port:3000,path:/health,check_interval_seconds:10,response_timeout_seconds:5,unhealthy_threshold:3,healthy_threshold:5"
This is essential for auto-scaling. When your scaling script creates a new Droplet and applies the web-app tag, traffic starts flowing to it automatically once it passes health checks.
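Tagging works from automation scripts too. This sketch applies a tag through the API's tag-resources endpoint (the Droplet ID is a placeholder; note resource_id is sent as a string):
var https = require("https");
var token = process.env.DIGITALOCEAN_TOKEN;
// Attach the web-app tag to an existing Droplet via the API
var payload = JSON.stringify({
  resources: [{ resource_id: "12345678", resource_type: "droplet" }]
});
var options = {
  hostname: "api.digitalocean.com",
  path: "/v2/tags/web-app/resources",
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + token,
    "Content-Length": Buffer.byteLength(payload)
  }
};
var req = https.request(options, function(res) {
  // A 204 No Content response means the tag was applied
  console.log("Tag response status:", res.statusCode);
});
req.on("error", function(err) { console.error(err.message); });
req.write(payload);
req.end();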
SSL/TLS Termination and Let's Encrypt
How SSL Termination Works
SSL termination means the load balancer handles the TLS handshake, decrypts the traffic, and forwards plain HTTP to your backend servers. Your Node.js application never touches a certificate. This is simpler, faster, and more secure -- certificate management is centralized at one point instead of distributed across every backend.
Automated Let's Encrypt Certificates
DigitalOcean integrates Let's Encrypt directly into the load balancer. The certificate auto-renews. No cron jobs, no certbot, no manual intervention.
# Create a Let's Encrypt certificate
doctl compute certificate create \
--name my-app-cert \
--type lets_encrypt \
--dns-names grizzlypeaksoftware.com,www.grizzlypeaksoftware.com
# List certificates to get the ID
doctl compute certificate list
Output:
ID Name DNS Names SHA-1 Fingerprint Type State Expires At
b2c3d4e5-f6a7-8901-bcde-f12345678901 my-app-cert grizzlypeaksoftware.com,www.grizzlypeaksoftware.com ab:cd:ef:12:34 lets_encrypt verified 2026-05-08T00:00:00Z
Now attach it to your load balancer by adding an HTTPS forwarding rule:
doctl compute load-balancer add-forwarding-rules a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--forwarding-rules "entry_protocol:https,entry_port:443,target_protocol:http,target_port:3000,certificate_id:b2c3d4e5-f6a7-8901-bcde-f12345678901"
HTTP to HTTPS Redirect
Your load balancer should redirect all HTTP traffic to HTTPS. DigitalOcean supports this natively:
doctl compute load-balancer update a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--redirect-http-to-https
With this enabled, any request to http://grizzlypeaksoftware.com returns a 301 redirect to https://grizzlypeaksoftware.com. Your Node.js application does not need to handle this redirect.
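If you ever run in an environment without that flag, the same redirect can be done in the application as a fallback. A sketch using the X-Forwarded-Proto header the load balancer sets (requires trust proxy, covered next):
app.set("trust proxy", true);
// Fallback: redirect any request that arrived at the LB as plain HTTP
app.use(function(req, res, next) {
  if (req.protocol !== "https") {
    return res.redirect(301, "https://" + req.headers.host + req.originalUrl);
  }
  next();
});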
Reading Client IP Behind SSL Termination
When the load balancer terminates SSL and forwards plain HTTP, your application sees the load balancer's internal IP as the request source. The real client IP is in the X-Forwarded-For header.
var express = require("express");
var app = express();
// Trust the load balancer proxy
app.set("trust proxy", true);
app.get("/", function(req, res) {
// req.ip now returns the real client IP, not the LB's internal IP
console.log("Client IP:", req.ip);
console.log("Protocol:", req.protocol); // "https" via X-Forwarded-Proto
res.send("Hello from behind the load balancer");
});
Setting trust proxy to true tells Express to read the X-Forwarded-For and X-Forwarded-Proto headers. Without it, req.ip returns the load balancer's private IP and req.protocol always shows http. Note that true trusts every upstream hop; if you want to be stricter, pass the number of proxies in front of the app instead, e.g. app.set("trust proxy", 1).
Health Check Configuration
Health checks are how the load balancer decides whether a backend is alive. Get this wrong and you will either route traffic to dead servers or pull healthy servers out of rotation unnecessarily.
The Health Check Endpoint
Every Node.js application behind a load balancer needs a dedicated health check route. This is not your homepage. It is a lightweight endpoint that confirms your application is actually running and can serve requests.
var express = require("express");
var app = express();
// Simple health check -- returns 200 if the process is alive
app.get("/health", function(req, res) {
res.status(200).json({ status: "healthy", uptime: process.uptime() });
});
For a more thorough check that verifies downstream dependencies:
var mongoose = require("mongoose");
app.get("/health", function(req, res) {
var dbState = mongoose.connection.readyState;
// 0 = disconnected, 1 = connected, 2 = connecting, 3 = disconnecting
if (dbState !== 1) {
return res.status(503).json({
status: "unhealthy",
database: "disconnected",
uptime: process.uptime()
});
}
res.status(200).json({
status: "healthy",
database: "connected",
uptime: process.uptime(),
memory: process.memoryUsage().rss
});
});
Tuning Health Check Parameters
The default health check settings are conservative. Here is what each parameter does and how to tune it:
Parameter Default Recommended Why
─────────────────────────────────────────────────────────────────────────
check_interval_seconds 10 10 How often to check. Lower = faster failover, more traffic
response_timeout_seconds 5 5 Max wait for response before marking failed
unhealthy_threshold 3 3 Consecutive failures before removing from pool
healthy_threshold 5 3 Consecutive successes before re-adding to pool
My recommendation: keep the defaults for check interval and timeout. Reduce healthy_threshold to 3 so recovered servers get back into rotation faster. With these settings, a failed server is removed in 30 seconds (3 failures x 10 second interval) and restored in 30 seconds after recovery.
doctl compute load-balancer update a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--health-check "protocol:http,port:3000,path:/health,check_interval_seconds:10,response_timeout_seconds:5,unhealthy_threshold:3,healthy_threshold:3"
Sticky Sessions and Session Affinity
Sticky sessions ensure that all requests from the same client go to the same backend server. This matters when your application stores session data in memory (which you should eventually move to Redis, but we live in the real world).
Cookie-Based Sticky Sessions
DigitalOcean uses a cookie-based approach. The load balancer sets a cookie on the first response, and subsequent requests from that client include the cookie, telling the load balancer which backend to route to.
doctl compute load-balancer update a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--sticky-sessions "type:cookies,cookie_name:DO-LB-COOKIE,cookie_ttl_seconds:300"
The cookie_ttl_seconds controls how long the affinity lasts. 300 seconds (5 minutes) is a reasonable default for most web applications. Set it to match your session timeout.
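To confirm the cookie is actually being set, inspect the headers on a first response. A quick Node check (the domain is a placeholder):
var https = require("https");
// The sticky-session cookie should appear on the first response
https.get("https://myapp.example.com/", function(res) {
  console.log("Set-Cookie:", res.headers["set-cookie"]);
}).on("error", function(err) {
  console.error("Request failed:", err.message);
});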
When to Use Sticky Sessions
Use sticky sessions when:
- Your application stores session state in process memory (not recommended, but common)
- You are using server-side rendering with session-dependent state
- WebSocket connections need to reconnect to the same server
Do not use sticky sessions when:
- You have externalized session storage (Redis, PostgreSQL)
- Your API is stateless (JWT auth, no server-side sessions)
- You want the most even traffic distribution possible
Sticky sessions create an uneven load distribution by definition. If one user generates far more requests than others, their server handles disproportionate traffic. Externalize your sessions and turn sticky sessions off whenever possible.
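Externalizing usually means a shared Redis instance. Here is a sketch using express-session with connect-redis -- the exact wiring varies by package version (this follows the connect-redis v6-style API), so treat it as the shape rather than copy-paste:
var express = require("express");
var session = require("express-session");
var RedisStore = require("connect-redis")(session); // connect-redis v6-style API
var redis = require("redis");
var app = express();
app.set("trust proxy", true);
// One shared Redis means any backend can read any session,
// so sticky sessions can stay off
var client = redis.createClient({ url: process.env.REDIS_URL, legacyMode: true });
client.connect();
app.use(session({
  store: new RedisStore({ client: client }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: { secure: true } // sent only over HTTPS (via X-Forwarded-Proto)
}));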
Forwarding Rules and Backend Pools
Forwarding rules define how traffic flows through the load balancer. Each rule maps an entry protocol/port to a target protocol/port.
Common Forwarding Rule Configurations
Basic HTTP only:
--forwarding-rules "entry_protocol:http,entry_port:80,target_protocol:http,target_port:3000"
HTTPS with SSL termination (recommended for most apps):
--forwarding-rules "entry_protocol:https,entry_port:443,target_protocol:http,target_port:3000,certificate_id:CERT_ID" \
--forwarding-rules "entry_protocol:http,entry_port:80,target_protocol:http,target_port:3000"
TLS passthrough (when your app handles its own SSL):
--forwarding-rules "entry_protocol:https,entry_port:443,target_protocol:https,target_port:3000,tls_passthrough:true"
Multiple services on different ports:
--forwarding-rules "entry_protocol:https,entry_port:443,target_protocol:http,target_port:3000,certificate_id:CERT_ID" \
--forwarding-rules "entry_protocol:tcp,entry_port:8080,target_protocol:tcp,target_port:8080"
Managing Backend Pools
Add or remove Droplets from the pool without downtime:
# Add a new Droplet to the pool
doctl compute load-balancer add-droplets a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--droplet-ids 11223344
# Remove a Droplet (for maintenance)
doctl compute load-balancer remove-droplets a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--droplet-ids 87654321
The load balancer waits for active connections to the removed Droplet to drain before fully removing it. Existing in-flight requests complete normally.
Connecting to Droplets and Kubernetes
Droplet-Based Backends
For Droplet backends, ensure your Node.js application is reachable on the Droplet's private network interface. The load balancer communicates with backends over the private network for security and to avoid bandwidth charges; binding to 0.0.0.0 covers all interfaces, including the private one.
var express = require("express");
var app = express();
var PORT = process.env.PORT || 3000;
// Listen on all interfaces (the LB connects via private network)
app.listen(PORT, "0.0.0.0", function() {
console.log("Server listening on port " + PORT);
});
Make sure your Droplets have private networking enabled:
doctl compute droplet create web-1 \
--size s-2vcpu-4gb \
--image ubuntu-22-04-x64 \
--region nyc1 \
--enable-private-networking \
--tag-name web-app \
--ssh-keys YOUR_KEY_FINGERPRINT
Kubernetes (DOKS) Backends
If you are running on DigitalOcean Kubernetes, you create a load balancer through a Kubernetes Service of type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
name: my-app-lb
annotations:
service.beta.kubernetes.io/do-loadbalancer-size-slug: "lb-small"
service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-cert-id"
service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/health"
service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "3000"
service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: "http"
spec:
type: LoadBalancer
selector:
app: my-app
ports:
- name: https
port: 443
targetPort: 3000
protocol: TCP
- name: http
port: 80
targetPort: 3000
protocol: TCP
Apply it and Kubernetes provisions the DigitalOcean Load Balancer automatically:
kubectl apply -f service.yaml
# Watch for the external IP assignment
kubectl get svc my-app-lb --watch
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-app-lb LoadBalancer 10.245.0.100 203.0.113.50 443:31234/TCP,80:31235/TCP 2m
Monitoring Load Balancer Metrics
DigitalOcean exposes load balancer metrics through the API and the control panel. Below is how to query the load balancer's state programmatically, followed by the key metrics to watch.
Querying Metrics via API
var https = require("https");
var token = process.env.DIGITALOCEAN_TOKEN;
var lbId = "a1b2c3d4-e5f6-7890-abcd-ef1234567890";
// Fetch the load balancer's configuration, status, and backend pool
var options = {
hostname: "api.digitalocean.com",
path: "/v2/load_balancers/" + lbId,
method: "GET",
headers: {
"Authorization": "Bearer " + token
}
};
var req = https.request(options, function(res) {
var body = "";
res.on("data", function(chunk) { body += chunk; });
res.on("end", function() {
var lb = JSON.parse(body).load_balancer;
console.log("Name:", lb.name);
console.log("IP:", lb.ip);
console.log("Status:", lb.status);
console.log("Droplets:", lb.droplet_ids.length);
if (lb.health_check) {
  console.log("Health Check Path:", lb.health_check.path);
}
// Check individual Droplet health
if (lb.droplet_ids) {
console.log("Backend Droplet IDs:", lb.droplet_ids.join(", "));
}
});
});
req.end();
Key Metrics to Monitor
Metric What It Tells You Alert Threshold
────────────────────────────────────────────────────────────────────────────────────
Connection rate (per sec) Traffic volume > 80% of plan limit
Active connections Concurrent load > 10,000 sustained
HTTP 5xx rate Backend failures > 1% of total requests
Backend health Servers in rotation < 2 healthy backends
Response time (p99) Latency through LB > 2 seconds
Bandwidth (Mbps) Throughput > 80% of plan limit
Set up alert policies to notify you before you hit capacity:
doctl monitoring alert create \
--type "v1/insights/lbaas/avg_cpu_utilization_percent" \
--compare "GreaterThan" \
--value 80 \
--window "5m" \
--entities a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--emails "[email protected]"
Handling WebSocket Connections
WebSocket connections require special consideration because they are long-lived, stateful connections -- the exact opposite of what load balancers are designed for.
Configuring WebSocket Support
DigitalOcean Load Balancers support WebSocket connections natively when using HTTP/HTTPS forwarding rules. The load balancer detects the Upgrade: websocket header and holds the connection open.
On the Node.js side, here is a WebSocket server behind the load balancer:
var express = require("express");
var http = require("http");
var WebSocket = require("ws");
var app = express();
var server = http.createServer(app);
var wss = new WebSocket.Server({ server: server });
app.get("/health", function(req, res) {
res.status(200).json({ status: "healthy", connections: wss.clients.size });
});
wss.on("connection", function(ws, req) {
var clientIp = req.headers["x-forwarded-for"] || req.socket.remoteAddress;
console.log("WebSocket connection from:", clientIp);
ws.on("message", function(message) {
console.log("Received:", message.toString());
ws.send("Echo: " + message.toString());
});
ws.on("close", function() {
console.log("Client disconnected");
});
});
var PORT = process.env.PORT || 3000;
server.listen(PORT, function() {
console.log("Server with WebSocket support on port " + PORT);
});
WebSocket Timeout Tuning
By default, DigitalOcean Load Balancers have a 60-second idle connection timeout. For WebSocket connections, you need this higher -- or implement ping/pong keepalives on the application side.
The application-side keepalive is more reliable:
var PING_INTERVAL = 30000; // 30 seconds
wss.on("connection", function(ws) {
ws.isAlive = true;
ws.on("pong", function() {
ws.isAlive = true;
});
});
// Ping all clients every 30 seconds
var heartbeat = setInterval(function() {
wss.clients.forEach(function(ws) {
if (ws.isAlive === false) {
return ws.terminate();
}
ws.isAlive = false;
ws.ping();
});
}, PING_INTERVAL);
wss.on("close", function() {
clearInterval(heartbeat);
});
This keeps the connection alive through the load balancer. If a client fails to answer a ping before the next 30-second cycle (at most 60 seconds after its last pong), the server terminates the connection.
Sticky Sessions for WebSocket Reconnection
When a WebSocket connection drops and the client reconnects, sticky sessions ensure it goes back to the same server. This matters if your WebSocket server maintains per-connection state:
doctl compute load-balancer update a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--sticky-sessions "type:cookies,cookie_name:DO-WS-STICKY,cookie_ttl_seconds:600"
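On the client side, pair this with reconnect logic. Browsers resend the sticky cookie automatically; a bare Node ws client would have to capture and resend it itself. A reconnect-with-backoff sketch (the URL is a placeholder):
var WebSocket = require("ws");
function connect(attempt) {
  var ws = new WebSocket("wss://myapp.example.com/ws");
  ws.on("open", function() {
    attempt = 0; // reset backoff once connected
    ws.send("hello");
  });
  ws.on("close", function() {
    // Exponential backoff, capped at 30 seconds
    var delay = Math.min(30000, 1000 * Math.pow(2, attempt));
    setTimeout(function() { connect(attempt + 1); }, delay);
  });
  ws.on("error", function(err) {
    console.error("WebSocket error:", err.message);
  });
}
connect(0);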
Firewall Integration and Security
DigitalOcean Cloud Firewalls
Your backend Droplets should only accept traffic from the load balancer, not directly from the internet. Use DigitalOcean Cloud Firewalls to enforce this:
# Create a firewall that only allows traffic from the load balancer
doctl compute firewall create \
--name web-app-fw \
--tag-names web-app \
--inbound-rules "protocol:tcp,ports:3000,load_balancer_uids:a1b2c3d4-e5f6-7890-abcd-ef1234567890" \
--inbound-rules "protocol:tcp,ports:22,address:your.office.ip/32" \
--outbound-rules "protocol:tcp,ports:all,address:0.0.0.0/0" \
--outbound-rules "protocol:udp,ports:all,address:0.0.0.0/0"
This configuration:
- Allows TCP on port 3000 only from the load balancer
- Allows SSH only from your office IP
- Allows all outbound traffic (for npm installs, API calls, etc.)
Rate Limiting at the Application Level
The load balancer does not provide rate limiting. Implement it in your Node.js application:
var rateLimit = require("express-rate-limit");
var limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // 100 requests per window per IP
standardHeaders: true,
legacyHeaders: false,
keyGenerator: function(req) {
// Use X-Forwarded-For since we're behind a load balancer
return req.ip;
},
message: { error: "Too many requests, please try again later." }
});
app.use("/api/", limiter);
Remember to set app.set("trust proxy", true) so that req.ip returns the client IP from X-Forwarded-For, not the load balancer's IP. Without this, all requests appear to come from the same IP and your rate limiter will block everyone after 100 total requests.
Scaling Strategies
Horizontal Scaling with the API
The real power of load balancers is horizontal scaling. When traffic increases, you add servers. When it decreases, you remove them. Here is a Node.js script that monitors your load balancer and scales the backend pool:
var https = require("https");
var token = process.env.DIGITALOCEAN_TOKEN;
var lbId = process.env.LB_ID;
var tagName = "web-app";
var snapshotId = process.env.DROPLET_SNAPSHOT_ID;
var region = "nyc1";
var sshKeyFingerprint = process.env.SSH_KEY_FINGERPRINT;
function apiRequest(method, path, data, callback) {
var payload = data ? JSON.stringify(data) : null;
var options = {
hostname: "api.digitalocean.com",
path: path,
method: method,
headers: {
"Content-Type": "application/json",
"Authorization": "Bearer " + token
}
};
if (payload) {
options.headers["Content-Length"] = Buffer.byteLength(payload);
}
var req = https.request(options, function(res) {
var body = "";
res.on("data", function(chunk) { body += chunk; });
res.on("end", function() {
callback(null, body ? JSON.parse(body) : {}); // DELETE responses have an empty body
});
});
req.on("error", callback);
if (payload) req.write(payload);
req.end();
}
function getDropletCount(callback) {
apiRequest("GET", "/v2/load_balancers/" + lbId, null, function(err, data) {
if (err) return callback(err);
callback(null, data.load_balancer.droplet_ids.length);
});
}
function createDroplet(callback) {
var name = "web-" + Date.now();
var data = {
name: name,
region: region,
size: "s-2vcpu-4gb",
image: snapshotId,
ssh_keys: [sshKeyFingerprint],
private_networking: true,
tags: [tagName],
user_data: "#!/bin/bash\ncd /opt/app && pm2 start ecosystem.config.js"
};
apiRequest("POST", "/v2/droplets", data, function(err, result) {
if (err) return callback(err);
console.log("Created Droplet:", result.droplet.id, result.droplet.name);
callback(null, result.droplet);
});
}
function scaleUp() {
getDropletCount(function(err, count) {
if (err) {
console.error("Failed to get Droplet count:", err.message);
return;
}
console.log("Current backends:", count);
if (count >= 10) {
console.log("Already at maximum capacity (10 Droplets)");
return;
}
createDroplet(function(err, droplet) {
if (err) {
console.error("Failed to create Droplet:", err.message);
return;
}
console.log("Scaling up. New Droplet", droplet.id, "will join the pool via tag:", tagName);
});
});
}
// Run the scale-up
scaleUp();
Because the load balancer targets Droplets by tag, the new Droplet is added to the pool automatically as soon as it passes health checks. No manual intervention.
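Scaling down is the mirror image. A sketch that reuses the apiRequest helper and lbId from the script above and never drops below two backends:
function scaleDown() {
  apiRequest("GET", "/v2/load_balancers/" + lbId, null, function(err, data) {
    if (err) return console.error("Failed to read LB:", err.message);
    var ids = data.load_balancer.droplet_ids;
    if (ids.length <= 2) {
      return console.log("At minimum capacity (2 Droplets), skipping scale-down");
    }
    // Deleting the Droplet removes it from the tag-targeted pool;
    // the LB drains its in-flight connections first
    apiRequest("DELETE", "/v2/droplets/" + ids[ids.length - 1], null, function(err) {
      if (err) return console.error("Failed to delete Droplet:", err.message);
      console.log("Scaled down. Removed Droplet", ids[ids.length - 1]);
    });
  });
}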
Zero-Downtime Deployments
For deployments without dropping connections, use a rolling update strategy:
#!/bin/bash
# rolling-deploy.sh
# Deploy to each backend one at a time
DROPLET_IDS=$(doctl compute load-balancer get $LB_ID --format DropletIDs --no-header)
IFS=',' read -ra DROPLETS <<< "$DROPLET_IDS"
for DROPLET_ID in "${DROPLETS[@]}"; do
DROPLET_IP=$(doctl compute droplet get $DROPLET_ID --format PrivateIPv4 --no-header)
echo "Deploying to Droplet $DROPLET_ID ($DROPLET_IP)..."
# Remove from load balancer
doctl compute load-balancer remove-droplets $LB_ID --droplet-ids $DROPLET_ID
echo "Removed from LB. Waiting for connections to drain..."
sleep 15
# Deploy the new code
ssh deploy@$DROPLET_IP "cd /opt/app && git pull origin master && npm install --production && pm2 reload ecosystem.config.js"
# Wait for the app to start
echo "Waiting for app to start..."
sleep 10
# Re-add to load balancer
doctl compute load-balancer add-droplets $LB_ID --droplet-ids $DROPLET_ID
echo "Re-added to LB. Waiting for health checks..."
sleep 30
echo "Droplet $DROPLET_ID deployed successfully."
done
echo "Rolling deployment complete."
This script removes one server at a time, deploys, verifies health, and re-adds it. As long as you have at least two backends, traffic is never interrupted.
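The fixed sleep calls are the weak point of that script. A small Node helper can poll /health until the app actually answers, so the deploy only continues when the backend is genuinely up (the host argument, port 3000, and /health path mirror the examples above):
// wait-healthy.js -- usage from the deploy script: node wait-healthy.js $DROPLET_IP
var http = require("http");
var host = process.argv[2];
var attempts = 0;
function check() {
  attempts++;
  var req = http.get({ host: host, port: 3000, path: "/health", timeout: 5000 }, function(res) {
    res.resume(); // drain the response body
    if (res.statusCode === 200) {
      console.log("Backend healthy after " + attempts + " attempt(s)");
      process.exit(0);
    }
    retry();
  });
  req.on("timeout", function() { req.destroy(); }); // destroy() surfaces as "error"
  req.on("error", retry);
}
function retry() {
  if (attempts >= 30) {
    console.error("Backend never became healthy, aborting deploy");
    process.exit(1);
  }
  setTimeout(check, 5000);
}
check();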
Cost Analysis and Sizing
DigitalOcean offers load balancers in three sizes:
Size Connections/sec Monthly Cost Best For
───────────────────────────────────────────────────────────────────
Small 10,000 $12 Small apps, dev/staging
Medium 25,000 $24 Medium-traffic production
Large 50,000 $48 High-traffic, enterprise
Additional costs to factor in:
- Data transfer: The first 10 TB outbound per month is included. Beyond that, $0.01/GB.
- Backend Droplets: Each backend adds its own Droplet cost. Two s-2vcpu-4gb Droplets at $24/month each = $48/month.
- SSL certificates: Let's Encrypt certificates are free.
Total cost for a typical HA setup:
Component Monthly Cost
─────────────────────────────────────────────
Load Balancer (Small) $12
2x Droplets (s-2vcpu-4gb) $48
Managed MongoDB $15
Total $75/month
For $75/month, you get a fully redundant setup that survives the loss of any single Droplet. That is good value. Compare to AWS where an Application Load Balancer alone starts at $16.20/month plus $0.008 per LCU-hour, and the Droplet equivalents (t3.medium) run $30+ each.
Complete Working Example
Here is a complete, production-ready Node.js application designed to run behind a DigitalOcean Load Balancer with SSL termination, health checks, graceful shutdown, and monitoring.
Application Code
// server.js
var express = require("express");
var http = require("http");
var os = require("os");
var app = express();
var server = http.createServer(app);
var PORT = process.env.PORT || 3000;
var INSTANCE_ID = os.hostname();
var startTime = Date.now();
var requestCount = 0;
var isShuttingDown = false;
// Trust the load balancer proxy
app.set("trust proxy", true);
// Track request count
app.use(function(req, res, next) {
requestCount++;
next();
});
// Health check endpoint
app.get("/health", function(req, res) {
if (isShuttingDown) {
return res.status(503).json({ status: "shutting_down" });
}
res.status(200).json({
status: "healthy",
instance: INSTANCE_ID,
uptime: Math.floor((Date.now() - startTime) / 1000),
requests: requestCount,
memory: {
rss: Math.round(process.memoryUsage().rss / 1024 / 1024) + "MB",
heap: Math.round(process.memoryUsage().heapUsed / 1024 / 1024) + "MB"
}
});
});
// Main application routes
app.get("/", function(req, res) {
res.json({
message: "Hello from " + INSTANCE_ID,
clientIp: req.ip,
protocol: req.protocol
});
});
app.get("/api/data", function(req, res) {
// Simulate some work
var result = [];
for (var i = 0; i < 100; i++) {
result.push({ id: i, value: Math.random() });
}
res.json({ data: result, servedBy: INSTANCE_ID });
});
// Graceful shutdown handler
function gracefulShutdown(signal) {
console.log(signal + " received. Starting graceful shutdown...");
isShuttingDown = true;
// Stop accepting new connections
server.close(function() {
console.log("All connections closed. Exiting.");
process.exit(0);
});
// Force exit after 30 seconds
setTimeout(function() {
console.error("Forced shutdown after timeout.");
process.exit(1);
}, 30000);
}
process.on("SIGTERM", function() { gracefulShutdown("SIGTERM"); });
process.on("SIGINT", function() { gracefulShutdown("SIGINT"); });
server.listen(PORT, "0.0.0.0", function() {
console.log("Server " + INSTANCE_ID + " listening on port " + PORT);
});
PM2 Ecosystem Configuration
// ecosystem.config.js
module.exports = {
apps: [{
name: "web-app",
script: "server.js",
instances: "max",
exec_mode: "cluster",
env: {
NODE_ENV: "production",
PORT: 3000
},
max_memory_restart: "512M",
kill_timeout: 30000,
listen_timeout: 10000,
wait_ready: false
}]
};
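This config sets wait_ready: false for simplicity. PM2 can instead hold cluster traffic until the worker explicitly signals readiness, which tightens pm2 reload further: flip wait_ready to true and have the server send the signal once it is actually listening. A sketch of the server-side half:
// server.js -- with wait_ready: true, PM2 waits for this signal
// (up to listen_timeout ms) before routing traffic to the new worker
server.listen(PORT, "0.0.0.0", function() {
  console.log("Server listening on port " + PORT);
  if (process.send) {
    process.send("ready");
  }
});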
Infrastructure Provisioning Script
#!/bin/bash
# provision-ha.sh
# Provision a complete HA infrastructure on DigitalOcean
set -e
REGION="nyc1"
TAG="web-app"
SIZE="s-2vcpu-4gb"
IMAGE="ubuntu-22-04-x64"
SSH_KEY=$(doctl compute ssh-key list --format FingerPrint --no-header | head -1)
echo "=== Creating Droplets ==="
for i in 1 2; do
doctl compute droplet create "web-${i}" \
--size $SIZE \
--image $IMAGE \
--region $REGION \
--enable-private-networking \
--tag-name $TAG \
--ssh-keys $SSH_KEY \
--user-data '#!/bin/bash
apt-get update && apt-get install -y nodejs npm
npm install -g pm2
mkdir -p /opt/app
' \
--wait
echo "Created web-${i}"
done
echo "=== Waiting for Droplets to initialize ==="
sleep 60
DROPLET_IDS=$(doctl compute droplet list --tag-name $TAG --format ID --no-header | tr '\n' ',' | sed 's/,$//')
echo "Droplet IDs: $DROPLET_IDS"
echo "=== Creating SSL Certificate ==="
doctl compute certificate create \
--name app-cert \
--type lets_encrypt \
--dns-names myapp.example.com
CERT_ID=$(doctl compute certificate list --format ID --no-header | head -1)
echo "Certificate ID: $CERT_ID"
echo "=== Creating Load Balancer ==="
doctl compute load-balancer create \
--name app-lb \
--region $REGION \
--tag-name $TAG \
--forwarding-rules "entry_protocol:https,entry_port:443,target_protocol:http,target_port:3000,certificate_id:${CERT_ID}" \
--forwarding-rules "entry_protocol:http,entry_port:80,target_protocol:http,target_port:3000" \
--health-check "protocol:http,port:3000,path:/health,check_interval_seconds:10,response_timeout_seconds:5,unhealthy_threshold:3,healthy_threshold:3" \
--algorithm round_robin \
--redirect-http-to-https
echo "=== Creating Firewall ==="
LB_ID=$(doctl compute load-balancer list --format ID --no-header | head -1)
doctl compute firewall create \
--name app-firewall \
--tag-names $TAG \
--inbound-rules "protocol:tcp,ports:3000,load_balancer_uids:${LB_ID}" \
--inbound-rules "protocol:tcp,ports:22,address:0.0.0.0/0" \
--outbound-rules "protocol:tcp,ports:all,address:0.0.0.0/0" \
--outbound-rules "protocol:udp,ports:all,address:0.0.0.0/0"
echo "=== Done ==="
LB_IP=$(doctl compute load-balancer get $LB_ID --format IP --no-header)
echo "Load Balancer IP: $LB_IP"
echo "Point your DNS A record to: $LB_IP"
echo ""
echo "Total monthly cost estimate:"
echo " Load Balancer (Small): \$12"
echo " 2x Droplets ($SIZE): \$48"
echo " Total: \$60/month"
Testing the Setup
Once everything is provisioned, verify the load balancer is distributing traffic:
# Hit the endpoint 10 times and see which instance responds
for i in $(seq 1 10); do
curl -s https://myapp.example.com/ | jq -r '.message'
done
Expected output (round robin):
Hello from web-1
Hello from web-2
Hello from web-1
Hello from web-2
Hello from web-1
Hello from web-2
Hello from web-1
Hello from web-2
Hello from web-1
Hello from web-2
Verify health checks:
curl -s https://myapp.example.com/health | jq .
{
"status": "healthy",
"instance": "web-1",
"uptime": 3842,
"requests": 156,
"memory": {
"rss": "48MB",
"heap": "22MB"
}
}
Common Issues & Troubleshooting
1. Health Checks Failing Immediately
Symptom: Load balancer shows 0 healthy backends. All Droplets marked as unhealthy.
doctl compute load-balancer get $LB_ID --format Status
# Output: active (but 0/2 backends healthy)
Cause: The health check is targeting the wrong port or path. Your Node.js app listens on port 3000, but the health check is configured for port 80. Or the health check path returns a non-200 status code.
Fix: Verify your health check configuration matches your application:
# Check the health check config
doctl compute load-balancer get $LB_ID --format HealthCheck
# Test the health endpoint directly on the Droplet
ssh deploy@droplet-ip "curl -s http://localhost:3000/health"
2. 502 Bad Gateway Errors
Symptom: Users see 502 Bad Gateway errors intermittently.
HTTP/1.1 502 Bad Gateway
Server: nginx
Content-Type: text/html
Cause: The backend application is crashing or restarting. The load balancer forwards a request to a backend that accepted the connection but died before sending a response. This commonly happens during deployments without graceful shutdown.
Fix: Implement graceful shutdown in your Node.js app (see the working example above). Use PM2 with kill_timeout to give in-flight requests time to complete:
// ecosystem.config.js
module.exports = {
apps: [{
name: "web-app",
script: "server.js",
kill_timeout: 30000 // 30 seconds for graceful shutdown
}]
};
3. All Traffic Going to One Backend
Symptom: One Droplet handles 95% of traffic while the other sits idle.
Cause: Sticky sessions are enabled and most traffic comes from a small number of clients (or a CDN/proxy that appears as a single client). The sticky session cookie routes all their requests to the same backend.
Fix: If you do not need sticky sessions, disable them:
doctl compute load-balancer update $LB_ID \
--sticky-sessions "type:none"
If you do need sticky sessions, reduce the TTL so clients get redistributed more frequently:
doctl compute load-balancer update $LB_ID \
--sticky-sessions "type:cookies,cookie_name:DO-LB-COOKIE,cookie_ttl_seconds:60"
4. WebSocket Connections Dropping After 60 Seconds
Symptom: WebSocket connections close exactly 60 seconds after the last message.
WebSocket connection to 'wss://myapp.example.com/ws' failed:
Error during WebSocket handshake: net::ERR_CONNECTION_CLOSED
Cause: The load balancer's idle connection timeout is 60 seconds. If no data flows through the connection for 60 seconds, the load balancer terminates it.
Fix: Implement ping/pong keepalives at intervals shorter than 60 seconds:
// Send a ping every 30 seconds (well under the 60-second timeout)
setInterval(function() {
wss.clients.forEach(function(ws) {
if (ws.readyState === WebSocket.OPEN) {
ws.ping();
}
});
}, 30000);
5. Client IP Address Shows Load Balancer IP
Symptom: All request logs show the same IP address (the load balancer's private IP like 10.132.0.x).
Fix: Set trust proxy in Express and use req.ip:
app.set("trust proxy", true);
app.use(function(req, res, next) {
// req.ip now correctly shows the client IP
console.log("Request from:", req.ip);
next();
});
6. SSL Certificate Not Attaching
Symptom: doctl compute load-balancer create succeeds but HTTPS returns ERR_SSL_PROTOCOL_ERROR.
Cause: The certificate is still in pending state. Let's Encrypt certificates require DNS validation, which means your domain must have an A record pointing to the load balancer IP before the certificate can be issued.
Fix: Check the certificate status and ensure DNS is configured:
doctl compute certificate list --format ID,Name,State
# If state is "pending", verify DNS
dig +short myapp.example.com
# Should return the load balancer IP
Create the A record, wait for DNS propagation (typically 1-5 minutes with DigitalOcean DNS), and the certificate will automatically transition to verified.
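You can bake this check into your provisioning automation before requesting the certificate. A sketch (the domain and load balancer IP are placeholders from the earlier examples):
var dns = require("dns");
var LB_IP = "203.0.113.50";
dns.resolve4("myapp.example.com", function(err, addresses) {
  if (err) {
    return console.error("DNS lookup failed:", err.code);
  }
  if (addresses.indexOf(LB_IP) !== -1) {
    console.log("A record points at the load balancer -- safe to request the cert");
  } else {
    console.log("A record resolves to", addresses.join(", "), "-- fix DNS first");
  }
});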
Best Practices
Always use tag-based targeting instead of hardcoded Droplet IDs. Tags make scaling dynamic and eliminate manual load balancer reconfiguration when you add or remove servers.
Implement a dedicated /health endpoint that checks downstream dependencies (database, cache, external APIs). A process that is running but cannot serve useful responses should return 503, not 200.
Set trust proxy in Express when running behind a load balancer. Without it, req.ip, req.protocol, and req.secure all return incorrect values, breaking rate limiting, logging, and HTTPS redirect logic.
Use graceful shutdown handlers for SIGTERM and SIGINT. When the load balancer removes a backend or you deploy new code, in-flight requests need time to complete. A 30-second shutdown window handles the vast majority of cases.
Never expose backend ports directly to the internet. Use DigitalOcean Cloud Firewalls to restrict port 3000 (or whatever your app listens on) to traffic from the load balancer only. SSH should be limited to your office or VPN IP.
Monitor backend count, not just load balancer status. A load balancer can show "active" status with zero healthy backends. Alert when the number of healthy backends drops below your minimum threshold.
Start with the Small load balancer ($12/month) and upgrade when metrics show you are approaching the 10,000 connections/second limit. Most Node.js applications will never hit this threshold.
Prefer SSL termination at the load balancer over TLS passthrough. Centralized certificate management is simpler, Let's Encrypt auto-renewal eliminates manual work, and your backend servers avoid the CPU overhead of TLS encryption.
Use rolling deployments with at least two backends. Remove one from the pool, deploy, verify health, re-add, then repeat. This gives you zero-downtime deployments without any specialized tooling.
Keep health check intervals at 10 seconds as a baseline. Shorter intervals mean faster failover but more health check traffic. For most applications, detecting a failure within 30 seconds (3 failed checks) is perfectly acceptable.
