DigitalOcean Droplet Optimization for Node.js
A practical guide to optimizing DigitalOcean Droplets for Node.js applications covering server setup, PM2 process management, Nginx reverse proxy, security hardening, and monitoring.
A Droplet is a virtual private server. Unlike App Platform where DigitalOcean manages everything, a Droplet gives you full control — and full responsibility. You choose the OS, install Node.js, configure the web server, manage SSL certificates, set up process management, and handle security.
This control is worth it when you need custom configurations, run background workers alongside your web server, need specific OS-level packages, or want to minimize costs for predictable workloads. This guide covers every step from bare Droplet to production-ready Node.js server.
Prerequisites
- A DigitalOcean account
- Basic Linux command line knowledge
- An Express.js application ready to deploy
- A domain name (for SSL)
Choosing the Right Droplet
| Droplet | vCPU | RAM | Storage | Monthly | Best For |
|---|---|---|---|---|---|
| Basic $6 | 1 | 1GB | 25GB SSD | $6 | Small apps, side projects |
| Basic $12 | 1 | 2GB | 50GB SSD | $12 | Medium traffic apps |
| Basic $24 | 2 | 4GB | 80GB SSD | $24 | Production workloads |
| CPU-Optimized | 2 | 4GB | 25GB NVMe | $42 | CPU-intensive processing |
| Memory-Optimized | 2 | 16GB | 50GB NVMe | $84 | Large datasets, caching |
For most Node.js applications, a Basic $12 Droplet (2GB RAM) is a sensible starting point and can serve thousands of lightweight requests per second. JavaScript execution in Node.js is single-threaded, but the event loop handles I/O-bound work concurrently and efficiently.
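The event-loop claim is easy to demonstrate: concurrent I/O waits overlap instead of queueing, which is why one small Droplet can serve many simultaneous connections. A minimal sketch, using timers as stand-ins for database or network calls:

```javascript
// Three simulated 100 ms I/O operations started together finish in
// roughly 100 ms total, not 300 ms: the event loop overlaps the waits.
const start = Date.now();

function fakeIo(ms) {
  // Stand-in for a database query or outbound HTTP call
  return new Promise((resolve) => setTimeout(resolve, ms));
}

Promise.all([fakeIo(100), fakeIo(100), fakeIo(100)]).then(() => {
  const elapsed = Date.now() - start;
  console.log(elapsed < 250 ? "overlapped" : "sequential");
});
```

CPU-bound work, by contrast, blocks the loop for everyone; that is the case the cluster-mode section below addresses.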
Initial Server Setup
Creating the Droplet
doctl compute droplet create my-app \
--image ubuntu-22-04-x64 \
--size s-1vcpu-2gb \
--region nyc3 \
--ssh-keys YOUR_SSH_KEY_ID
First Login and Security
ssh root@YOUR_DROPLET_IP
# Create a non-root user
adduser deploy
usermod -aG sudo deploy
# Copy SSH keys to the new user
rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy
# Test login with new user (in a new terminal)
ssh deploy@YOUR_DROPLET_IP
# Disable root SSH login (the default line is often "#PermitRootLogin prohibit-password")
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart sshd
Firewall Configuration
# Allow SSH, HTTP, and HTTPS
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
# Verify
sudo ufw status
Automatic Security Updates
sudo apt update && sudo apt upgrade -y
sudo apt install unattended-upgrades -y
sudo dpkg-reconfigure -plow unattended-upgrades
Installing Node.js
Use NodeSource for the latest LTS version:
# Install Node.js 20 LTS
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
# Verify
node --version
npm --version
Deploying Your Application
Application Directory
sudo mkdir -p /var/www/myapp
sudo chown deploy:deploy /var/www/myapp
Clone and Install
cd /var/www/myapp
git clone https://github.com/your-username/your-repo.git .
npm install --omit=dev  # --production is a deprecated alias for --omit=dev
Environment Variables
# Create environment file
sudo nano /var/www/myapp/.env
NODE_ENV=production
PORT=3000
DATABASE_URL=postgresql://user:pass@localhost:5432/myapp
SESSION_SECRET=your-secret-here
# Restrict permissions
chmod 600 /var/www/myapp/.env
PM2: Process Management
PM2 keeps your Node.js application running, restarts it if it crashes, and manages multiple processes.
Installation
sudo npm install -g pm2
Starting Your Application
cd /var/www/myapp
pm2 start server.js --name myapp
PM2 Ecosystem File
For more control, create an ecosystem file:
// ecosystem.config.js
module.exports = {
  apps: [{
    name: "myapp",
    script: "server.js",
    cwd: "/var/www/myapp",
    instances: "max",
    exec_mode: "cluster",
    env: {
      NODE_ENV: "production",
      PORT: 3000
    },
    max_memory_restart: "500M",
    error_file: "/var/log/pm2/myapp-error.log",
    out_file: "/var/log/pm2/myapp-out.log",
    merge_logs: true,
    log_date_format: "YYYY-MM-DD HH:mm:ss Z"
  }]
};
pm2 start ecosystem.config.js
Key options:
- instances: "max" — runs one process per CPU core (cluster mode)
- exec_mode: "cluster" — enables Node.js cluster module for load balancing
- max_memory_restart — restarts the process if it exceeds memory limit
PM2 Commands
pm2 status # Show all processes
pm2 logs myapp # View logs
pm2 restart myapp # Restart the application
pm2 stop myapp # Stop the application
pm2 delete myapp # Remove from PM2
pm2 monit # Real-time monitoring dashboard
pm2 reload myapp # Zero-downtime restart
PM2 Startup Script
Ensure PM2 starts on boot:
pm2 startup systemd
# Run the command PM2 outputs, e.g.:
sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u deploy --hp /home/deploy
# Save the current process list
pm2 save
Nginx Reverse Proxy
Nginx sits in front of Node.js, handling SSL termination, static files, compression, and load balancing.
Installation
sudo apt install nginx -y
Basic Configuration
# /etc/nginx/sites-available/myapp
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t
sudo systemctl restart nginx
SSL with Let's Encrypt
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d myapp.example.com
Certbot automatically configures Nginx for HTTPS and sets up certificate renewal.
Optimized Nginx Configuration
# /etc/nginx/sites-available/myapp

# Forward the WebSocket Upgrade header when a client sends one; otherwise
# send an empty Connection header so upstream keepalive stays effective
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      "";
}

upstream nodejs {
    server localhost:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name myapp.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name myapp.example.com;

    ssl_certificate /etc/letsencrypt/live/myapp.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.example.com/privkey.pem;

    # SSL optimization
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1.2 TLSv1.3;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml;

    # Static files — served directly by Nginx
    location /static/ {
        alias /var/www/myapp/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # API and dynamic routes — proxied to Node.js
    location / {
        proxy_pass http://nodejs;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
    }

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
}
Node.js Performance Tuning
Memory Limits
# Check default memory limit
node -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024 + 'MB')"
# Increase for a 2GB Droplet (the limit applies per process;
# lower it if you run several cluster workers)
pm2 start server.js --node-args="--max-old-space-size=1536"
Cluster Mode
PM2's cluster mode runs multiple Node.js processes:
// ecosystem.config.js
module.exports = {
  apps: [{
    name: "myapp",
    script: "server.js",
    instances: 2, // 2 workers for a 2-core Droplet
    exec_mode: "cluster",
    max_memory_restart: "700M"
  }]
};
Each worker handles requests independently. PM2 distributes traffic across workers.
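One caveat worth flagging: cluster workers share nothing but the listening socket. Any in-process state diverges across workers, as this toy counter illustrates:

```javascript
// This counter lives in one worker's memory only. With two workers,
// alternating requests would see 1, 1, 2, 2, ... rather than 1, 2, 3, 4.
let hits = 0;

function handleRequest() {
  hits += 1;
  return hits;
}

console.log(handleRequest()); // each worker starts its own count at 1
```

Sessions, caches, and counters therefore belong in Redis or the database once cluster mode is on.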
Connection Pooling
// db.js — pool connections for efficiency
const { Pool } = require("pg");

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,                       // Maximum connections in the pool
  idleTimeoutMillis: 30000,      // Close clients idle for 30 seconds
  connectionTimeoutMillis: 2000  // Fail fast when no connection is free
});

module.exports = pool;
Monitoring
PM2 Monitoring
# Real-time dashboard
pm2 monit
# Process details
pm2 show myapp
# Memory and CPU usage
pm2 status
System Monitoring with htop
sudo apt install htop -y
htop
Log Rotation
# PM2 log rotation
pm2 install pm2-logrotate
pm2 set pm2-logrotate:max_size 10M
pm2 set pm2-logrotate:retain 7
pm2 set pm2-logrotate:compress true
Simple Uptime Monitor
// monitor.js — run as a separate PM2 process
const http = require("http");

function checkHealth() {
  http.get("http://localhost:3000/health", function (res) {
    res.resume(); // Drain the response body so the socket is released
    if (res.statusCode !== 200) {
      console.error("Health check failed: status " + res.statusCode);
      // Alert via webhook, email, etc.
    }
  }).on("error", function (err) {
    console.error("Health check error: " + err.message);
    // Alert the team
  });
}

setInterval(checkHealth, 60000); // Check every minute
Deployment Automation
Deploy Script
#!/bin/bash
# deploy.sh — run on the server
set -e
APP_DIR=/var/www/myapp
echo "Pulling latest code..."
cd $APP_DIR
git pull origin main
echo "Installing dependencies..."
npm install --omit=dev
echo "Running migrations..."
npm run db:migrate
echo "Restarting application..."
pm2 reload myapp
echo "Deployment complete"
Git Hook Deployment
On the server, create a bare Git repository with a post-receive hook:
# On the server
mkdir -p /var/repo/myapp.git
cd /var/repo/myapp.git
git init --bare
# /var/repo/myapp.git/hooks/post-receive
#!/bin/bash
APP_DIR=/var/www/myapp
git --work-tree=$APP_DIR --git-dir=/var/repo/myapp.git checkout -f
cd $APP_DIR
npm install --omit=dev
pm2 reload myapp
chmod +x /var/repo/myapp.git/hooks/post-receive
From your local machine:
git remote add production deploy@YOUR_IP:/var/repo/myapp.git
git push production main
Backups
Automated Droplet Backups
Enable weekly backups in the Droplet settings; they are priced at 20% of the Droplet cost (about $1.20/month for the $6 Droplet).
Database Backups
#!/bin/bash
# backup-db.sh
BACKUP_DIR=/var/backups/postgres
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p $BACKUP_DIR
pg_dump myapp > $BACKUP_DIR/myapp_$DATE.sql
gzip $BACKUP_DIR/myapp_$DATE.sql
# Keep only last 7 days
find $BACKUP_DIR -type f -mtime +7 -delete
# Run daily at 3 AM: open the crontab with `crontab -e` and add this line
0 3 * * * /var/www/myapp/scripts/backup-db.sh
Common Issues and Troubleshooting
Application works locally but crashes on the Droplet
Missing environment variables or Node.js version mismatch:
Fix: Verify all required environment variables are set. Check the Node.js version matches your local version. Review PM2 error logs with pm2 logs myapp --err.
Nginx returns 502 Bad Gateway
Node.js is not running or is listening on the wrong port:
Fix: Check PM2 status with pm2 status. Verify the application is listening on the port Nginx is proxying to. Check Nginx error logs at /var/log/nginx/error.log.
SSL certificate fails to renew
Certbot renewal requires port 80 to be accessible:
Fix: Ensure port 80 is open in the firewall. Check that Nginx is running and the server block for port 80 exists. Test renewal with sudo certbot renew --dry-run.
Server runs out of memory
Node.js or PM2 processes consume too much RAM:
Fix: Set max_memory_restart in PM2 to restart processes that leak memory. Reduce the number of cluster instances. Enable swap space as a safety net: sudo fallocate -l 1G /swapfile && sudo chmod 600 /swapfile && sudo mkswap /swapfile && sudo swapon /swapfile. To keep the swap across reboots, add /swapfile none swap sw 0 0 to /etc/fstab.
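To see where the memory is actually going before the server reaches that point, the app can report its own usage; process.memoryUsage() is the built-in starting point:

```javascript
// rss is the total resident memory of this process (roughly the figure
// `pm2 status` reports); heapUsed is the live JavaScript heap.
const { heapUsed, rss } = process.memoryUsage();

console.log("heapUsed: " + Math.round(heapUsed / 1048576) + " MB");
console.log("rss:      " + Math.round(rss / 1048576) + " MB");
```

Logging these numbers every few minutes makes a slow leak visible long before max_memory_restart fires.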
Best Practices
- Never run Node.js as root. Create a dedicated user for your application. Use sudo only for system-level tasks.
- Use PM2 in cluster mode. Multi-core Droplets should run multiple Node.js processes. PM2 cluster mode maximizes CPU utilization without code changes.
- Let Nginx serve static files. Nginx serves static assets orders of magnitude faster than Express. Configure a separate location block for static files.
- Enable gzip compression in Nginx. Compressing JSON responses and HTML reduces bandwidth and improves response times significantly.
- Set up automated deployments. Manual SSH-and-pull deployments are error-prone. Use Git hooks, GitHub Actions, or a deployment script that handles the full process.
- Monitor memory and restart on leaks. Node.js applications can leak memory. PM2's max_memory_restart catches leaks before they crash the server.
- Back up your database daily. Droplet backups are weekly and capture the whole server. Database-specific backups are faster to restore and can run daily.
- Keep the system updated. Enable unattended security updates. Schedule regular maintenance windows for major upgrades.