Docker Compose for Local Development Environments
A step-by-step guide to using Docker Compose for Node.js local development, covering multi-service setups with PostgreSQL, MongoDB, Redis, hot reload, health checks, and production parity.
Overview
Docker Compose lets you define and run multi-container applications from a single YAML file, turning complex local development setups into a single docker compose up command. For Node.js projects that depend on PostgreSQL, MongoDB, Redis, or any other service, Compose eliminates the "works on my machine" problem by giving every developer an identical, isolated environment that starts in seconds. If you are still asking teammates to install and configure databases locally, or maintaining a shared dev server that everyone fights over, Compose solves that.
Prerequisites
- Docker Desktop installed (Mac, Windows, or Linux) — Docker Compose v2 is included
- Basic familiarity with Docker concepts (images, containers, volumes)
- Node.js and npm installed locally (for editing code; the app runs inside the container)
- A terminal you are comfortable with
Why Docker Compose for Local Development
Consistency Across the Team
Every developer on your team gets the same PostgreSQL 16, the same Redis 7, the same Node.js 20 — regardless of whether they are on Mac, Windows, or Linux. No more debugging issues that only happen on one person's machine because they are running PostgreSQL 14 while everyone else is on 16. The docker-compose.yml file is checked into source control, and it is the single source of truth for what your development environment looks like.
Onboarding in Minutes
I have worked on projects where onboarding a new developer took two full days. Install PostgreSQL, configure it, create the databases, run migrations, install Redis, install MongoDB, configure environment variables, hope nothing conflicts with another project. With Compose, onboarding looks like this:
git clone https://github.com/yourorg/yourproject.git
cd yourproject
cp .env.example .env
docker compose up
That is it. The new developer has a fully running application with all services in about 60 seconds.
Isolation Between Projects
Without containers, running two Node.js projects that need different PostgreSQL versions, or two projects that both want port 5432, is a constant headache. Compose gives each project its own isolated network and services. Project A runs PostgreSQL 14 on its internal network. Project B runs PostgreSQL 16 on its own. They never see each other.
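For example, two hypothetical projects can pin different PostgreSQL versions side by side. Publishing different host ports (or no ports at all) avoids the collision; these mappings are an illustrative sketch, not taken from a real project:
# project-a/docker-compose.yml (sketch)
services:
  db:
    image: postgres:14-alpine
    ports:
      - "5432:5432"

# project-b/docker-compose.yml (sketch)
services:
  db:
    image: postgres:16-alpine
    ports:
      - "5433:5432"  # different host port, same container port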
Compose File Basics
A docker-compose.yml file defines three main things: services (the containers you want to run), networks (how those containers communicate), and volumes (where data persists).
services:
app:
build: .
ports:
- "3000:3000"
depends_on:
- db
db:
image: postgres:16-alpine
volumes:
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
Each key under services becomes a container. Compose automatically creates a default network and adds all services to it, so app can reach db by hostname. The volumes section at the bottom declares named volumes that persist data across container restarts.
A few things to know about the compose file:
- build: . — tells Compose to build from the Dockerfile in the current directory.
- image: postgres:16-alpine — tells Compose to pull an existing image from Docker Hub.
- ports: "3000:3000" — maps a host port to a container port. The format is host:container.
- depends_on — controls startup order (but does not wait for readiness — more on that later).
Building a Node.js App Service with Hot Reload
The whole point of running your app in Docker during development is to get the isolation and consistency benefits without sacrificing the fast feedback loop you get from editing code locally. The key is bind mounts combined with nodemon.
A bind mount maps a directory on your host machine directly into the container. When you edit a file on your host, the change is immediately visible inside the container. Nodemon watches for those changes and restarts your Node.js process.
Here is the Dockerfile for development:
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npx", "nodemon", "app.js"]
And the compose file:
services:
app:
build: .
ports:
- "3000:3000"
volumes:
- ./:/app
- /app/node_modules
environment:
- NODE_ENV=development
command: npx nodemon app.js
There are two volume entries here, and both matter:
- ./:/app — Bind mount. Maps your entire project directory into /app in the container. Code changes on your host are immediately reflected.
- /app/node_modules — Anonymous volume. This prevents your host's node_modules from overwriting the container's node_modules. The container installed dependencies with npm ci during the build, and those are the correct versions for the container's OS (Alpine Linux). Your host's node_modules might have been built for Mac or Windows and would cause crashes.
This is a pattern you will use on every project. The anonymous volume for node_modules is non-negotiable.
Adding PostgreSQL as a Service
PostgreSQL is the most common database I see in production Node.js applications, and Compose makes it trivial to run locally.
services:
db:
image: postgres:16-alpine
ports:
- "5432:5432"
environment:
POSTGRES_USER: devuser
POSTGRES_PASSWORD: devpassword
POSTGRES_DB: myapp_dev
volumes:
- pgdata:/var/lib/postgresql/data
- ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
volumes:
pgdata:
Key details:
- POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB — These environment variables are read by the official PostgreSQL image on first startup to create the user and database automatically.
- pgdata volume — Named volume that persists your database data across docker compose down and docker compose up cycles. Without this, you lose all data every time.
- init.sql mount — Any .sql files placed in /docker-entrypoint-initdb.d/ are executed on first initialization. This is where you put your schema creation or seed data.
From your Node.js application, connect using the service name as the hostname:
var { Pool } = require('pg');
var pool = new Pool({
host: process.env.DB_HOST || 'db',
port: parseInt(process.env.DB_PORT) || 5432,
user: process.env.DB_USER || 'devuser',
password: process.env.DB_PASSWORD || 'devpassword',
database: process.env.DB_NAME || 'myapp_dev'
});
pool.query('SELECT NOW()', function(err, result) {
if (err) {
console.error('Database connection failed:', err.message);
return;
}
console.log('Connected to PostgreSQL at:', result.rows[0].now);
});
Notice that the hostname is db — the service name from the compose file. Docker's internal DNS resolves this to the PostgreSQL container's IP address on the shared network. You never use localhost when connecting between containers.
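If you want to see that resolution in action, one quick check is to resolve the service name from inside the app container (a sketch, assuming the service names app and db from the compose file above):
# Resolve the db service name from inside the app container
docker compose exec app node -e "require('dns').lookup('db', function(err, addr) { console.log(addr); })"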
Adding MongoDB as a Service
services:
mongo:
image: mongo:7
ports:
- "27017:27017"
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: rootpassword
MONGO_INITDB_DATABASE: myapp_dev
volumes:
- mongodata:/data/db
volumes:
mongodata:
Connecting from Node.js:
var mongoose = require('mongoose');
var mongoUri = process.env.MONGO_URI || 'mongodb://root:rootpassword@mongo:27017/myapp_dev?authSource=admin';
// Mongoose 7 removed callback support, so handle the returned promise instead
mongoose.connect(mongoUri).then(function() {
  console.log('Connected to MongoDB');
}).catch(function(err) {
  console.error('MongoDB connection error:', err.message);
});
Again, the hostname is the service name: mongo. The authSource=admin query parameter is required when using root credentials, because the root user is created in the admin database.
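To confirm the credentials and authSource behave as expected, you can ping the server through mongosh inside the container. This sketch uses the dev credentials from the compose file above:
# Authenticate as root against the admin database and ping
docker compose exec mongo mongosh -u root -p rootpassword --authenticationDatabase admin --quiet --eval 'db.runCommand({ ping: 1 })'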
Adding Redis for Caching
Redis is the easiest service to add. It requires almost no configuration for development.
services:
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redisdata:/data
command: redis-server --appendonly yes
volumes:
redisdata:
The --appendonly yes flag enables persistence so your Redis data survives restarts. In development, this means your cached data and session stores stick around.
Connecting from Node.js:
var Redis = require('ioredis');
var redis = new Redis({
host: process.env.REDIS_HOST || 'redis',
port: parseInt(process.env.REDIS_PORT) || 6379
});
redis.on('connect', function() {
console.log('Connected to Redis');
});
redis.on('error', function(err) {
console.error('Redis connection error:', err.message);
});
// Simple caching pattern
function getCachedData(key, fetchFunction, ttl, callback) {
redis.get(key, function(err, cached) {
if (cached) {
return callback(null, JSON.parse(cached));
}
fetchFunction(function(err, data) {
if (err) return callback(err);
redis.setex(key, ttl, JSON.stringify(data));
callback(null, data);
});
});
}
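Wired into a route, the pattern looks like this (a minimal sketch; fetchUsersFromDb is a hypothetical helper, not part of the code above):
// Hypothetical helper that loads users from PostgreSQL
// (pool is the pg Pool from the PostgreSQL section)
function fetchUsersFromDb(callback) {
  pool.query('SELECT id, name FROM users', function(err, result) {
    if (err) return callback(err);
    callback(null, result.rows);
  });
}

// Serve from Redis when possible, fall back to the database, cache for 60 seconds
getCachedData('users:all', fetchUsersFromDb, 60, function(err, users) {
  if (err) return console.error('Failed to load users:', err.message);
  console.log('Loaded ' + users.length + ' users');
});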
Environment Variables and .env Files
Hardcoding passwords in your compose file works for getting started, but you should pull sensitive values into a .env file. Compose automatically reads a .env file in the same directory as the compose file.
Create a .env file:
# .env (gitignored - never commit this)
POSTGRES_USER=devuser
POSTGRES_PASSWORD=devpassword
POSTGRES_DB=myapp_dev
MONGO_ROOT_USER=root
MONGO_ROOT_PASSWORD=rootpassword
REDIS_PORT=6379
NODE_ENV=development
PORT=3000
Reference the variables in your compose file:
services:
app:
build: .
ports:
- "${PORT}:${PORT}"
environment:
- NODE_ENV=${NODE_ENV}
- DB_HOST=db
- DB_USER=${POSTGRES_USER}
- DB_PASSWORD=${POSTGRES_PASSWORD}
- DB_NAME=${POSTGRES_DB}
- REDIS_HOST=redis
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
Create a .env.example that you do check into source control:
# .env.example (committed to source control)
POSTGRES_USER=devuser
POSTGRES_PASSWORD=devpassword
POSTGRES_DB=myapp_dev
MONGO_ROOT_USER=root
MONGO_ROOT_PASSWORD=changeme
REDIS_PORT=6379
NODE_ENV=development
PORT=3000
Add .env to your .gitignore. This way, every developer copies .env.example to .env and can customize values without affecting anyone else.
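A minimal .gitignore excerpt for this setup might look like:
# .gitignore (excerpt)
.env
node_modules/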
Health Checks and Dependency Ordering
The depends_on directive controls startup order, but by default it only waits for the container to start — not for the service inside to be ready. PostgreSQL takes a few seconds to initialize, and if your Node.js app tries to connect immediately, it will crash.
The fix is depends_on with a condition combined with health checks:
services:
app:
build: .
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
mongo:
condition: service_healthy
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: devuser
POSTGRES_PASSWORD: devpassword
POSTGRES_DB: myapp_dev
healthcheck:
test: ["CMD-SHELL", "pg_isready -U devuser -d myapp_dev"]
interval: 5s
timeout: 5s
retries: 5
start_period: 10s
mongo:
image: mongo:7
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongosh --quiet
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
redis:
image: redis:7-alpine
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 3s
retries: 5
With condition: service_healthy, Compose waits until the health check passes before starting the dependent service. Your Node.js app will not start until PostgreSQL, MongoDB, and Redis are all genuinely ready to accept connections.
The start_period is important for databases — it gives them time to initialize before health check failures count against the retry limit.
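You can watch the health state yourself: docker compose ps includes a status column, and docker inspect exposes the raw health status (this sketch assumes Compose's default project-service-index container naming):
# Show service status, including health
docker compose ps
# Raw health status for the database container
docker inspect --format '{{.State.Health.Status}}' myapp-db-1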
Named Volumes for Data Persistence
Named volumes persist data across docker compose down and docker compose up cycles. They are managed by Docker and stored in Docker's internal storage.
volumes:
pgdata:
driver: local
mongodata:
driver: local
redisdata:
driver: local
To see your volumes:
$ docker volume ls
DRIVER VOLUME NAME
local myproject_pgdata
local myproject_mongodata
local myproject_redisdata
To completely reset a database and start fresh:
docker compose down -v
The -v flag removes named volumes. Without it, docker compose down stops and removes containers but preserves your data. This is an important distinction — running down -v when you did not mean to will wipe your local database.
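If you want a safety net before a reset, one common pattern is to tar the volume's contents from a throwaway container. A sketch; substitute your actual volume name from docker volume ls:
# Snapshot the pgdata volume into the current directory before wiping it
docker run --rm -v myproject_pgdata:/data -v "$PWD":/backup alpine tar czf /backup/pgdata-backup.tar.gz -C /data .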
Custom Networks for Service Isolation
Compose creates a default network for all services in a compose file, but you can define custom networks to isolate groups of services.
services:
app:
networks:
- frontend
- backend
db:
networks:
- backend
redis:
networks:
- backend
nginx:
networks:
- frontend
networks:
frontend:
backend:
In this setup, nginx can talk to app (both on frontend), and app can talk to db and redis (all on backend). But nginx cannot directly reach db or redis — they are not on the same network. This mirrors a real production topology where your reverse proxy should never have direct database access.
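You can verify the isolation from inside the containers. A sketch, assuming the images involved provide getent (the Alpine and Debian bases used elsewhere in this guide do):
# db resolves from app (both are on the backend network)
docker compose exec app getent hosts db
# ...but not from nginx, which only joins the frontend network
docker compose exec nginx getent hosts db || echo "db is not reachable from nginx"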
Development vs Production Compose Files
Docker Compose supports override files. When you run docker compose up, Compose automatically merges docker-compose.yml with docker-compose.override.yml if it exists. This is the standard pattern for separating development and production concerns.
docker-compose.yml — Base configuration shared across all environments:
services:
app:
build: .
environment:
- DB_HOST=db
- REDIS_HOST=redis
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 3s
retries: 5
docker-compose.override.yml — Development-specific overrides (auto-merged):
services:
app:
ports:
- "3000:3000"
- "9229:9229"
volumes:
- ./:/app
- /app/node_modules
environment:
- NODE_ENV=development
command: npx nodemon --inspect=0.0.0.0:9229 app.js
db:
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data
redis:
ports:
- "6379:6379"
volumes:
pgdata:
docker-compose.prod.yml — Production overrides (explicitly specified):
services:
app:
ports:
- "3000:3000"
environment:
- NODE_ENV=production
restart: unless-stopped
deploy:
resources:
limits:
memory: 512M
db:
volumes:
- pgdata:/var/lib/postgresql/data
restart: unless-stopped
redis:
restart: unless-stopped
volumes:
pgdata:
Usage:
# Development (auto-merges docker-compose.override.yml)
docker compose up
# Production (explicitly specify the prod file)
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
This pattern keeps your base compose file clean and environment-specific concerns separated.
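If you are ever unsure what a merge produced, docker compose config prints the fully resolved configuration after all files and environment variables are applied:
# Development view (base + override)
docker compose config
# Production view
docker compose -f docker-compose.yml -f docker-compose.prod.yml config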
Debugging Inside Containers
Executing Commands
Use docker compose exec to run commands inside a running container:
# Open a shell in the app container
docker compose exec app sh
# Run a one-off Node.js script
docker compose exec app node scripts/seed.js
# Connect to PostgreSQL directly
docker compose exec db psql -U devuser -d myapp_dev
# Check Redis keys
docker compose exec redis redis-cli KEYS '*'
Viewing Logs
# All services
docker compose logs
# Specific service, follow mode
docker compose logs -f app
# Last 100 lines from all services
docker compose logs --tail=100
# Timestamps
docker compose logs -t app
Node.js Debugging with Inspector
The compose file exposes port 9229 and starts nodemon with --inspect=0.0.0.0:9229. You can attach VS Code's debugger by adding this launch configuration:
{
"name": "Docker: Attach to Node",
"type": "node",
"request": "attach",
"port": 9229,
"address": "localhost",
"localRoot": "${workspaceFolder}",
"remoteRoot": "/app",
"restart": true
}
The restart: true setting tells VS Code to automatically reconnect when nodemon restarts the process.
Performance Tips
Volume Performance on Mac and Windows
Docker on Mac and Windows runs containers inside a Linux VM. File system access between the host and the VM goes through a virtualization layer, which introduces latency. On large Node.js projects, this can make npm install or file-watching noticeably slow.
There are several strategies to mitigate this:
1. Use targeted bind mounts instead of mounting the entire project:
volumes:
- ./src:/app/src
- ./package.json:/app/package.json
- /app/node_modules
Only mount what changes. Do not mount node_modules, build output, or anything that the container writes to heavily.
2. Use Docker's VirtioFS file sharing (Mac):
In Docker Desktop settings, go to General and select VirtioFS as the file sharing implementation. This is significantly faster than the older gRPC-FUSE and osxfs options. On a project with 2,000 source files, I measured a 3x improvement in nodemon restart times after switching.
3. Use WSL 2 backend on Windows:
If you are on Windows, make sure Docker Desktop is configured to use the WSL 2 backend, and keep your project files inside the WSL 2 filesystem (/home/user/projects/) rather than on a Windows mount (/mnt/c/Users/...). Accessing files across the Windows/WSL boundary is dramatically slower.
# Slow: project on Windows filesystem accessed through WSL
cd /mnt/c/Users/shane/projects/myapp
# Fast: project inside WSL filesystem
cd ~/projects/myapp
The difference can be 5-10x in file I/O operations.
Common Compose Commands Cheat Sheet
# Start all services (foreground)
docker compose up
# Start all services (detached/background)
docker compose up -d
# Start and rebuild images
docker compose up --build
# Stop all services
docker compose down
# Stop and remove volumes (resets databases)
docker compose down -v
# Restart a single service
docker compose restart app
# View running services
docker compose ps
# View logs
docker compose logs -f app
# Execute a command in a running container
docker compose exec app sh
# Run a one-off command (starts a new container)
docker compose run --rm app npm test
# Rebuild a single service without cache
docker compose build --no-cache app
# Pull latest images for all services
docker compose pull
# View resource usage
docker compose top
The difference between exec and run matters: exec runs a command inside an already-running container. run starts a new container for the command. Use exec for interactive debugging. Use run for one-off tasks like running tests or migrations.
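For example, a one-off migration run (a sketch, assuming a hypothetical migrate script in package.json):
# Runs in a fresh container that is removed when the command exits
docker compose run --rm app npm run migrate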
Complete Working Example
Here is a full, copy-paste-ready setup for a Node.js Express application with PostgreSQL, MongoDB, and Redis. This is the kind of setup I use on real projects.
Project Structure
myapp/
├── app.js
├── package.json
├── Dockerfile
├── docker-compose.yml
├── docker-compose.override.yml
├── .dockerignore
├── .env.example
├── .gitignore
├── scripts/
│ └── wait-for-services.js
├── db/
│ └── init.sql
└── routes/
└── index.js
app.js
var express = require('express');
var { Pool } = require('pg');
var mongoose = require('mongoose');
var Redis = require('ioredis');
var app = express();
var port = process.env.PORT || 3000;
// PostgreSQL connection
var pgPool = new Pool({
host: process.env.DB_HOST || 'db',
port: parseInt(process.env.DB_PORT) || 5432,
user: process.env.DB_USER || 'devuser',
password: process.env.DB_PASSWORD || 'devpassword',
database: process.env.DB_NAME || 'myapp_dev'
});
// MongoDB connection
var mongoUri = process.env.MONGO_URI || 'mongodb://root:rootpassword@mongo:27017/myapp_dev?authSource=admin';
// Mongoose 7 returns a promise; catch connection errors so they do not crash the process
mongoose.connect(mongoUri).catch(function(err) {
  console.error('MongoDB connection error:', err.message);
});
// Redis connection
var redis = new Redis({
host: process.env.REDIS_HOST || 'redis',
port: parseInt(process.env.REDIS_PORT) || 6379
});
app.use(express.json());
// Health check endpoint
app.get('/health', function(req, res) {
var checks = {
uptime: process.uptime(),
timestamp: Date.now(),
postgres: 'checking',
mongo: 'checking',
redis: 'checking'
};
pgPool.query('SELECT 1', function(pgErr) {
checks.postgres = pgErr ? 'unhealthy' : 'healthy';
redis.ping(function(redisErr) {
checks.redis = redisErr ? 'unhealthy' : 'healthy';
var mongoState = mongoose.connection.readyState;
checks.mongo = mongoState === 1 ? 'healthy' : 'unhealthy';
var allHealthy = checks.postgres === 'healthy' &&
checks.mongo === 'healthy' &&
checks.redis === 'healthy';
res.status(allHealthy ? 200 : 503).json(checks);
});
});
});
// Example route with Redis caching
app.get('/api/users', function(req, res) {
var cacheKey = 'users:all';
redis.get(cacheKey, function(err, cached) {
if (cached) {
console.log('Cache hit for', cacheKey);
return res.json(JSON.parse(cached));
}
pgPool.query('SELECT id, name, email FROM users ORDER BY id', function(err, result) {
if (err) {
console.error('Query error:', err.message);
return res.status(500).json({ error: 'Database error' });
}
// Cache for 60 seconds
redis.setex(cacheKey, 60, JSON.stringify(result.rows));
console.log('Cache miss for', cacheKey, '- fetched from database');
res.json(result.rows);
});
});
});
app.get('/', function(req, res) {
res.json({
message: 'API is running',
environment: process.env.NODE_ENV || 'development'
});
});
var server = app.listen(port, function() {
console.log('Server running on port ' + port);
console.log('Environment: ' + (process.env.NODE_ENV || 'development'));
});
// Graceful shutdown
process.on('SIGTERM', function() {
console.log('SIGTERM received. Shutting down gracefully...');
server.close(function() {
pgPool.end();
mongoose.disconnect();
redis.disconnect();
console.log('All connections closed');
process.exit(0);
});
});
package.json
{
"name": "myapp",
"version": "1.0.0",
"description": "Express app with PostgreSQL, MongoDB, and Redis",
"main": "app.js",
"scripts": {
"start": "node app.js",
"dev": "nodemon app.js",
"test": "jest"
},
"dependencies": {
"express": "^4.18.2",
"ioredis": "^5.3.2",
"mongoose": "^7.6.0",
"pg": "^8.11.3"
},
"devDependencies": {
"jest": "^29.7.0",
"nodemon": "^3.0.2"
}
}
Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
docker-compose.yml
services:
app:
build: .
environment:
- DB_HOST=db
- DB_PORT=5432
- DB_USER=${POSTGRES_USER:-devuser}
- DB_PASSWORD=${POSTGRES_PASSWORD:-devpassword}
- DB_NAME=${POSTGRES_DB:-myapp_dev}
- MONGO_URI=mongodb://${MONGO_ROOT_USER:-root}:${MONGO_ROOT_PASSWORD:-rootpassword}@mongo:27017/myapp_dev?authSource=admin
- REDIS_HOST=redis
- REDIS_PORT=6379
depends_on:
db:
condition: service_healthy
mongo:
condition: service_healthy
redis:
condition: service_healthy
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: ${POSTGRES_USER:-devuser}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-devpassword}
POSTGRES_DB: ${POSTGRES_DB:-myapp_dev}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-devuser} -d ${POSTGRES_DB:-myapp_dev}"]
interval: 5s
timeout: 5s
retries: 5
start_period: 10s
mongo:
image: mongo:7
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USER:-root}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD:-rootpassword}
MONGO_INITDB_DATABASE: myapp_dev
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongosh --quiet
interval: 10s
timeout: 5s
retries: 5
start_period: 15s
redis:
image: redis:7-alpine
command: redis-server --appendonly yes
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 3s
retries: 5
volumes:
pgdata:
mongodata:
redisdata:
docker-compose.override.yml
services:
app:
ports:
- "3000:3000"
- "9229:9229"
volumes:
- ./:/app
- /app/node_modules
environment:
- NODE_ENV=development
command: npx nodemon --inspect=0.0.0.0:9229 app.js
db:
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data
- ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
mongo:
ports:
- "27017:27017"
volumes:
- mongodata:/data/db
redis:
ports:
- "6379:6379"
volumes:
- redisdata:/data
.dockerignore
node_modules
.git
.gitignore
.env
.env.*
!.env.example
coverage/
__tests__/
*.test.js
*.spec.js
jest.config.*
.eslintrc*
.prettierrc*
Dockerfile*
docker-compose*
.dockerignore
README.md
CHANGELOG.md
LICENSE
docs/
.DS_Store
Thumbs.db
.vscode
.idea
.env.example
# Database - PostgreSQL
POSTGRES_USER=devuser
POSTGRES_PASSWORD=devpassword
POSTGRES_DB=myapp_dev
# Database - MongoDB
MONGO_ROOT_USER=root
MONGO_ROOT_PASSWORD=rootpassword
# Redis
REDIS_PORT=6379
# Application
NODE_ENV=development
PORT=3000
db/init.sql
-- Create tables on first PostgreSQL initialization
CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NOT NULL,
email VARCHAR(255) UNIQUE NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Seed some development data
INSERT INTO users (name, email) VALUES
('Alice Johnson', '[email protected]'),
('Bob Smith', '[email protected]'),
('Charlie Brown', '[email protected]')
ON CONFLICT (email) DO NOTHING;
scripts/wait-for-services.js
var http = require('http');
var maxRetries = 30;
var retryInterval = 2000;
var healthUrl = 'http://localhost:3000/health';
function checkHealth(attempt) {
if (attempt > maxRetries) {
console.error('Services did not become healthy after ' + maxRetries + ' attempts');
process.exit(1);
}
var req = http.get(healthUrl, function(res) {
var body = '';
res.on('data', function(chunk) { body += chunk; });
res.on('end', function() {
if (res.statusCode === 200) {
console.log('All services healthy:', body);
process.exit(0);
} else {
console.log('Attempt ' + attempt + ': Not all services ready yet');
setTimeout(function() { checkHealth(attempt + 1); }, retryInterval);
}
});
});
req.on('error', function() {
console.log('Attempt ' + attempt + ': App not reachable yet');
setTimeout(function() { checkHealth(attempt + 1); }, retryInterval);
});
}
console.log('Waiting for all services to be healthy...');
checkHealth(1);
Running the Complete Example
# First time setup
cp .env.example .env
docker compose up --build
# Expected output:
# [+] Running 4/4
# ✔ Container myapp-redis-1 Healthy
# ✔ Container myapp-db-1 Healthy
# ✔ Container myapp-mongo-1 Healthy
# ✔ Container myapp-app-1 Started
# Server running on port 3000
# Environment: development
# Test the health endpoint
curl http://localhost:3000/health
# {"uptime":5.23,"timestamp":1706900000000,"postgres":"healthy","mongo":"healthy","redis":"healthy"}
# Test the users endpoint (first call - cache miss)
curl http://localhost:3000/api/users
# [{"id":1,"name":"Alice Johnson","email":"[email protected]"},...]
# Second call - cache hit (served from Redis)
curl http://localhost:3000/api/users
# Same response, but from cache
Common Issues & Troubleshooting
1. Port Already in Use
Error response from daemon: driver failed programming external connectivity on endpoint myapp-db-1:
Bind for 0.0.0.0:5432 failed: port is already allocated
Cause: Another process (or another Docker Compose project) is already using port 5432 on your host.
Fix: Either stop the conflicting process, or change the host port mapping:
ports:
- "5433:5432" # Map to a different host port
Find what is using the port:
# Mac/Linux
lsof -i :5432
# Windows
netstat -ano | findstr :5432
2. Container Cannot Connect to Database on Startup
Error: connect ECONNREFUSED 172.18.0.2:5432
Cause: The Node.js app started before PostgreSQL finished initializing. You are using depends_on without condition: service_healthy.
Fix: Add health checks to your database services and use condition: service_healthy:
depends_on:
db:
condition: service_healthy
Without the health check condition, depends_on only waits for the container to start, not for PostgreSQL to be ready to accept connections.
3. File Changes Not Detected Inside Container
# You edit app.js but nodemon does not restart
Cause: On some systems (especially WSL 2 with files on Windows mounts), file system events do not propagate into the container. Nodemon relies on inotify events by default.
Fix: Configure nodemon to use polling instead of native file watching:
command: npx nodemon --legacy-watch --polling-interval 1000 app.js
Or create a nodemon.json in your project root:
{
"watch": ["app.js", "routes/", "models/"],
"ext": "js,json",
"legacyWatch": true,
"pollingInterval": 1000
}
Polling uses more CPU than native watching, but it works reliably across all platforms.
4. Permission Denied on Bind-Mounted Volume
Error: EACCES: permission denied, open '/app/logs/app.log'
Cause: The Node.js process inside the container runs as node (UID 1000) but the bind-mounted directory on your host is owned by a different user, or the container process is trying to write to a directory that was created by root during the image build.
Fix: Ensure the user inside the container owns the directories it needs to write to:
RUN mkdir -p /app/logs && chown -R node:node /app/logs
USER node
Or in the compose file, explicitly set the user:
services:
app:
user: "${UID:-1000}:${GID:-1000}"
5. Named Volume Contains Stale Data After Schema Change
ERROR: relation "users" already exists with different schema
Cause: PostgreSQL only runs init scripts on the first initialization. If you change init.sql after the volume has been created, PostgreSQL ignores the new version because the data directory already exists.
Fix: Remove the volume and let PostgreSQL reinitialize:
docker compose down -v
docker compose up
The -v flag removes all named volumes. PostgreSQL will see an empty data directory and run your init scripts again.
6. Out of Disk Space from Unused Images and Volumes
Error: no space left on device
Cause: Over time, Docker accumulates stopped containers, unused images, and dangling volumes. On a busy development machine, this can consume tens of gigabytes.
Fix: Run a cleanup:
# Remove stopped containers, unused networks, dangling images, and build cache
docker system prune
# Also remove unused volumes (be careful - this deletes data)
docker system prune --volumes
# See what is using disk space
docker system df
Best Practices
- Always use named volumes for database data. Anonymous volumes are harder to identify and manage. Named volumes make it obvious what data belongs to which service, and they survive docker compose down without the -v flag.
- Never expose database ports in production compose files. In development, exposing PostgreSQL on port 5432 is convenient for connecting with GUI tools. In production, databases should only be reachable from your application containers via the internal Docker network.
- Use .env.example for documentation, .env for actual values. Check .env.example into source control with safe defaults. Add .env to .gitignore. This way, new developers know exactly what environment variables are needed, and nobody accidentally commits real credentials.
- Pin image versions, do not use latest. Use postgres:16-alpine, not postgres:latest. When a new PostgreSQL major version is released, latest will update silently and potentially break your application. Pinning versions means upgrades are deliberate and tested.
- Keep development and production compose files separate. The override file pattern (docker-compose.override.yml) keeps development concerns like port exposure, bind mounts, and debug ports out of your base configuration. Production never needs nodemon, exposed database ports, or bind-mounted source code.
- Add health checks to every service. Health checks cost almost nothing in terms of performance, but they prevent the single most common Docker Compose frustration: application containers crashing because they started before their dependencies were ready.
- Use docker compose down instead of Ctrl+C for a clean shutdown. Ctrl+C sends SIGTERM to the foreground process, but does not always clean up networks and orphaned containers. docker compose down is deterministic and thorough.
- Separate your node_modules with an anonymous volume. Always include a bare - /app/node_modules entry in your volume mounts when bind-mounting your project directory. Without this, your host's node_modules (built for Mac or Windows) overwrites the container's node_modules (built for Alpine Linux), causing native module crashes.
- Use docker compose run --rm for one-off tasks. Running migrations, tests, or seed scripts should use run instead of exec. The --rm flag automatically removes the container when the command finishes, preventing container accumulation.
- Profile your volume performance. If your Node.js app feels sluggish inside Docker on Mac or Windows, the file system is almost certainly the bottleneck. Switch to VirtioFS on Mac, use the WSL 2 native filesystem on Windows, and mount only the directories you need.
