Real-World MCP Use Cases and Architectures
A practical guide to real-world MCP server use cases and architecture patterns, covering database servers, API gateways, DevOps automation, monitoring dashboards, and multi-server composition.
Overview
The Model Context Protocol gives AI models a standardized way to reach into your infrastructure -- databases, APIs, deployment pipelines, monitoring systems -- through well-defined tool interfaces. The protocol itself is simple, but the interesting part is figuring out what to expose and how to organize your servers once you move past toy examples. This article walks through seven real-world MCP server use cases I have built or seen deployed in production, then digs into the architecture patterns that emerge when you compose multiple servers together.
Prerequisites
- Node.js 18+ installed
- npm for package management
- Familiarity with the MCP SDK basics (tools, resources, transports)
- Understanding of JSON-RPC 2.0 fundamentals
- Working knowledge of PostgreSQL, REST APIs, or CI/CD pipelines (depending on which use case you are implementing)
- An MCP host like Claude Desktop or a custom MCP client for testing
Use Case 1: Database Query Server
This is the single most useful MCP server I have built. You expose your PostgreSQL (or MySQL, SQLite, whatever) database through MCP tools, and the AI model can answer questions about your data in natural language. The model generates SQL, your server executes it, and the results come back as structured data.
The critical design decision here is safety. You do not hand the model unrestricted DELETE and DROP TABLE access. You define read-only tools with parameterized queries, or at most a query tool that runs inside a read-only transaction.
// db-server.js
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { Pool } = require("pg");
var { z } = require("zod");
var pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 5
});
var server = new McpServer({
name: "postgres-query-server",
version: "1.0.0"
});
// Read-only query tool
server.tool(
"query",
"Execute a read-only SQL query against the database",
{
sql: z.string().describe("SQL SELECT query to execute"),
params: z.array(z.any()).optional().describe("Query parameters ($1, $2, etc.)")
},
function(args) {
return pool.connect().then(function(client) {
return client.query("BEGIN TRANSACTION READ ONLY")
.then(function() {
return client.query(args.sql, args.params || []);
})
.then(function(result) {
return client.query("ROLLBACK").then(function() {
client.release();
return {
content: [{
type: "text",
text: JSON.stringify({
rows: result.rows,
rowCount: result.rowCount,
fields: result.fields.map(function(f) { return f.name; })
}, null, 2)
}]
};
});
})
.catch(function(err) {
// Roll back before releasing so the connection is not returned to the pool in an aborted transaction state
return client.query("ROLLBACK").catch(function() {}).then(function() {
client.release();
return {
content: [{ type: "text", text: "Query error: " + err.message }],
isError: true
};
});
});
});
}
);
// Schema discovery tool
server.tool(
"list_tables",
"List all tables in the database with their columns and types",
{},
function() {
var sql = "SELECT table_name, column_name, data_type, is_nullable " +
"FROM information_schema.columns " +
"WHERE table_schema = 'public' " +
"ORDER BY table_name, ordinal_position";
return pool.query(sql).then(function(result) {
var tables = {};
result.rows.forEach(function(row) {
if (!tables[row.table_name]) {
tables[row.table_name] = [];
}
tables[row.table_name].push({
column: row.column_name,
type: row.data_type,
nullable: row.is_nullable === "YES"
});
});
return {
content: [{ type: "text", text: JSON.stringify(tables, null, 2) }]
};
});
}
);
var transport = new StdioServerTransport();
server.connect(transport);
The list_tables tool is essential. Without it, the model is guessing at your schema. With it, the model can inspect the database structure before generating queries, which dramatically improves accuracy.
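The natural companion is a describe_table tool that returns detail for a single table, so the model does not have to pull the whole schema every time. A minimal sketch, reusing the pool from above:
// describe_table -- single-table detail; complements list_tables
server.tool(
  "describe_table",
  "Show columns, types, and defaults for one table",
  { table_name: z.string().describe("Table name in the public schema") },
  function(args) {
    var sql = "SELECT column_name, data_type, column_default, is_nullable " +
      "FROM information_schema.columns " +
      "WHERE table_schema = 'public' AND table_name = $1 " +
      "ORDER BY ordinal_position";
    return pool.query(sql, [args.table_name]).then(function(result) {
      return {
        content: [{ type: "text", text: JSON.stringify(result.rows, null, 2) }]
      };
    });
  }
);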
Use Case 2: File System Server
A file system MCP server lets the model read, write, search, and manage files within a project directory. The key constraint is sandboxing -- you must restrict operations to an allowed directory tree. Never let the model reach outside the project root.
// fs-server.js (simplified core -- assumes the same McpServer and zod setup as db-server.js)
var path = require("path");
var fs = require("fs");
var PROJECT_ROOT = process.env.PROJECT_ROOT || process.cwd();
function validatePath(requestedPath) {
var resolved = path.resolve(PROJECT_ROOT, requestedPath);
// Require a trailing separator so a sibling like "/project-evil" cannot pass for "/project"
if (resolved !== PROJECT_ROOT && !resolved.startsWith(PROJECT_ROOT + path.sep)) {
throw new Error("Access denied: path is outside project root");
}
return resolved;
}
server.tool(
"read_file",
"Read a file's contents within the project directory",
{ path: z.string().describe("Relative path from project root") },
function(args) {
var fullPath = validatePath(args.path);
var content = fs.readFileSync(fullPath, "utf-8");
return {
content: [{ type: "text", text: content }]
};
}
);
server.tool(
"search_files",
"Search for files matching a glob pattern",
{ pattern: z.string().describe("Glob pattern like **/*.js") },
function(args) {
var glob = require("glob");
var matches = glob.sync(args.pattern, { cwd: PROJECT_ROOT, nodir: true });
return {
content: [{ type: "text", text: JSON.stringify(matches, null, 2) }]
};
}
);
I have seen teams go overboard with file system servers, exposing delete_file and write_file with no guardrails. At minimum, maintain an allowlist of file extensions that can be written, and log every write operation. In production environments, I run the file server with a dedicated OS user that only has write access to specific directories.
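As an illustration of those guardrails, here is a minimal write_file sketch. The extension allowlist is an assumption to adapt for your project, and it reuses validatePath and the server/z setup from above:
// write_file with guardrails -- allowlisted extensions plus an audit line on stderr
var WRITABLE_EXTENSIONS = [".md", ".txt", ".json", ".js"];
server.tool(
  "write_file",
  "Write a file within the project directory (allowlisted extensions only)",
  {
    path: z.string().describe("Relative path from project root"),
    content: z.string().describe("Full file contents to write")
  },
  function(args) {
    var fullPath = validatePath(args.path);
    if (WRITABLE_EXTENSIONS.indexOf(path.extname(fullPath).toLowerCase()) === -1) {
      return {
        content: [{ type: "text", text: "Write denied: extension not in allowlist" }],
        isError: true
      };
    }
    // Every write is logged to stderr so it never corrupts the stdio transport
    console.error(JSON.stringify({ op: "write_file", path: args.path, bytes: args.content.length }));
    fs.writeFileSync(fullPath, args.content, "utf-8");
    return { content: [{ type: "text", text: "Wrote " + args.content.length + " bytes to " + args.path }] };
  }
);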
Use Case 3: API Gateway Server
This pattern wraps one or more REST APIs as MCP tools. Instead of the model generating raw HTTP requests (which it will get wrong half the time), you define tools that map to specific API endpoints with validated parameters.
// api-gateway-server.js
var axios = require("axios");
var apiClient = axios.create({
baseURL: process.env.API_BASE_URL,
headers: { "Authorization": "Bearer " + process.env.API_TOKEN },
timeout: 10000
});
server.tool(
"get_customer",
"Look up a customer by ID or email address",
{
customer_id: z.string().optional().describe("Customer ID"),
email: z.string().email().optional().describe("Customer email address")
},
function(args) {
if (!args.customer_id && !args.email) {
return {
content: [{ type: "text", text: "Provide either customer_id or email." }],
isError: true
};
}
var endpoint = args.customer_id
? "/customers/" + args.customer_id
: "/customers?email=" + encodeURIComponent(args.email);
return apiClient.get(endpoint).then(function(response) {
return {
content: [{ type: "text", text: JSON.stringify(response.data, null, 2) }]
};
}).catch(function(err) {
return {
content: [{ type: "text", text: "API error: " + (err.response ? err.response.status + " " + JSON.stringify(err.response.data) : err.message) }],
isError: true
};
});
}
);
server.tool(
"list_orders",
"List recent orders for a customer",
{
customer_id: z.string().describe("Customer ID"),
status: z.enum(["pending", "shipped", "delivered", "cancelled"]).optional(),
limit: z.number().max(100).default(20)
},
function(args) {
var params = { limit: args.limit };
if (args.status) params.status = args.status;
return apiClient.get("/customers/" + args.customer_id + "/orders", { params: params })
.then(function(response) {
return {
content: [{ type: "text", text: JSON.stringify(response.data, null, 2) }]
};
});
}
);
The API gateway pattern is where MCP really shines. The model never sees the raw HTTP layer. It sees typed, documented tools with clear parameter schemas. Zod handles validation before the request ever hits your API. This eliminates an entire class of bugs where the model hallucinates query parameters or sends malformed payloads.
Use Case 4: Monitoring Dashboard Server
Expose application metrics, logs, and health checks through MCP tools. This turns your AI assistant into a first-line SRE that can diagnose issues by querying the same monitoring data your team uses.
// monitoring-server.js
var axios = require("axios");
var prometheusUrl = process.env.PROMETHEUS_URL || "http://localhost:9090";
server.tool(
"query_metrics",
"Execute a PromQL query against Prometheus",
{
query: z.string().describe("PromQL query expression"),
range: z.enum(["5m", "15m", "1h", "6h", "24h"]).default("15m")
},
function(args) {
var now = Math.floor(Date.now() / 1000);
var rangeSeconds = {
"5m": 300, "15m": 900, "1h": 3600, "6h": 21600, "24h": 86400
};
var start = now - rangeSeconds[args.range];
return axios.get(prometheusUrl + "/api/v1/query_range", {
params: {
query: args.query,
start: start,
end: now,
step: Math.max(Math.floor(rangeSeconds[args.range] / 100), 15)
}
}).then(function(response) {
return {
content: [{ type: "text", text: JSON.stringify(response.data.data, null, 2) }]
};
});
}
);
server.tool(
"get_recent_logs",
"Retrieve recent application logs filtered by level and service",
{
service: z.string().describe("Service name"),
level: z.enum(["error", "warn", "info", "debug"]).default("error"),
minutes: z.number().max(60).default(15)
},
function(args) {
var lokiUrl = process.env.LOKI_URL || "http://localhost:3100";
var query = '{service="' + args.service + '"} |= "' + args.level.toUpperCase() + '"';
// Loki expects nanosecond epochs; use BigInt because ns values exceed Number.MAX_SAFE_INTEGER
var end = BigInt(Date.now()) * 1000000n;
var start = end - BigInt(Math.round(args.minutes)) * 60n * 1000000000n;
return axios.get(lokiUrl + "/loki/api/v1/query_range", {
params: { query: query, start: start.toString(), end: end.toString(), limit: 100 }
}).then(function(response) {
var entries = [];
(response.data.data.result || []).forEach(function(stream) {
stream.values.forEach(function(v) {
entries.push({ timestamp: v[0], message: v[1] });
});
});
return {
content: [{ type: "text", text: JSON.stringify(entries.slice(0, 50), null, 2) }]
};
});
}
);
server.tool(
"health_check",
"Check the health status of all registered services",
{},
function() {
var services = JSON.parse(process.env.HEALTH_ENDPOINTS || "{}");
var checks = Object.keys(services).map(function(name) {
return axios.get(services[name], { timeout: 5000 })
.then(function(res) { return { service: name, status: "healthy", code: res.status }; })
.catch(function(err) { return { service: name, status: "unhealthy", error: err.message }; });
});
return Promise.all(checks).then(function(results) {
return {
content: [{ type: "text", text: JSON.stringify(results, null, 2) }]
};
});
}
);
The monitoring server is one of those tools that pays for itself immediately. Instead of context-switching to Grafana, writing PromQL, and squinting at log streams, you ask the model "are there any errors in the payment service in the last hour?" and it calls the right tools to find out.
Use Case 5: Code Analysis Server
This server exposes AST parsing, dependency analysis, and code search capabilities. It gives the model deep structural understanding of a codebase beyond simple text search.
// code-analysis-server.js (assumes the PROJECT_ROOT, validatePath, fs, path, and McpServer setup from the file system server)
var acorn = require("acorn");
var walk = require("acorn-walk");
var madge = require("madge");
server.tool(
"analyze_dependencies",
"Generate a dependency graph for a JavaScript/Node.js project",
{
entry_point: z.string().describe("Entry file path relative to project root"),
depth: z.number().max(10).default(3)
},
function(args) {
return madge(path.resolve(PROJECT_ROOT, args.entry_point), {
baseDir: PROJECT_ROOT
}).then(function(res) {
var graph = res.obj();
var circular = res.circular();
return {
content: [{
type: "text",
text: JSON.stringify({
dependencies: graph,
circular_dependencies: circular,
total_modules: Object.keys(graph).length
}, null, 2)
}]
};
});
}
);
server.tool(
"parse_functions",
"Extract all function declarations and exports from a JavaScript file",
{ file_path: z.string().describe("File path relative to project root") },
function(args) {
var fullPath = validatePath(args.file_path);
var code = fs.readFileSync(fullPath, "utf-8");
var ast = acorn.parse(code, { ecmaVersion: 2020, sourceType: "module", locations: true });
var functions = [];
walk.simple(ast, {
FunctionDeclaration: function(node) {
functions.push({
name: node.id ? node.id.name : "(anonymous)",
params: node.params.map(function(p) { return p.name || "..."; }),
line: node.loc ? node.loc.start.line : null,
async: node.async
});
},
VariableDeclarator: function(node) {
if (node.init && (node.init.type === "FunctionExpression" ||
node.init.type === "ArrowFunctionExpression")) {
functions.push({
name: node.id.name,
params: node.init.params.map(function(p) { return p.name || "..."; }),
line: node.loc ? node.loc.start.line : null,
async: node.init.async
});
}
}
});
return {
content: [{ type: "text", text: JSON.stringify(functions, null, 2) }]
};
}
);
The dependency graph tool is particularly useful for onboarding. Ask the model "what are the core modules in this project and how do they relate to each other?" and it generates an actual structural map from the code, not from stale documentation.
Use Case 6: Documentation Server
A documentation MCP server indexes and searches your internal knowledge base -- wiki pages, API docs, runbooks, architecture decision records. This turns your AI assistant into a team member who has actually read the docs.
// docs-server.js
var fs = require("fs");
var path = require("path");
var FlexSearch = require("flexsearch");
var index = new FlexSearch.Document({
document: {
id: "id",
index: ["title", "content"],
store: ["title", "path", "content"]
}
});
// Index all markdown files at startup
function indexDocs(docsDir) {
var id = 0;
var files = require("glob").sync("**/*.md", { cwd: docsDir });
files.forEach(function(file) {
var content = fs.readFileSync(path.join(docsDir, file), "utf-8");
var titleMatch = content.match(/^#\s+(.+)$/m);
index.add({
id: id++,
title: titleMatch ? titleMatch[1] : path.basename(file, ".md"),
path: file,
content: content
});
});
console.error("Indexed " + id + " documents");
}
indexDocs(process.env.DOCS_DIR || "./docs");
server.tool(
"search_docs",
"Search the documentation knowledge base",
{
query: z.string().describe("Search query"),
limit: z.number().max(20).default(5)
},
function(args) {
var results = index.search(args.query, { limit: args.limit, enrich: true });
var docs = [];
results.forEach(function(field) {
field.result.forEach(function(r) {
var snippet = r.doc.content.substring(0, 500);
docs.push({ title: r.doc.title, path: r.doc.path, snippet: snippet });
});
});
return {
content: [{ type: "text", text: JSON.stringify(docs, null, 2) }]
};
}
);
server.tool(
"read_doc",
"Read the full content of a specific document",
{ path: z.string().describe("Document path from search results") },
function(args) {
var docsDir = path.resolve(process.env.DOCS_DIR || "./docs");
var fullPath = path.resolve(docsDir, args.path);
// Separator-aware containment check, same as the file system server
if (fullPath !== docsDir && !fullPath.startsWith(docsDir + path.sep)) {
return { content: [{ type: "text", text: "Access denied" }], isError: true };
}
var content = fs.readFileSync(fullPath, "utf-8");
return {
content: [{ type: "text", text: content }]
};
}
);
Use Case 7: DevOps Automation Server
This one is powerful and needs careful access control. A DevOps MCP server lets the model check pipeline status, trigger deployments, inspect infrastructure state, and manage environment configurations. You are effectively giving the model a handle on your CI/CD system.
// devops-server.js
var axios = require("axios");
var azureOrg = process.env.AZURE_ORG;
var azureProject = process.env.AZURE_PROJECT;
var azurePat = process.env.AZURE_PAT;
var azureClient = axios.create({
baseURL: "https://dev.azure.com/" + azureOrg + "/" + azureProject + "/_apis",
headers: {
"Authorization": "Basic " + Buffer.from(":" + azurePat).toString("base64")
},
params: { "api-version": "7.1" }
});
server.tool(
"list_pipelines",
"List all build/release pipelines",
{},
function() {
return azureClient.get("/pipelines").then(function(response) {
var pipelines = response.data.value.map(function(p) {
return { id: p.id, name: p.name, folder: p.folder };
});
return {
content: [{ type: "text", text: JSON.stringify(pipelines, null, 2) }]
};
});
}
);
server.tool(
"get_pipeline_runs",
"Get recent runs for a specific pipeline",
{
pipeline_id: z.number().describe("Pipeline ID"),
count: z.number().max(20).default(5)
},
function(args) {
return azureClient.get("/pipelines/" + args.pipeline_id + "/runs", {
params: { "$top": args.count }
}).then(function(response) {
var runs = response.data.value.map(function(r) {
return {
id: r.id,
state: r.state,
result: r.result,
createdDate: r.createdDate,
finishedDate: r.finishedDate
};
});
return {
content: [{ type: "text", text: JSON.stringify(runs, null, 2) }]
};
});
}
);
server.tool(
"trigger_pipeline",
"Trigger a pipeline run (requires confirmation token)",
{
pipeline_id: z.number().describe("Pipeline ID"),
branch: z.string().default("refs/heads/main"),
confirmation: z.string().describe("Type CONFIRM to proceed")
},
function(args) {
if (args.confirmation !== "CONFIRM") {
return {
content: [{ type: "text", text: "Pipeline trigger requires confirmation. Pass confirmation: 'CONFIRM' to proceed." }],
isError: true
};
}
return azureClient.post("/pipelines/" + args.pipeline_id + "/runs", {
resources: { repositories: { self: { refName: args.branch } } }
}).then(function(response) {
return {
content: [{ type: "text", text: "Pipeline triggered. Run ID: " + response.data.id + ", State: " + response.data.state }]
};
});
}
);
Notice the confirmation parameter on trigger_pipeline. This is a pattern I use for any destructive or side-effect-heavy operation. The model has to explicitly pass "CONFIRM" as a parameter, which means the user sees the tool call and can reject it before execution. It is not bulletproof, but it adds a meaningful friction layer.
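If several tools need this friction layer, it is worth factoring out. A minimal sketch; requireConfirmation is a name I made up, not an SDK feature:
// requireConfirmation -- hypothetical wrapper applying the confirmation-token
// pattern to any side-effecting tool handler
function requireConfirmation(handler) {
  return function(args) {
    if (args.confirmation !== "CONFIRM") {
      return Promise.resolve({
        content: [{ type: "text", text: "This operation has side effects. Pass confirmation: 'CONFIRM' to proceed." }],
        isError: true
      });
    }
    return handler(args);
  };
}
// The tool's schema still needs the confirmation field so the user sees it in the call:
// confirmation: z.string().describe("Type CONFIRM to proceed")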
Multi-Server Architectures
Once you move past a single MCP server, you need to think about composition. The MCP specification allows a host to connect to multiple servers simultaneously, each exposing its own set of tools. This is where architecture decisions get interesting.
One Big Server vs. Many Focused Servers
I have tried both approaches. Here is what I have learned:
One big server with 20+ tools works fine for small teams and simple setups. Everything runs in one process, shares one connection pool, and deploys as one unit. The downside is that a bug in your file system code can crash your database tools. No isolation.
Many focused servers (one for database, one for DevOps, one for docs, etc.) give you process isolation, independent deployment, and the ability to compose different server sets for different contexts. The downside is operational complexity -- more processes to manage, more configurations to maintain.
My recommendation: start with focused servers. Two or three servers with 3-5 tools each are far more manageable than one server with 15 tools. The host handles multi-server setups natively: it discovers tools from all connected servers and presents them to the model, which can interleave calls across them.
Architecture Patterns
Gateway Pattern. A single MCP gateway server sits between the client and your backend services. It exposes a unified tool interface but delegates to multiple internal services. This is good when you want to centralize authentication, rate limiting, and audit logging.
MCP Client
└── MCP Gateway Server
├── Internal DB Service
├── Internal API Service
└── Internal DevOps Service
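Here is a sketch of the gateway's delegation core. The internal service URLs, route map, and GATEWAY_TOKEN variable are assumptions for illustration, and it reuses the McpServer (server) and zod (z) setup from the earlier examples:
// gateway-server.js -- delegation core sketch
var axios = require("axios");

// Hypothetical internal services behind the gateway
var ROUTES = {
  query_database: "http://db-service.internal:8080/query",
  check_pipelines: "http://devops-service.internal:8080/status"
};

Object.keys(ROUTES).forEach(function(toolName) {
  server.tool(
    toolName,
    "Gateway tool delegating to " + ROUTES[toolName],
    { payload: z.record(z.any()).optional().describe("Request body for the internal service") },
    function(args) {
      // Single choke point for auth, rate limiting, and audit logging
      console.error(JSON.stringify({ tool: toolName, at: new Date().toISOString() }));
      return axios.post(ROUTES[toolName], args.payload || {}, {
        headers: { Authorization: "Bearer " + process.env.GATEWAY_TOKEN },
        timeout: 10000
      }).then(function(res) {
        return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
      });
    }
  );
});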
Sidecar Pattern. Each microservice in your architecture gets its own MCP server that exposes that service's capabilities. The servers run as sidecars alongside the services they represent. This works well in Kubernetes environments.
MCP Client
├── Orders Service MCP Server (sidecar)
├── Users Service MCP Server (sidecar)
└── Payments Service MCP Server (sidecar)
Hub-and-Spoke Pattern. A central "hub" MCP server handles discovery and routing. Spoke servers register their tools with the hub. The client connects only to the hub, which delegates tool calls to the appropriate spoke. This is the most complex pattern but scales the best.
MCP Client
└── Hub Server (discovery + routing)
├── Spoke: Database Server
├── Spoke: DevOps Server
├── Spoke: Monitoring Server
└── Spoke: Docs Server
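The hub can be built with the SDK itself: it connects to each spoke as an MCP client, aggregates their tool lists, and proxies calls by name prefix. A minimal sketch, assuming local stdio spokes and a prefix naming convention of my own; a production hub would add reconnection and error handling:
// hub-server.js -- minimal hub-and-spoke routing sketch
var { Server } = require("@modelcontextprotocol/sdk/server/index.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { Client } = require("@modelcontextprotocol/sdk/client/index.js");
var { StdioClientTransport } = require("@modelcontextprotocol/sdk/client/stdio.js");
var { ListToolsRequestSchema, CallToolRequestSchema } = require("@modelcontextprotocol/sdk/types.js");

var spokeConfigs = [
  { prefix: "db", command: "node", args: ["db-server.js"] },
  { prefix: "devops", command: "node", args: ["devops-server.js"] }
];
var spokes = {}; // prefix -> connected MCP client

var hub = new Server(
  { name: "hub-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// tools/list: aggregate every spoke's tools, namespaced with its prefix
hub.setRequestHandler(ListToolsRequestSchema, function() {
  var lists = Object.keys(spokes).map(function(prefix) {
    return spokes[prefix].listTools().then(function(res) {
      return res.tools.map(function(t) {
        return { name: prefix + "_" + t.name, description: t.description, inputSchema: t.inputSchema };
      });
    });
  });
  return Promise.all(lists).then(function(groups) {
    return { tools: [].concat.apply([], groups) };
  });
});

// tools/call: strip the prefix and forward to the owning spoke
hub.setRequestHandler(CallToolRequestSchema, function(request) {
  var parts = request.params.name.split("_");
  var client = spokes[parts.shift()];
  if (!client) throw new Error("Unknown tool: " + request.params.name);
  return client.callTool({ name: parts.join("_"), arguments: request.params.arguments });
});

Promise.all(spokeConfigs.map(function(cfg) {
  var client = new Client({ name: "hub-" + cfg.prefix, version: "1.0.0" }, {});
  return client.connect(new StdioClientTransport({ command: cfg.command, args: cfg.args }))
    .then(function() { spokes[cfg.prefix] = client; });
})).then(function() {
  return hub.connect(new StdioServerTransport());
});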
I use the hub-and-spoke pattern in production environments with more than five MCP servers. For anything under five servers, direct multi-server connections from the client are simpler and work fine.
Choosing the Right Architecture
| Factor | Direct Multi-Server | Gateway | Hub-and-Spoke |
|---|---|---|---|
| Complexity | Low | Medium | High |
| Isolation | High | Low | High |
| Centralized Auth | No | Yes | Yes |
| Max Servers | ~5 | Unlimited | Unlimited |
| Deployment | Independent | Coupled | Independent |
| Best For | Small teams | API-centric | Large orgs |
Complete Working Example: Database + DevOps Multi-Server
Here is a complete working example with two complementary MCP servers and a client that connects to both. The database server queries PostgreSQL, and the DevOps server checks Azure DevOps pipeline status. The client discovers tools from both and demonstrates a cross-server workflow: checking the most recently applied database migrations, then verifying the deployment pipeline is green before approving a release.
Server 1: Database Server (db-server.js)
// db-server.js
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { z } = require("zod");
var { Pool } = require("pg");
var pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 5
});
var server = new McpServer({
name: "database-server",
version: "1.0.0",
description: "PostgreSQL query server with read-only access"
});
server.tool(
"db_query",
"Execute a read-only SQL query",
{
sql: z.string().describe("SQL SELECT statement"),
params: z.array(z.any()).optional().describe("Parameterized values")
},
function(args) {
return pool.connect().then(function(client) {
return client.query("SET TRANSACTION READ ONLY")
.then(function() { return client.query("BEGIN"); })
.then(function() { return client.query(args.sql, args.params || []); })
.then(function(result) {
return client.query("ROLLBACK").then(function() {
client.release();
return {
content: [{
type: "text",
text: JSON.stringify({
rows: result.rows.slice(0, 100),
rowCount: result.rowCount,
columns: result.fields.map(function(f) { return f.name; })
}, null, 2)
}]
};
});
})
.catch(function(err) {
return client.query("ROLLBACK").catch(function() {}).then(function() {
client.release();
return {
content: [{ type: "text", text: "SQL Error: " + err.message }],
isError: true
};
});
});
});
}
);
server.tool(
"db_schema",
"Get the schema for a specific table or all tables",
{
table_name: z.string().optional().describe("Specific table name, or omit for all tables")
},
function(args) {
var sql = "SELECT table_name, column_name, data_type, column_default, is_nullable " +
"FROM information_schema.columns WHERE table_schema = 'public'";
var params = [];
if (args.table_name) {
sql += " AND table_name = $1";
params.push(args.table_name);
}
sql += " ORDER BY table_name, ordinal_position";
return pool.query(sql, params).then(function(result) {
return {
content: [{ type: "text", text: JSON.stringify(result.rows, null, 2) }]
};
});
}
);
server.tool(
"db_migrations",
"Check pending database migrations",
{},
function() {
var sql = "SELECT name, run_on FROM migrations ORDER BY run_on DESC LIMIT 20";
return pool.query(sql).then(function(result) {
return {
content: [{
type: "text",
text: JSON.stringify({
applied_migrations: result.rows,
total: result.rowCount
}, null, 2)
}]
};
}).catch(function(err) {
return {
content: [{ type: "text", text: "No migrations table found or error: " + err.message }],
isError: true
};
});
}
);
var transport = new StdioServerTransport();
server.connect(transport).then(function() {
console.error("Database MCP server running");
});
Server 2: DevOps Server (devops-server.js)
// devops-server.js
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { z } = require("zod");
var axios = require("axios");
var org = process.env.AZURE_ORG;
var project = process.env.AZURE_PROJECT;
var pat = process.env.AZURE_PAT;
var client = axios.create({
baseURL: "https://dev.azure.com/" + org + "/" + project + "/_apis",
headers: {
"Authorization": "Basic " + Buffer.from(":" + pat).toString("base64")
},
params: { "api-version": "7.1" },
timeout: 15000
});
var server = new McpServer({
name: "devops-server",
version: "1.0.0",
description: "Azure DevOps pipeline status and management"
});
server.tool(
"pipeline_status",
"Get the latest run status for a pipeline by name or ID",
{
pipeline_name: z.string().optional().describe("Pipeline name (partial match)"),
pipeline_id: z.number().optional().describe("Pipeline ID")
},
function(args) {
var pipelinePromise;
if (args.pipeline_id) {
pipelinePromise = Promise.resolve(args.pipeline_id);
} else if (!args.pipeline_name) {
pipelinePromise = Promise.reject(new Error("Provide pipeline_name or pipeline_id"));
} else {
pipelinePromise = client.get("/pipelines").then(function(res) {
var match = res.data.value.find(function(p) {
return p.name.toLowerCase().indexOf(args.pipeline_name.toLowerCase()) !== -1;
});
if (!match) throw new Error("No pipeline matching: " + args.pipeline_name);
return match.id;
});
}
return pipelinePromise.then(function(pipelineId) {
return client.get("/pipelines/" + pipelineId + "/runs", {
params: { "$top": 5 }
});
}).then(function(res) {
var runs = res.data.value.map(function(r) {
return {
runId: r.id,
state: r.state,
result: r.result || "in_progress",
created: r.createdDate,
finished: r.finishedDate || null,
url: r._links.web.href
};
});
return {
content: [{ type: "text", text: JSON.stringify(runs, null, 2) }]
};
}).catch(function(err) {
return {
content: [{ type: "text", text: "DevOps error: " + err.message }],
isError: true
};
});
}
);
server.tool(
"pipeline_logs",
"Get logs from a specific pipeline run",
{
pipeline_id: z.number().describe("Pipeline ID"),
run_id: z.number().describe("Run ID")
},
function(args) {
return client.get("/pipelines/" + args.pipeline_id + "/runs/" + args.run_id + "/logs")
.then(function(res) {
var logs = res.data.logs.map(function(log) {
return { id: log.id, lineCount: log.lineCount, url: log.url };
});
return {
content: [{ type: "text", text: JSON.stringify(logs, null, 2) }]
};
});
}
);
server.tool(
"deployment_check",
"Verify all pipelines are green for a release",
{
pipeline_names: z.array(z.string()).describe("List of pipeline names that must be green")
},
function(args) {
return client.get("/pipelines").then(function(res) {
var allPipelines = res.data.value;
var checkPromises = args.pipeline_names.map(function(name) {
var pipeline = allPipelines.find(function(p) {
return p.name.toLowerCase().indexOf(name.toLowerCase()) !== -1;
});
if (!pipeline) {
return Promise.resolve({ name: name, status: "not_found" });
}
return client.get("/pipelines/" + pipeline.id + "/runs", {
params: { "$top": 1 }
}).then(function(runRes) {
var latest = runRes.data.value[0];
return {
name: name,
pipelineId: pipeline.id,
status: latest ? latest.result || latest.state : "no_runs",
runId: latest ? latest.id : null,
finished: latest ? latest.finishedDate : null
};
});
});
return Promise.all(checkPromises).then(function(results) {
var allGreen = results.every(function(r) { return r.status === "succeeded"; });
return {
content: [{
type: "text",
text: JSON.stringify({
release_ready: allGreen,
checks: results
}, null, 2)
}]
};
});
});
}
);
var transport = new StdioServerTransport();
server.connect(transport).then(function() {
console.error("DevOps MCP server running");
});
Client: Multi-Server Orchestration (client.js)
// client.js
var { Client } = require("@modelcontextprotocol/sdk/client/index.js");
var { StdioClientTransport } = require("@modelcontextprotocol/sdk/client/stdio.js");
function createClient(name, command, args, env) {
var transport = new StdioClientTransport({
command: command,
args: args,
env: Object.assign({}, process.env, env || {})
});
var client = new Client({ name: name, version: "1.0.0" }, {});
return client.connect(transport).then(function() {
return client;
});
}
function discoverTools(client, serverName) {
return client.listTools().then(function(result) {
console.log("\n=== Tools from " + serverName + " ===");
result.tools.forEach(function(tool) {
console.log(" " + tool.name + ": " + tool.description);
});
return result.tools;
});
}
async function main() {
// Connect to both servers
var dbClient = await createClient("db-client", "node", ["db-server.js"], {
DATABASE_URL: process.env.DATABASE_URL
});
var devopsClient = await createClient("devops-client", "node", ["devops-server.js"], {
AZURE_ORG: process.env.AZURE_ORG,
AZURE_PROJECT: process.env.AZURE_PROJECT,
AZURE_PAT: process.env.AZURE_PAT
});
// Discover tools from both servers
var dbTools = await discoverTools(dbClient, "Database Server");
var devopsTools = await discoverTools(devopsClient, "DevOps Server");
console.log("\nTotal tools available: " + (dbTools.length + devopsTools.length));
// Cross-server workflow: check migrations then verify pipeline
console.log("\n=== Cross-Server Workflow: Release Readiness Check ===\n");
// Step 1: Check database for pending migrations
console.log("Step 1: Checking database migrations...");
var migrationResult = await dbClient.callTool({
name: "db_migrations",
arguments: {}
});
console.log("Migrations: " + migrationResult.content[0].text);
// Step 2: Verify the deployment pipeline is green
console.log("\nStep 2: Checking deployment pipeline status...");
var pipelineResult = await devopsClient.callTool({
name: "deployment_check",
arguments: {
pipeline_names: ["build-and-test", "staging-deploy"]
}
});
console.log("Pipeline status: " + pipelineResult.content[0].text);
// Step 3: Query the database for the current application version
console.log("\nStep 3: Checking current deployed version...");
var versionResult = await dbClient.callTool({
name: "db_query",
arguments: {
sql: "SELECT version, deployed_at FROM app_versions ORDER BY deployed_at DESC LIMIT 1"
}
});
console.log("Current version: " + versionResult.content[0].text);
// Cleanup
await dbClient.close();
await devopsClient.close();
console.log("\nDone. Both servers disconnected.");
}
main().catch(function(err) {
console.error("Fatal error:", err);
process.exit(1);
});
Run the client:
export DATABASE_URL="postgresql://user:pass@localhost:5432/myapp"
export AZURE_ORG="mycompany"
export AZURE_PROJECT="myproject"
export AZURE_PAT="your-personal-access-token"
node client.js
Expected output:
=== Tools from Database Server ===
db_query: Execute a read-only SQL query
db_schema: Get the schema for a specific table or all tables
db_migrations: List recently applied database migrations
=== Tools from DevOps Server ===
pipeline_status: Get the latest run status for a pipeline by name or ID
pipeline_logs: Get logs from a specific pipeline run
deployment_check: Verify all pipelines are green for a release
Total tools available: 6
=== Cross-Server Workflow: Release Readiness Check ===
Step 1: Checking database migrations...
Migrations: {"applied_migrations":[{"name":"20260115_add_user_preferences","run_on":"2026-01-15T10:30:00Z"},{"name":"20260110_create_orders_table","run_on":"2026-01-10T08:15:00Z"}],"total":2}
Step 2: Checking deployment pipeline status...
Pipeline status: {"release_ready":true,"checks":[{"name":"build-and-test","pipelineId":42,"status":"succeeded","runId":1087,"finished":"2026-02-07T14:22:00Z"},{"name":"staging-deploy","pipelineId":43,"status":"succeeded","runId":512,"finished":"2026-02-07T14:45:00Z"}]}
Step 3: Checking current deployed version...
Current version: {"rows":[{"version":"2.4.1","deployed_at":"2026-02-07T15:00:00Z"}],"rowCount":1,"columns":["version","deployed_at"]}
Done. Both servers disconnected.
Claude Desktop Configuration
To use both servers from Claude Desktop, add them to your configuration:
{
"mcpServers": {
"database": {
"command": "node",
"args": ["c:/projects/mcp-servers/db-server.js"],
"env": {
"DATABASE_URL": "postgresql://user:pass@localhost:5432/myapp"
}
},
"devops": {
"command": "node",
"args": ["c:/projects/mcp-servers/devops-server.js"],
"env": {
"AZURE_ORG": "mycompany",
"AZURE_PROJECT": "myproject",
"AZURE_PAT": "your-pat-here"
}
}
}
}
With this configuration, you can ask Claude things like: "Check if there are any pending database migrations, verify the staging pipeline is green, and tell me if we are ready to deploy." Claude will call tools from both servers to assemble a comprehensive answer.
Common Issues and Troubleshooting
1. Server Crashes with "Cannot find module" on Startup
Error: Cannot find module '@modelcontextprotocol/sdk/server/mcp.js'
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:1075:15)
This happens when the SDK is not installed or when module resolution fails. The MCP SDK is loaded through package subpath exports and declares Node.js 18+ as its minimum supported version. Verify your Node version and install the package:
node --version # Must be >= 18.0.0
npm install @modelcontextprotocol/sdk
If you are on Node 16 or earlier, upgrade; the SDK does not support those runtimes, and imports like @modelcontextprotocol/sdk/server/mcp.js may fail to resolve.
2. Tool Calls Timeout with No Error Response
MCP error -32001: Request timed out after 60000ms
The default MCP request timeout is 60 seconds. If your database query or API call takes longer than that, the client gives up. The fix is either optimizing the slow operation or raising the timeout for that specific call by passing per-request options:
var result = await client.callTool(
{ name: "db_query", arguments: { sql: slowReportSql } },
undefined, // default result schema
{ timeout: 120000 } // 2 minutes for this request only
);
For database servers specifically, always add a query timeout so a runaway query does not hang the server process:
var pool = new Pool({
connectionString: process.env.DATABASE_URL,
statement_timeout: 30000, // server-side cap on each statement (ms)
query_timeout: 30000 // client-side cap on each query call (ms)
});
3. stderr Output Gets Mixed into JSON-RPC Messages
SyntaxError: Unexpected token 'S' at position 0
(received: "Server starting up...\n{"jsonrpc":"2.0"..."
This is the most common MCP bug. When using stdio transport, the server must not write anything to stdout except valid JSON-RPC messages. All logging must go to stderr. If you use console.log() anywhere in your server code, it corrupts the transport.
// WRONG - this breaks stdio transport
console.log("Connected to database");
// RIGHT - use stderr for all logging
console.error("Connected to database");
Check your dependencies too. Some database drivers or HTTP libraries print warnings to stdout by default.
4. Zod Validation Errors Return Cryptic Messages
MCP error -32602: Invalid params
[
{
"code": "invalid_type",
"expected": "string",
"received": "undefined",
"path": ["sql"],
"message": "Required"
}
]
This happens when the model omits a required parameter or sends the wrong type. The error message is technically correct but not helpful to the model. Add .describe() to every Zod field with clear instructions, and consider adding a wrapper that reformats validation errors:
// Better descriptions lead to fewer validation errors
{
sql: z.string().describe("A valid SQL SELECT statement. Do not include semicolons."),
limit: z.number().min(1).max(100).default(20).describe("Maximum rows to return (1-100)")
}
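And here is a sketch of the wrapper idea. It assumes you register a permissive schema with the SDK and run the strict zod check inside the handler yourself (otherwise the SDK rejects invalid params before your code runs); withFriendlyValidation is a name I made up:
// withFriendlyValidation -- hypothetical helper that validates args with zod
// and returns a plain-language error the model can act on
var { z } = require("zod");
function withFriendlyValidation(schema, handler) {
  return function(args) {
    var parsed = schema.safeParse(args);
    if (!parsed.success) {
      var hints = parsed.error.issues.map(function(issue) {
        return issue.path.join(".") + ": " + issue.message;
      });
      return Promise.resolve({
        content: [{ type: "text", text: "Invalid parameters -- " + hints.join("; ") + ". Fix these fields and retry." }],
        isError: true
      });
    }
    return handler(parsed.data);
  };
}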
5. Connection Refused When Using Streamable HTTP Transport
Error: connect ECONNREFUSED 127.0.0.1:3000
If you are using the HTTP transport instead of stdio, make sure the server is actually listening before the client tries to connect. Add a health endpoint and have the client retry:
// Server side
app.get("/health", function(req, res) { res.json({ status: "ok" }); });
// Client side - retry connection
function connectWithRetry(url, maxRetries) {
var attempts = 0;
function tryConnect() {
attempts++;
return axios.get(url + "/health")
.then(function() { return createHttpTransport(url); })
.catch(function(err) {
if (attempts >= maxRetries) throw err;
return new Promise(function(resolve) {
setTimeout(resolve, 1000 * attempts);
}).then(tryConnect);
});
}
return tryConnect();
}
Best Practices
- One server, one concern. Keep MCP servers focused. A database server should not also handle file operations. Focused servers are easier to test, secure, and debug. When you need cross-cutting functionality, use multi-server client connections, not monolithic servers.
- Always provide schema discovery tools. For database servers, expose list_tables and describe_table tools. For API gateway servers, expose an available_endpoints tool. The model generates dramatically better tool calls when it can inspect the data shape first instead of guessing.
- Enforce read-only by default. Wrap database queries in read-only transactions. Use API tokens with minimal permissions. Make write operations require an explicit confirmation parameter. You can always loosen restrictions later; you cannot un-delete production data.
- Log every tool call with full arguments. This is your audit trail. Write tool call logs to stderr (for stdio transport) or to a proper logging service. Include the tool name, arguments, execution time, and whether it succeeded or failed. When something goes wrong, these logs are the first thing you reach for. A small wrapper makes this uniform; see the sketch after this list.
- Set timeouts on everything. Database queries, HTTP requests, file reads -- every external operation should have a timeout. An MCP server that hangs on a slow query blocks the entire conversation. Set a 30-second default timeout on external calls and a separate per-query timeout for database operations.
- Validate paths and inputs beyond Zod. Zod handles type validation, but it does not know about your file system layout or database schema. Always validate that file paths resolve within your allowed directory, that SQL does not contain DDL statements, that pipeline IDs actually exist. Defense in depth.
- Use environment variables for all credentials. Never hardcode API tokens, database passwords, or PATs in your server code. Use environment variables and document them clearly. For Claude Desktop configurations, use the env field in the MCP server config.
- Handle graceful shutdown. When the MCP client disconnects or the process receives SIGTERM, clean up database connections, close file handles, and flush any pending logs. This prevents connection leaks in long-running deployments.
process.on("SIGTERM", function() {
console.error("Shutting down...");
pool.end().then(function() {
process.exit(0);
});
});
- Version your MCP servers independently. If your database server is stable but your DevOps server is changing rapidly, you should be able to deploy them on different schedules. Use semantic versioning in your server metadata and keep a changelog.
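For the tool-call logging practice above, here is a minimal sketch; withLogging is a name I made up, and the log shape is an assumption:
// withLogging -- hypothetical helper that wraps any tool handler and emits a
// structured audit line on stderr (stderr keeps the stdio transport clean)
function withLogging(toolName, handler) {
  return function(args) {
    var startedAt = Date.now();
    return Promise.resolve(handler(args)).then(function(result) {
      console.error(JSON.stringify({
        tool: toolName, args: args, ms: Date.now() - startedAt, ok: !result.isError
      }));
      return result;
    }, function(err) {
      console.error(JSON.stringify({
        tool: toolName, args: args, ms: Date.now() - startedAt, ok: false, error: err.message
      }));
      throw err;
    });
  };
}
// Usage: server.tool("db_query", "...", querySchema, withLogging("db_query", queryHandler));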
References
- Model Context Protocol Specification -- The official protocol specification with transport definitions, message formats, and capability negotiation
- MCP TypeScript SDK -- Official TypeScript/JavaScript SDK for building MCP servers and clients
- MCP Server Examples -- Reference implementations of MCP servers for various use cases
- Claude Desktop MCP Configuration -- How to configure MCP servers in Claude Desktop
- Azure DevOps REST API Reference -- API documentation for the DevOps server example
- PostgreSQL node-postgres (pg) -- The PostgreSQL client library used in the database server examples
- Zod Documentation -- Schema validation library used for MCP tool parameter definitions
