Model Context Protocol Fundamentals

A beginner-friendly introduction to the Model Context Protocol (MCP), covering architecture, core primitives (tools, resources, prompts), transport mechanisms, and building your first MCP server in Node.js.

Overview

The Model Context Protocol (MCP) is an open standard that defines how AI models communicate with external tools, data sources, and services through a single, unified interface. It was created by Anthropic in November 2024, donated to the Linux Foundation in December 2025, and has since become the dominant protocol for connecting AI applications to the outside world. If you are building anything that involves an LLM interacting with external systems -- databases, APIs, file systems, SaaS products -- MCP is the protocol you need to understand.

Prerequisites

  • Node.js 18+ installed (LTS recommended)
  • npm for package management
  • Basic understanding of JSON and HTTP
  • Familiarity with how LLMs use tool calling / function calling
  • A terminal and a text editor

The Problem MCP Solves

Before MCP, connecting AI applications to external systems was a mess of custom integrations. Consider the math: if you have 5 AI applications (Claude, ChatGPT, Gemini, a custom chatbot, a coding assistant) and 10 external services (GitHub, Slack, PostgreSQL, Jira, S3, etc.), you need 5 × 10 = 50 custom integrations. Each AI platform has its own function-calling format, its own schema conventions, its own transport expectations. Every integration is bespoke. Every integration has to be maintained separately.

This is the N × M integration problem: N clients times M services equals an explosion of glue code.

MCP reduces this to N + M. Each AI application implements one MCP client. Each external service implements one MCP server. Any MCP client can talk to any MCP server. You write the integration once on each side and it works everywhere.

Before MCP (N × M):                After MCP (N + M):

Claude ──┬── GitHub                 Claude ──┐
         ├── Slack                  ChatGPT ─┤
         ├── PostgreSQL             Gemini ──┤
ChatGPT ─┬── GitHub                 Custom ──┘
         ├── Slack                       │
         ├── PostgreSQL              MCP Protocol
Gemini ──┬── GitHub                      │
         ├── Slack                  ┌────┴────┐
         ├── PostgreSQL             │ GitHub  │
                                    │ Slack   │
50 integrations                     │ Postgres│
                                    └─────────┘
                                    15 integrations

This is the same insight that made USB successful. Before USB, every peripheral needed its own port and driver. MCP is USB for AI integrations.
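The arithmetic is simple enough to sketch in a few lines of JavaScript (the counts mirror the 5-client, 10-service example above):

```javascript
// Integration counts before and after a shared protocol.
// N clients, M services: bespoke wiring needs one integration per pair,
// a shared protocol needs one adapter per participant.
function integrationsWithout(n, m) {
  return n * m;
}

function integrationsWith(n, m) {
  return n + m;
}

var clients = 5;
var services = 10;
console.log(integrationsWithout(clients, services)); // 50
console.log(integrationsWith(clients, services));    // 15
```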

MCP Architecture

MCP uses a three-layer architecture with clearly defined roles:

Host

The host is the application that contains or orchestrates the AI model. Claude Desktop, VS Code with Copilot, Cursor, and custom AI applications are all hosts. The host is responsible for managing the lifecycle of MCP clients and enforcing security policies like user consent before tool execution.

Client

The MCP client lives inside the host. Each client maintains a stateful, 1:1 session with a single MCP server. A host can spawn multiple clients to connect to multiple servers simultaneously. The client handles protocol negotiation, capability exchange, and message routing.

Server

The MCP server is what you build. It exposes capabilities -- tools, resources, and prompts -- to the client over a well-defined protocol. A server might wrap a database, a third-party API, a file system, or any other data source or service.

┌─────────────────────────────────────────────┐
│  Host (e.g., Claude Desktop)                │
│                                             │
│  ┌─────────────┐    ┌─────────────┐         │
│  │ MCP Client  │    │ MCP Client  │         │
│  └──────┬──────┘    └──────┬──────┘         │
└─────────┼──────────────────┼────────────────┘
          │                  │
     [Transport]        [Transport]
          │                  │
   ┌──────┴──────┐    ┌─────┴───────┐
   │ MCP Server  │    │ MCP Server  │
   │ (GitHub)    │    │ (Database)  │
   └─────────────┘    └─────────────┘

Each client-server pair has its own independent session. The host decides which servers to connect to, the client manages the protocol, and the server does the actual work.

The Three Core Primitives

MCP defines three core primitives that a server can expose. Understanding these three concepts is the foundation of everything else in the protocol.

Tools

Tools are functions that the AI model can invoke to perform actions. They are the most commonly used primitive. A tool has a name, a description, and an input schema that defines what parameters it accepts. When the model decides it needs to use a tool, the client sends a tools/call request to the server, which executes the logic and returns results.

Examples of tools:

  • Query a database and return results
  • Send a Slack message
  • Create a GitHub issue
  • Run a shell command
  • Convert a file from one format to another

Tools are model-controlled: the AI model decides when and how to call them based on the user's request and the tool's description.

Resources

Resources are read-only data that the model can access. Think of them as files or documents that the model can pull into its context. Each resource is identified by a URI (like file:///path/to/document.txt or db://users/123). Resources can be static (a fixed URI) or dynamic (a URI template with parameters).

Examples of resources:

  • File contents from a local file system
  • Database records
  • API response data
  • Configuration files
  • Log output

Resources are application-controlled: the host or user decides which resources to attach to the conversation. The model does not autonomously decide to read resources the way it decides to call tools.

Prompts

Prompts are reusable message templates that help users (or client UIs) interact with the model in a consistent, structured way. A prompt has a name, a description, optional arguments, and returns a list of messages that get injected into the conversation.

Examples of prompts:

  • A "code review" prompt that takes a code snippet and returns a structured review request
  • A "summarize document" prompt that formats a resource for summarization
  • A "debug error" prompt that structures an error message with relevant context

Prompts are user-controlled: they are typically surfaced in the UI as slash commands or menu items that the user explicitly selects.

┌────────────────────────────────────────────────┐
│              MCP Primitives                    │
├────────────┬────────────────┬──────────────────┤
│   Tools    │   Resources    │    Prompts       │
├────────────┼────────────────┼──────────────────┤
│ Model      │ Application    │ User             │
│ controlled │ controlled     │ controlled       │
├────────────┼────────────────┼──────────────────┤
│ Execute    │ Read-only      │ Message          │
│ actions    │ data access    │ templates        │
├────────────┼────────────────┼──────────────────┤
│ tools/call │ resources/read │ prompts/get      │
│ tools/list │ resources/list │ prompts/list     │
└────────────┴────────────────┴──────────────────┘

Transport Mechanisms

MCP is transport-agnostic, but the specification defines two standard transports.

stdio (Standard I/O)

The stdio transport is designed for local integrations. The host launches the MCP server as a child process and communicates over stdin/stdout. Each JSON-RPC message is a single line of text (no embedded newlines), delimited by newline characters.

This is the transport you use when the MCP server runs on the same machine as the host. Claude Desktop uses this transport for all its local MCP servers. The server process is started by the host and terminated when the session ends.

# The host launches the server as a subprocess
node my-mcp-server.js
# Communication happens over stdin (host → server) and stdout (server → host)
# stderr is available for logging (does not interfere with the protocol)
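To make the framing concrete, here is a minimal sketch (not the SDK's actual implementation) of newline-delimited JSON-RPC framing, including a parser that tolerates messages split across chunks:

```javascript
// Sketch of stdio framing: each outgoing JSON-RPC message is one line,
// and incoming data is buffered and split on newlines. The SDK's
// StdioServerTransport does this for you; shown here only to illustrate.
function frameMessage(message) {
  // Serialize to a single line -- JSON.stringify escapes raw newlines.
  return JSON.stringify(message) + "\n";
}

function makeLineParser(onMessage) {
  var buffer = "";
  return function(chunk) {
    buffer += chunk;
    var lines = buffer.split("\n");
    buffer = lines.pop(); // keep the trailing partial line for the next chunk
    lines.forEach(function(line) {
      if (line.trim() !== "") {
        onMessage(JSON.parse(line));
      }
    });
  };
}

// Example: feed two chunks that split a message mid-object.
var received = [];
var feed = makeLineParser(function(msg) { received.push(msg); });
feed('{"jsonrpc":"2.0","me');
feed('thod":"ping"}\n');
console.log(received[0].method); // "ping"
```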

Advantages of stdio:

  • Zero network configuration
  • No ports, no firewalls, no TLS certificates
  • Simple to debug
  • Process lifecycle is managed by the host

Streamable HTTP

The Streamable HTTP transport is designed for remote integrations. The server runs as an HTTP endpoint, and the client communicates with it over HTTP POST requests. The server can optionally use Server-Sent Events (SSE) to stream multiple messages back to the client in a single response.

This transport was introduced in March 2025 to replace the older HTTP+SSE transport. It supports session management via the Mcp-Session-Id header, and the server can handle multiple concurrent clients.

Client                          Server
  │                               │
  │──POST /mcp (initialize)──────>│
  │<─────────(SSE stream)─────────│
  │                               │
  │──POST /mcp (tools/call)──────>│
  │<─────────(SSE stream)─────────│
  │                               │
  │──GET /mcp (open SSE)─────────>│
  │<───(server-initiated msgs)────│

Use stdio for local tools. Use Streamable HTTP for remote services, multi-tenant deployments, or when the server needs to be shared across multiple clients.

The JSON-RPC Message Format

All MCP communication uses JSON-RPC 2.0. There are three types of messages: requests, responses, and notifications.

Request

A request expects a response. It includes an id for correlation.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": {
      "sql": "SELECT * FROM users LIMIT 10"
    }
  }
}

Response

A response carries either a result or an error, correlated by the same id.

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Found 10 users."
      }
    ]
  }
}

Notification

A notification is a one-way message with no id and no expected response. Used for events like progress updates or capability changes.

{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}

Error Response

Errors use standard JSON-RPC error codes plus MCP-specific codes.

{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid params: missing required field 'sql'"
  }
}
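A minimal sketch of how the three message types can be told apart by shape (a helper of our own, not part of the SDK):

```javascript
// A request has a method and an id; a notification has a method but no id;
// a response has an id plus either a result or an error.
function classifyMessage(msg) {
  if (msg.method !== undefined) {
    return msg.id !== undefined ? "request" : "notification";
  }
  if (msg.id !== undefined && (msg.result !== undefined || msg.error !== undefined)) {
    return "response";
  }
  return "invalid";
}

console.log(classifyMessage({ jsonrpc: "2.0", id: 1, method: "tools/call" }));         // "request"
console.log(classifyMessage({ jsonrpc: "2.0", method: "notifications/initialized" })); // "notification"
console.log(classifyMessage({ jsonrpc: "2.0", id: 1, result: {} }));                   // "response"
```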

Capability Negotiation and the MCP Lifecycle

Every MCP session follows a strict lifecycle: initialize, use, shutdown.

Initialize

The client sends an initialize request announcing its protocol version, capabilities, and identity. The server responds with its own capabilities.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-11-25",
    "capabilities": {
      "roots": { "listChanged": true },
      "sampling": {}
    },
    "clientInfo": {
      "name": "claude-desktop",
      "version": "1.5.0"
    }
  }
}

The server responds:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-11-25",
    "capabilities": {
      "tools": { "listChanged": true },
      "resources": { "subscribe": true, "listChanged": true },
      "prompts": { "listChanged": true }
    },
    "serverInfo": {
      "name": "my-mcp-server",
      "version": "1.0.0"
    }
  }
}

After the server responds, the client sends an initialized notification to confirm the session is ready.

{
  "jsonrpc": "2.0",
  "method": "notifications/initialized"
}

This capability exchange is critical. It tells each side what the other supports. If the server does not advertise tools in its capabilities, the client will not attempt to list or call tools. If the server advertises listChanged: true, it is promising to send a notification whenever its list of tools changes, so the client can re-fetch the list dynamically.
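As an illustration, a client-side check against the initialize result might look like this (the helper names are our own; the capability shapes follow the example above):

```javascript
// Gate feature use on the server's advertised capabilities.
function supportsTools(initResult) {
  return Boolean(initResult.capabilities && initResult.capabilities.tools);
}

function wantsToolListNotifications(initResult) {
  return Boolean(
    initResult.capabilities &&
    initResult.capabilities.tools &&
    initResult.capabilities.tools.listChanged
  );
}

var initResult = {
  protocolVersion: "2025-11-25",
  capabilities: {
    tools: { listChanged: true },
    resources: { subscribe: true, listChanged: true }
  },
  serverInfo: { name: "my-mcp-server", version: "1.0.0" }
};

console.log(supportsTools(initResult));              // true
console.log(wantsToolListNotifications(initResult)); // true
```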

Use

Once initialized, the client can call any of the advertised capabilities. It can list tools (tools/list), call tools (tools/call), list resources (resources/list), read resources (resources/read), list prompts (prompts/list), and get prompts (prompts/get).

Shutdown

MCP defines no dedicated shutdown message; the session ends when the transport closes. For stdio, the client closes the server's stdin and, if necessary, terminates the child process. For Streamable HTTP, the client can send an HTTP DELETE to the session endpoint to explicitly end the session.

Tool Definitions and Schemas

When a client sends a tools/list request, the server returns an array of tool definitions. Each tool has a name, a description, and an inputSchema that follows JSON Schema.

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get the current weather for a given city. Returns temperature, conditions, and humidity.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The city name, e.g. 'San Francisco'"
            },
            "units": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"],
              "description": "Temperature units"
            }
          },
          "required": ["city"]
        }
      }
    ]
  }
}

The description matters enormously. The AI model reads this description to decide whether and how to use the tool. A vague description leads to unreliable tool usage. Be specific about what the tool does, what it returns, and any constraints.
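The schema matters on the server side too. As a sketch, here is a minimal required-field check (a real server would use a full JSON Schema validator or Zod rather than this hand-rolled helper):

```javascript
// Return the names of required fields that are missing from the arguments.
function checkRequired(inputSchema, args) {
  var missing = (inputSchema.required || []).filter(function(field) {
    return args[field] === undefined;
  });
  return missing;
}

var schema = {
  type: "object",
  properties: {
    city: { type: "string" },
    units: { type: "string" }
  },
  required: ["city"]
};

console.log(checkRequired(schema, { city: "San Francisco" })); // []
console.log(checkRequired(schema, { units: "celsius" }));      // [ 'city' ]
```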

Resource URIs and Templates

Resources use URIs for identification. Static resources have a fixed URI. Dynamic resources use URI templates with placeholders.

{
  "resources": [
    {
      "uri": "config://app/settings",
      "name": "Application Settings",
      "description": "Current application configuration",
      "mimeType": "application/json"
    }
  ],
  "resourceTemplates": [
    {
      "uriTemplate": "db://users/{userId}",
      "name": "User Record",
      "description": "Fetch a user record by ID",
      "mimeType": "application/json"
    }
  ]
}

When the client reads a resource, it sends a resources/read request with the URI:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/read",
  "params": {
    "uri": "db://users/42"
  }
}

The server returns the resource contents:

{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "contents": [
      {
        "uri": "db://users/42",
        "mimeType": "application/json",
        "text": "{\"id\": 42, \"name\": \"Alice\", \"email\": \"[email protected]\"}"
      }
    ]
  }
}
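To illustrate how URI templates resolve, here is a hand-rolled matcher (the SDK's ResourceTemplate handles this for you; this sketch assumes templates contain no regex metacharacters other than the {placeholders}):

```javascript
// Match a URI against a template like "db://users/{userId}" and
// extract the placeholder values, or return null on no match.
function matchUriTemplate(template, uri) {
  var names = [];
  var pattern = template.replace(/\{(\w+)\}/g, function(_, name) {
    names.push(name);
    return "([^/]+)";
  });
  var match = new RegExp("^" + pattern + "$").exec(uri);
  if (!match) return null;
  var vars = {};
  names.forEach(function(name, i) {
    vars[name] = match[i + 1];
  });
  return vars;
}

console.log(matchUriTemplate("db://users/{userId}", "db://users/42"));  // userId is "42"
console.log(matchUriTemplate("db://users/{userId}", "db://orders/7"));  // null
```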

Prompt Templates

Prompts are reusable message sequences. A prompts/list response looks like:

{
  "prompts": [
    {
      "name": "review-code",
      "description": "Review a code snippet for bugs, style issues, and improvements",
      "arguments": [
        {
          "name": "code",
          "description": "The code to review",
          "required": true
        },
        {
          "name": "language",
          "description": "The programming language",
          "required": false
        }
      ]
    }
  ]
}

When a user selects this prompt, the client sends a prompts/get request, and the server returns a list of messages ready to be injected into the conversation:

{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Please review the following JavaScript code for bugs, performance issues, and style improvements:\n\nfunction add(a, b) { return a + b; }"
        }
      }
    ]
  }
}

MCP vs. LLM Function Calling

If you have used OpenAI's function calling or Anthropic's tool use, you might wonder how MCP is different. The key distinction is scope.

LLM function calling (OpenAI's functions parameter, Anthropic's tools parameter) is a feature of a specific API. You define tools in your API request, the model generates a tool call, and your code executes it. The tool definitions live in your application code and are tightly coupled to one provider.

MCP is a protocol-level standard. Tool definitions live on a separate server. Any MCP-compatible client can discover and use those tools. The tools are not tied to any particular LLM provider or API.

Think of it this way: function calling is like writing a custom driver for one printer. MCP is like implementing the printer protocol so any computer can print to your printer.

┌───────────────────┬───────────────────────────┬─────────────────────────┐
│ Aspect            │ LLM Function Calling      │ MCP                     │
├───────────────────┼───────────────────────────┼─────────────────────────┤
│ Scope             │ Single API provider       │ Cross-platform standard │
│ Tool definitions  │ In your API request       │ On a separate server    │
│ Discovery         │ None (you hardcode tools) │ Dynamic via tools/list  │
│ Transport         │ HTTP API call             │ stdio, Streamable HTTP  │
│ State             │ Stateless per request     │ Stateful session        │
│ Ecosystem         │ Provider-specific         │ Universal               │
└───────────────────┴───────────────────────────┴─────────────────────────┘

In practice, MCP clients typically translate MCP tool definitions into the host LLM's native function calling format. The MCP client fetches tools from the server, then passes them to the model as native tool definitions. When the model generates a tool call, the client routes it back to the MCP server for execution.
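As a sketch, that translation can be as simple as a field mapping. The target shape below follows Anthropic's tools parameter (name, description, input_schema); other providers use different field names:

```javascript
// Translate an MCP tool definition into a provider-native tool definition.
// The MCP side comes from tools/list; the output shape is what the host
// would pass to the model's API.
function mcpToolToProviderTool(mcpTool) {
  return {
    name: mcpTool.name,
    description: mcpTool.description,
    input_schema: mcpTool.inputSchema
  };
}

var mcpTool = {
  name: "get_weather",
  description: "Get the current weather for a given city.",
  inputSchema: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"]
  }
};

var providerTool = mcpToolToProviderTool(mcpTool);
console.log(providerTool.input_schema.required); // [ 'city' ]
```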

When to Build an MCP Server vs. a Traditional API

Not everything should be an MCP server. Here is a practical decision framework:

Build an MCP server when:

  • You want AI models to interact with your service
  • The interaction involves structured tool calls, not just data retrieval
  • You want your integration to work across multiple AI platforms
  • You need the model to discover available capabilities dynamically

Stick with a traditional REST/GraphQL API when:

  • Your consumers are conventional applications, not AI models
  • You need fine-grained authentication and rate limiting per endpoint
  • You are building a public API for third-party developers
  • The interaction is purely CRUD with no AI-specific concerns

Many real-world architectures use both. Your MCP server calls your existing REST API internally. The MCP server is a thin adapter that translates between the protocol and your existing infrastructure.
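A sketch of that adapter pattern, with a hypothetical internal endpoint and an injected fetch function so the handler can be exercised with a stub:

```javascript
// An MCP tool handler as a thin adapter over an existing REST API.
// The base URL is hypothetical; fetchFn is injected for testability.
function buildUsersUrl(baseUrl, limit) {
  return baseUrl + "/users?limit=" + limit;
}

function makeListUsersHandler(fetchFn, baseUrl) {
  return async function(params) {
    var response = await fetchFn(buildUsersUrl(baseUrl, params.limit));
    var users = await response.json();
    return {
      content: [
        { type: "text", text: JSON.stringify(users) }
      ]
    };
  };
}

// Stubbed usage in place of a live API:
var stubFetch = function(url) {
  return Promise.resolve({
    json: function() { return Promise.resolve([{ id: 1, name: "Alice" }]); }
  });
};

makeListUsersHandler(stubFetch, "https://internal.example.com/api")({ limit: 10 })
  .then(function(result) {
    console.log(result.content[0].text); // '[{"id":1,"name":"Alice"}]'
  });
```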

The MCP Ecosystem

MCP has gained broad adoption since its release. Here are the major players:

  • Claude Desktop -- Anthropic's desktop application, the first and most mature MCP host
  • VS Code / Cursor / Windsurf -- Code editors with MCP support for AI-assisted development
  • OpenAI -- Added MCP support for ChatGPT and their Agents SDK
  • Official SDKs -- TypeScript, Python, Java, Kotlin, C#, Swift, Go
  • MCP Inspector -- A debugging tool for testing MCP servers interactively
  • Agentic AI Foundation (AAIF) -- Linux Foundation governance body for MCP, co-founded by Anthropic, Block, and OpenAI

The TypeScript SDK (@modelcontextprotocol/sdk) is the most mature and is the one we will use in the following example.

Complete Working Example

Let us build a minimal but complete MCP server in Node.js that exposes one tool, one resource, and one prompt. The server will be a simple "note-taking" system.

Step 1: Project Setup

mkdir mcp-notes-server
cd mcp-notes-server
npm init -y
npm install @modelcontextprotocol/sdk zod

Note: The @modelcontextprotocol/sdk package is ESM-only. You will need to set "type": "module" in your package.json, or use dynamic import() calls if you want to stay in CommonJS. For this example, we will use dynamic imports inside a CommonJS entry point to keep the var, function(), require() style as much as possible.

Step 2: Create the Server

Create a file called server.js:

// server.js -- MCP Notes Server
// Uses dynamic import() for ESM-only SDK package

var process = require("process");

async function main() {
  // Dynamic imports for ESM-only packages
  var sdkModule = await import("@modelcontextprotocol/sdk/server/mcp.js");
  var transportModule = await import("@modelcontextprotocol/sdk/server/stdio.js");
  var zod = await import("zod");

  var McpServer = sdkModule.McpServer;
  var ResourceTemplate = sdkModule.ResourceTemplate;
  var StdioServerTransport = transportModule.StdioServerTransport;
  var z = zod.z;

  // In-memory notes storage
  var notes = {};
  var nextId = 1;

  // Create the MCP server instance
  var server = new McpServer({
    name: "notes-server",
    version: "1.0.0"
  });

  // ──────────────────────────────────────────────
  // TOOL: add_note
  // ──────────────────────────────────────────────
  server.registerTool("add_note", {
    title: "Add Note",
    description: "Create a new note with a title and body. Returns the note ID.",
    inputSchema: {
      title: z.string().describe("The title of the note"),
      body: z.string().describe("The content of the note")
    }
  }, function(params) {
    var id = String(nextId);
    nextId = nextId + 1;

    notes[id] = {
      id: id,
      title: params.title,
      body: params.body,
      createdAt: new Date().toISOString()
    };

    return {
      content: [
        {
          type: "text",
          text: "Note created with ID: " + id
        }
      ]
    };
  });

  // ──────────────────────────────────────────────
  // RESOURCE: note://{noteId}
  // ──────────────────────────────────────────────
  server.registerResource(
    "note",
    new ResourceTemplate("note://{noteId}", { list: undefined }),
    {
      title: "Note",
      description: "Retrieve a note by its ID",
      mimeType: "application/json"
    },
    function(uri, params) {
      var noteId = params.noteId;
      var note = notes[noteId];

      if (!note) {
        return {
          contents: [
            {
              uri: uri.href,
              mimeType: "application/json",
              text: JSON.stringify({ error: "Note not found" })
            }
          ]
        };
      }

      return {
        contents: [
          {
            uri: uri.href,
            mimeType: "application/json",
            text: JSON.stringify(note, null, 2)
          }
        ]
      };
    }
  );

  // ──────────────────────────────────────────────
  // PROMPT: summarize_notes
  // ──────────────────────────────────────────────
  server.registerPrompt("summarize_notes", {
    title: "Summarize Notes",
    description: "Generate a summary of all current notes",
    argsSchema: {
      style: z.enum(["brief", "detailed"]).describe(
        "Summary style: brief for bullet points, detailed for full paragraphs"
      )
    }
  }, function(params) {
    var noteList = Object.values(notes);
    var noteText;

    if (noteList.length === 0) {
      noteText = "No notes exist yet.";
    } else {
      noteText = noteList.map(function(n) {
        return "- [" + n.id + "] " + n.title + ": " + n.body;
      }).join("\n");
    }

    var instruction = params.style === "brief"
      ? "Provide a brief bullet-point summary of these notes."
      : "Provide a detailed paragraph summary of each note.";

    return {
      messages: [
        {
          role: "user",
          content: {
            type: "text",
            text: instruction + "\n\nNotes:\n" + noteText
          }
        }
      ]
    };
  });

  // ──────────────────────────────────────────────
  // Connect transport and start
  // ──────────────────────────────────────────────
  var transport = new StdioServerTransport();
  await server.connect(transport);

  process.stderr.write("Notes MCP server running on stdio\n");
}

main().catch(function(err) {
  process.stderr.write("Fatal error: " + err.message + "\n");
  process.exit(1);
});

Step 3: Understanding Each Part

Server creation. The McpServer constructor takes a name and version. These are sent to the client during initialization so it knows what server it is talking to. The SDK handles all capability negotiation automatically based on what you register.

Tool registration. registerTool takes three arguments: a name string, a configuration object with title, description, and inputSchema (using Zod schemas for validation), and a handler function. The handler receives validated parameters and must return an object with a content array. Each content item has a type (usually "text") and the actual data.

Resource registration. registerResource takes a name, a ResourceTemplate with a URI pattern, a configuration object, and a handler. The URI template note://{noteId} means the client can request any URI matching that pattern, and the noteId variable is extracted and passed to your handler.

Prompt registration. registerPrompt takes a name, a configuration with argsSchema, and a handler that returns messages. The messages array follows the standard chat message format with role and content.

Transport connection. StdioServerTransport reads from stdin and writes to stdout. The server.connect(transport) call starts listening for incoming JSON-RPC messages. Log output goes to stderr so it does not interfere with the protocol messages on stdout.

Step 4: Testing with MCP Inspector

The easiest way to test your server without configuring a full host is the MCP Inspector:

npx @modelcontextprotocol/inspector node server.js

This opens a web-based UI where you can:

  1. See the server's capabilities after initialization
  2. List all registered tools, resources, and prompts
  3. Call tools with custom arguments and see the results
  4. Read resources by URI
  5. Get prompts with arguments

Step 5: Configuring with Claude Desktop

To use your server with Claude Desktop, add it to the Claude Desktop configuration file.

On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "notes": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-notes-server/server.js"]
    }
  }
}

Restart Claude Desktop, and you will see the notes server's tool available in the conversation. You can ask Claude to create notes, and it will call the add_note tool.

Step 6: Building a Simple Client (for Testing)

If you want to test programmatically without Claude Desktop, here is a minimal MCP client:

// client.js -- Minimal MCP client for testing
var process = require("process");
var childProcess = require("child_process");

async function main() {
  var clientModule = await import("@modelcontextprotocol/sdk/client/index.js");
  var transportModule = await import("@modelcontextprotocol/sdk/client/stdio.js");

  var Client = clientModule.Client;
  var StdioClientTransport = transportModule.StdioClientTransport;

  var transport = new StdioClientTransport({
    command: "node",
    args: ["server.js"]
  });

  var client = new Client({
    name: "test-client",
    version: "1.0.0"
  });

  await client.connect(transport);

  // List tools
  var toolsResult = await client.listTools();
  console.log("Available tools:");
  toolsResult.tools.forEach(function(tool) {
    console.log("  - " + tool.name + ": " + tool.description);
  });

  // Call the add_note tool
  var callResult = await client.callTool({
    name: "add_note",
    arguments: {
      title: "First Note",
      body: "This is a test note created by the MCP client."
    }
  });
  console.log("\nTool result:", JSON.stringify(callResult, null, 2));

  // Read the resource
  var resource = await client.readResource({
    uri: "note://1"
  });
  console.log("\nResource contents:", JSON.stringify(resource, null, 2));

  // Get the prompt
  var prompt = await client.getPrompt({
    name: "summarize_notes",
    arguments: { style: "brief" }
  });
  console.log("\nPrompt messages:", JSON.stringify(prompt, null, 2));

  await client.close();
}

main().catch(function(err) {
  console.error("Error:", err.message);
  process.exit(1);
});

Run it:

node client.js

Expected output:

Available tools:
  - add_note: Create a new note with a title and body. Returns the note ID.

Tool result: {
  "content": [
    {
      "type": "text",
      "text": "Note created with ID: 1"
    }
  ]
}

Resource contents: {
  "contents": [
    {
      "uri": "note://1",
      "mimeType": "application/json",
      "text": "{\n  \"id\": \"1\",\n  \"title\": \"First Note\",\n  \"body\": \"This is a test note created by the MCP client.\",\n  \"createdAt\": \"2026-02-08T10:30:00.000Z\"\n}"
    }
  ]
}

Prompt messages: {
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Provide a brief bullet-point summary of these notes.\n\nNotes:\n- [1] First Note: This is a test note created by the MCP client."
      }
    }
  ]
}

Common Issues and Troubleshooting

1. "Error: Cannot find module '@modelcontextprotocol/sdk/server/mcp.js'"

Error [ERR_MODULE_NOT_FOUND]: Cannot find module '@modelcontextprotocol/sdk/server/mcp.js'

This happens when Node.js cannot resolve the SDK's ESM exports. Make sure you are using dynamic import() in CommonJS files, not require(). The SDK is published as ESM-only. Also verify you have the package installed: npm ls @modelcontextprotocol/sdk.

2. "SyntaxError: Cannot use import statement outside a module"

SyntaxError: Cannot use import statement outside a module
    at wrapSafe (internal/modules/cjs/loader.js:915:16)

You are mixing ESM import syntax inside a CommonJS file. Either add "type": "module" to your package.json or use dynamic import() as shown in the example above.
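The working pattern looks like this; the sketch uses a built-in module (node:path) so it runs anywhere, but the same dynamic import() works for @modelcontextprotocol/sdk:

```javascript
// Loading an ESM module from a CommonJS file via dynamic import().
// await import() returns the module's namespace object.
async function loadEsm() {
  var pathModule = await import("node:path");
  return pathModule.join("a", "b");
}

loadEsm().then(function(joined) {
  console.log(joined); // "a/b" on POSIX systems
});
```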

3. Server starts but Claude Desktop does not see it

[MCP Error] Failed to connect to server "notes": spawn node ENOENT

Claude Desktop cannot find the node binary. Use an absolute path to the Node.js binary in your configuration:

{
  "mcpServers": {
    "notes": {
      "command": "/usr/local/bin/node",
      "args": ["/absolute/path/to/server.js"]
    }
  }
}

Also make sure you use absolute paths for the server script, not relative paths. Claude Desktop does not run from your project directory.

4. "Protocol version mismatch" during initialization

Error: Server protocol version "2024-11-05" is not supported. Client requires "2025-11-25".

Your SDK version is outdated. Update to the latest:

npm install @modelcontextprotocol/sdk@latest

The client and server must agree on a protocol version during initialization. Older servers may not support newer protocol features.

5. Tools show up but the model never calls them

This is almost always a description problem. The model reads the tool description to decide whether to use it. If your description is vague ("Does stuff with notes") or misleading, the model will not know when to use the tool. Write descriptions that clearly state what the tool does, what it returns, and when it should be used.

6. "Error: Transport closed" when calling tools

Error: MCP transport closed unexpectedly

Your server process crashed. Check stderr output for errors. Common causes include unhandled promise rejections, missing dependencies, or the server writing non-JSON-RPC output to stdout (which corrupts the protocol stream). All logging must go to stderr, never stdout.
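A minimal stderr-only logging helper (our own sketch) keeps diagnostics and protocol traffic separate:

```javascript
// Format a log line and write it to stderr, keeping stdout clean
// for JSON-RPC messages.
function formatLogLine(level, message) {
  return "[" + new Date().toISOString() + "] " + level + ": " + message + "\n";
}

function log(level, message) {
  process.stderr.write(formatLogLine(level, message));
}

log("info", "server ready"); // goes to stderr, never stdout
```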

Best Practices

  • Write clear, specific tool descriptions. The model uses these to decide when and how to call your tools. Include what the tool does, what it returns, and any constraints. "Query a PostgreSQL database with a SQL SELECT statement and return the results as JSON rows. Only supports SELECT queries, not INSERT/UPDATE/DELETE." is far better than "Run a database query."

  • Validate all inputs with schemas. Define inputSchema with Zod or JSON Schema for every tool. This gives the model structured guidance on what parameters to provide and catches invalid inputs before they reach your logic.

  • Return structured, predictable output. Always return content in the standard content array format. Use type: "text" for most responses. For errors, return an isError: true flag along with a descriptive error message instead of throwing exceptions.

  • Keep tools focused and granular. One tool should do one thing well. Instead of a single "manage_database" tool with a mode parameter, create separate "query_database", "insert_record", and "delete_record" tools. This helps the model make better decisions about which tool to use.

  • Log to stderr, never stdout. In stdio transport, stdout is the protocol channel. Any stray console.log or debug output written to stdout will corrupt the JSON-RPC stream and crash the session. Use process.stderr.write() or redirect your logging framework to stderr.

  • Handle errors gracefully inside tool handlers. Wrap your tool logic in try/catch blocks and return meaningful error messages. An unhandled exception in a tool handler can crash the entire server process and kill all active sessions.

  • Use resource templates for parameterized data. Instead of registering a separate static resource for every database record, use a URI template like db://users/{userId}. This lets the client request any record without your server needing to enumerate them all upfront.

  • Version your server. The version field in McpServer is sent to clients during initialization. Use semantic versioning so clients can detect upgrades and potential breaking changes.

  • Test with the MCP Inspector first. Before integrating with Claude Desktop or any other host, use npx @modelcontextprotocol/inspector to verify your tools, resources, and prompts work correctly. It is much easier to debug in the inspector than in a full AI conversation.

  • Keep server startup fast. The host launches your server as a subprocess and waits for initialization. If your server takes 10 seconds to start because it is connecting to a database and loading configuration, the user experience suffers. Defer heavy initialization until the first tool call if possible.
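The startup advice above can be implemented with a cached-promise lazy initializer, sketched here with a stubbed resource in place of a real database connection:

```javascript
// Defer heavy setup until first use. The single cached promise means
// concurrent callers share one in-flight initialization.
function makeLazyInit(initFn) {
  var promise = null;
  return function() {
    if (!promise) {
      promise = initFn();
    }
    return promise;
  };
}

// Hypothetical usage inside a tool handler:
var initCount = 0;
var getDb = makeLazyInit(function() {
  initCount = initCount + 1;
  return Promise.resolve({ query: function() { return "rows"; } });
});

getDb().then(function(db) {
  return getDb(); // second call reuses the cached promise
}).then(function() {
  console.log(initCount); // 1
});
```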
