Building MCP Servers with Python
The Model Context Protocol (MCP) lets AI assistants connect to external tools and data sources; one popular use case is giving them persistent memory across conversations. This guide shows you how to build such an MCP server from scratch using Python.
What is MCP?
MCP is an open protocol that allows AI assistants to connect to external data sources and tools. Think of it as a universal adapter between AI models and your data.
Key Benefits
- Persistent memory across conversations
- Standardized protocol (works with any AI provider)
- Simple Python implementation
- Built-in support for common operations (CRUD, search)
Prerequisites
Before building your MCP server, make sure you have:
- Python 3.10 or higher
- Basic understanding of async/await
- Familiarity with JSON APIs
- An AI assistant that supports MCP (Claude, ChatGPT, etc.)
Project Setup
First, create a new project directory and install dependencies:
```shell
mkdir my-mcp-server
cd my-mcp-server
python -m venv venv
source venv/bin/activate
pip install mcp
```
Basic Server Structure
Create a file called `server.py`:
```python
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
import json

server = Server("my-mcp-server")

@server.list_tools()
async def list_tools() -> list[Tool]:
    """List available tools."""
    return [
        Tool(
            name="save_note",
            description="Save a note to memory",
            inputSchema={
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "content": {"type": "string"}
                },
                "required": ["title", "content"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    """Handle tool calls. The SDK passes arguments as an already-parsed dict."""
    if name == "save_note":
        # Save note to database/file
        return [TextContent(type="text", text=f"Saved note: {arguments['title']}")]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # MCP servers typically speak JSON-RPC over stdin/stdout
    async with stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            server.create_initialization_options(),
        )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
This basic server provides a `save_note` tool that AI assistants can call to save information to memory.
Adding Resources
Resources provide read-only access to data. Add this to your server (note the extra `Resource` import):

```python
from mcp.types import Resource

@server.list_resources()
async def list_resources() -> list[Resource]:
    """List available resources."""
    return [
        Resource(
            uri="memory:///notes",
            name="Saved Notes",
            description="All notes stored in memory",
            mimeType="application/json"
        )
    ]

@server.read_resource()
async def read_resource(uri: str) -> str:
    """Read a resource."""
    if uri == "memory:///notes":
        # Return all notes as JSON
        notes = load_notes_from_database()
        return json.dumps(notes)
    raise ValueError(f"Unknown resource: {uri}")
```
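The `load_notes_from_database` helper above is left undefined in this guide. A minimal file-backed sketch could look like the following; the `notes.json` path and the `save_note_to_store` name are assumptions, not part of the guide's code:

```python
import json
from pathlib import Path

NOTES_PATH = Path("notes.json")  # assumed storage location, not from the guide

def load_notes_from_database() -> list[dict]:
    """Return all saved notes; a missing store yields an empty list."""
    if NOTES_PATH.exists():
        return json.loads(NOTES_PATH.read_text())
    return []

def save_note_to_store(title: str, content: str) -> None:
    """Append one note and rewrite the JSON store."""
    notes = load_notes_from_database()
    notes.append({"title": title, "content": content})
    NOTES_PATH.write_text(json.dumps(notes, indent=2))
```

A JSON file keeps the example dependency-free; swapping in a real database only requires replacing these two functions.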
Testing Your Server
Test your server locally before connecting it to AI assistants. Because the server speaks JSON-RPC over stdin/stdout rather than listening on a network port, you can pipe a request straight into the process:

```shell
# Send a tools/list request to the server's stdin
echo '{"jsonrpc": "2.0", "method": "tools/list", "id": 1}' | python server.py
```

A full client performs an initialization handshake first, so for interactive testing the MCP Inspector is more convenient:

```shell
npx @modelcontextprotocol/inspector python server.py
```
Connecting to AI Assistants
Once your server works locally, you can connect it to AI assistants like Claude:

1. Verify your server starts cleanly: `python server.py`
2. Register the server in Claude Desktop's `claude_desktop_config.json` so the app can launch it over stdio
3. Start a new conversation and test the `save_note` tool
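A minimal Claude Desktop entry looks like this; the path to `server.py` is a placeholder you would replace with your own:

```json
{
  "mcpServers": {
    "my-mcp-server": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}
```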
Best Practices
Error Handling
Always handle errors gracefully and return structured error messages instead of letting exceptions escape:

```python
@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    try:
        # Validate arguments and execute tool logic
        result = await execute_tool(name, arguments)
        return [TextContent(type="text", text=json.dumps(result))]
    except KeyError as e:
        return [TextContent(type="text", text=json.dumps({"error": f"Missing argument: {e}"}))]
    except Exception as e:
        return [TextContent(type="text", text=json.dumps({"error": f"Tool error: {e}"}))]
```
Logging
Add logging to debug issues. Log to stderr, never stdout, since stdout carries the JSON-RPC stream (`logging.basicConfig` writes to stderr by default):

```python
import logging

logging.basicConfig(level=logging.INFO)  # logs go to stderr by default
logger = logging.getLogger(__name__)

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    logger.info(f"Tool called: {name}")
    logger.debug(f"Arguments: {arguments}")
    # ... rest of implementation
```
Advanced Features
Prompts
In MCP, prompts are reusable templates the server advertises to clients; the Python SDK registers them with decorators rather than a single `set_prompt` call:

```python
from mcp.types import Prompt, GetPromptResult, PromptMessage

@server.list_prompts()
async def list_prompts() -> list[Prompt]:
    return [Prompt(name="memory-assistant", description="Guide the model to store notes")]

@server.get_prompt()
async def get_prompt(name: str, arguments: dict | None) -> GetPromptResult:
    text = ("You are a helpful memory assistant. "
            "Use the save_note tool to store important information.")
    return GetPromptResult(
        messages=[PromptMessage(role="user", content=TextContent(type="text", text=text))]
    )
```
Sampling
In MCP, sampling means the server can ask the connected client to run a model completion on its behalf, with control over parameters such as temperature and maximum tokens. The handler below forwards tool arguments to such a request; `execute_with_sampling` is a placeholder for the code that issues it:

```python
@server.call_tool()
async def call_tool(name: str, arguments: dict | None) -> list[TextContent]:
    data = arguments or {}
    # Execute the tool, possibly asking the client for a completion
    result = await execute_with_sampling(name, data)
    return [TextContent(type="text", text=json.dumps(result))]
```
Deployment
Deploy your MCP server for production use. Note that the stdio transport shown above is for locally launched servers; remote deployments typically expose an HTTP-based transport instead.
Option 1: Cloud Run
Package your server and deploy to Google Cloud Run:
```shell
# Package your server
gcloud builds submit --tag gcr.io/my-project/mcp-server

# Deploy to Cloud Run
gcloud run deploy mcp-server \
  --image gcr.io/my-project/mcp-server \
  --platform managed \
  --region us-central1
```
Option 2: Railway
Deploy to Railway by pointing its `railway.json` config at a Dockerfile:

```json
{
  "build": {
    "builder": "DOCKERFILE",
    "dockerfilePath": "Dockerfile"
  }
}
```
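The Dockerfile referenced above is not shown in this guide; a minimal sketch might look like the following, where the base image tag and the unpinned `mcp` install are assumptions:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY server.py .
RUN pip install --no-cache-dir mcp
CMD ["python", "server.py"]
```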
Conclusion
Building MCP servers with Python is straightforward and powerful. The combination of MCP's standardized protocol and Python's simplicity makes it easy to create AI-powered memory systems.
Ready to build something? Check out the official MCP documentation for more details.
Next Steps:
- Add database persistence (SQLite, PostgreSQL)
- Implement semantic search with embeddings
- Add user authentication and multi-tenancy
- Create a web dashboard for memory management
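The first of these next steps can be sketched with the standard library alone. The table schema and function names below are assumptions for illustration, not part of the guide's code:

```python
import sqlite3

def init_db(path: str = "notes.db") -> sqlite3.Connection:
    """Open the database and create the notes table if it does not exist."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS notes ("
        "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
        "  title TEXT NOT NULL,"
        "  content TEXT NOT NULL"
        ")"
    )
    return conn

def save_note(conn: sqlite3.Connection, title: str, content: str) -> int:
    """Insert a note and return its row id."""
    cur = conn.execute(
        "INSERT INTO notes (title, content) VALUES (?, ?)", (title, content)
    )
    conn.commit()
    return cur.lastrowid

def load_notes(conn: sqlite3.Connection) -> list[dict]:
    """Return all notes as dicts, ready for json.dumps in read_resource."""
    rows = conn.execute("SELECT id, title, content FROM notes").fetchall()
    return [{"id": r[0], "title": r[1], "content": r[2]} for r in rows]
```

Using `":memory:"` as the path gives a throwaway database for tests, while a file path persists notes across server restarts.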