This repository provides a reference implementation of the Model Context Protocol (MCP) for tool calling between LLMs and applications.

The project implements both a client and a server for the protocol, demonstrating local tool calling via stdio, remote tool calling via HTTP, and integration with AWS Bedrock and Claude 3.7. MCP defines standard formats for function definitions, function calls, and responses, enabling structured interaction between LLMs and external tools.
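As a rough illustration of those formats: a function definition describes a tool with a JSON Schema for its parameters, a function call names the tool and supplies arguments, and a response carries the result. The exact field names live in core/schema.py; the sketch below is an approximation based on the calculator examples later in this README, and the response envelope in particular is an assumption.

{
  "name": "calculator",
  "description": "Performs arithmetic operations",
  "parameters": {
    "type": "object",
    "properties": {
      "operation": {"type": "string", "enum": ["add", "subtract", "multiply", "divide"]},
      "a": {"type": "number"},
      "b": {"type": "number"}
    }
  }
}

A call and its response then look roughly like:

{"name": "calculator", "parameters": {"operation": "add", "a": 5, "b": 3}}
{"result": 8}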
This project was developed with the assistance of Claude 3.7. The llm/
directory contains design specifications, implementation plans, technical decisions, and development progress, offering insights into the AI-assisted development process.
mcp-example/
├── core/              # Core protocol implementation
│   ├── schema.py      # Protocol schema definitions
│   ├── validation.py  # Schema validation utilities
│   ├── registry.py    # Tool registry
│   └── executor.py    # Tool executor
├── tools/             # Tool implementations
│   ├── calculator.py  # Calculator tool
│   └── text.py        # Text processing tool
├── adapters/          # Interface adapters
│   ├── stdio/         # Command-line interface
│   ├── http/          # HTTP client for remote servers
│   └── aws/           # AWS Bedrock / Claude adapter
├── server/            # Server implementation
│   ├── app.py         # FastAPI server
│   └── main.py        # Server runner
├── examples/          # Usage examples
├── tests/             # Test suite
└── llm/               # Implementation documentation
git clone https://github.com/yourusername/mcp-example.git
cd mcp-example
Option 1: Using Poetry:

poetry install

Option 2: Using venv:

python3 -m venv venv
source venv/bin/activate  # On Windows, use: venv\Scripts\activate
pip install -e .
The command-line interface provides a way to interact with tools locally:
# With Poetry
poetry run python -m mcp_example.adapters.stdio.cli
# With venv
python -m mcp_example.adapters.stdio.cli
This will start a REPL where you can use the following commands (an example session follows below):

list: List all available functions
command {"name": "calculator", "parameters": {"operation": "add", "a": 5, "b": 3}}: Call a function with a JSON function call
help command: Show help for a specific command
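An illustrative session follows; the prompt and output formatting are approximations, not taken verbatim from the CLI, and the function names match the tools shipped in tools/:

> list
calculator
transform_text
analyze_text
> command {"name": "calculator", "parameters": {"operation": "add", "a": 5, "b": 3}}
8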
The FastAPI server provides a remote API for tool calling:
# With Poetry
poetry run python -m mcp_example.server.main --host 0.0.0.0 --port 8000
# With venv
python -m mcp_example.server.main --host 0.0.0.0 --port 8000
By default, this starts a server on http://127.0.0.1:8000. You can access the API documentation at http://127.0.0.1:8000/docs.
Server options:

--host: Host to bind to (default: 127.0.0.1)
--port: Port to listen on (default: 8000)
--reload: Enable auto-reload for development
--log-level: Set logging level (debug, info, warning, error)

Once the server is running, you can test it using curl:
# List available functions
curl -X GET http://localhost:8000/api/functions -H "X-API-Key: test-key"

# Call the calculator function
curl -X POST http://localhost:8000/api/functions/call \
  -H "X-API-Key: test-key" \
  -H "Content-Type: application/json" \
  -d '{"name": "calculator", "parameters": {"operation": "add", "a": 5, "b": 3}}'

# Transform text to uppercase
curl -X POST http://localhost:8000/api/functions/call \
  -H "X-API-Key: test-key" \
  -H "Content-Type: application/json" \
  -d '{"name": "transform_text", "parameters": {"operation": "uppercase", "text": "hello world"}}'

# Analyze a block of text
curl -X POST http://localhost:8000/api/functions/call \
  -H "X-API-Key: test-key" \
  -H "Content-Type: application/json" \
  -d '{"name": "analyze_text", "parameters": {"text": "Hello world. This is a test."}}'
If you encounter any issues:

# Verify that the server's dependencies are installed
pip list | grep uvicorn  # Should show uvicorn is installed

# Run the server with verbose logging to diagnose problems
python -m mcp_example.server.main --log-level debug
The server provides the following endpoints:

GET /api/functions: List all available functions
GET /api/functions/{name}: Get a specific function definition
POST /api/functions/call: Call a function
POST /api/tools/call: Call a tool
POST /api/execute: Execute a function call from text
WebSocket /api/functions/stream: Stream function results
WebSocket /api/tools/stream: Stream tool results

To call the server from a Python application:
from mcp_example.adapters.http.client import MCPClient

# Create client
client = MCPClient(
    base_url="http://localhost:8000",
    api_key="test-key"  # Use the default test key
)

# List available functions
functions = client.list_functions()
for func in functions:
    print(f"{func.name}: {func.description}")

# Call a function
response = client.call_function(
    name="calculator",
    parameters={"operation": "add", "a": 5, "b": 3}
)
print(f"Result: {response.result}")
The MCP implementation supports streaming results from long-running operations using WebSockets. This is particularly useful for operations that take time to complete and can report incremental progress, or that produce large results best delivered in parts.
The AsyncMCPClient provides methods for streaming function and tool results:
import asyncio
from mcp_example.adapters.http.client import AsyncMCPClient

async def main():
    # Create async client
    client = AsyncMCPClient("http://localhost:8000", api_key="test-key")

    # Stream results from a long-running function
    print("Streaming function results:")
    async for chunk in client.stream_function(
        name="long_running_operation",
        parameters={"duration": 5}
    ):
        # Process each chunk as it arrives
        if chunk.status == "in_progress":
            print(f"Progress: {chunk.result}")
        elif chunk.status == "complete":
            print(f"Final result: {chunk.result}")
        elif chunk.status == "error":
            print(f"Error: {chunk.error}")

    await client.close()

if __name__ == "__main__":
    asyncio.run(main())
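Note that long_running_operation above is illustrative rather than one of the tools bundled in tools/. A server-side tool that emits progress chunks might look roughly like the sketch below; the registry.register decorator and the report_progress callback are hypothetical stand-ins for whatever core/registry.py and the streaming endpoints actually expose:

import time
from mcp_example.core.registry import registry  # hypothetical import path

# Hypothetical registration API; see core/registry.py for the real one.
@registry.register(
    name="long_running_operation",
    description="Sleeps for `duration` seconds, reporting progress once per second",
)
def long_running_operation(duration: int, report_progress=None):
    for elapsed in range(duration):
        time.sleep(1)
        if report_progress is not None:
            # Emit an "in_progress" chunk to any streaming clients
            report_progress({"elapsed": elapsed + 1, "total": duration})
    # The return value becomes the final "complete" chunk's result
    return {"status": "done", "duration": duration}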
Each streaming chunk contains:

id: Unique identifier for the chunk
status: Status of the operation ("in_progress", "complete", or "error")
result: Partial or final result data
error: Error information if status is "error"
timestamp: When the chunk was created

The HTTP client supports caching of tool and function results to improve performance and reduce redundant network calls. This is particularly useful for idempotent operations or when the same tool is called repeatedly with identical parameters.
To use caching with the HTTP client:
# Create client with caching options
client = MCPClient(
    base_url="http://localhost:8000",
    api_key="test-key",
    cache_enabled=True,   # Enable/disable caching (default: True)
    cache_max_size=100,   # Maximum number of cache entries (default: 100)
    cache_ttl=300.0       # Cache time-to-live in seconds (default: 300.0)
)

# First call will hit the server
result1 = client.call_function("calculator.add", {"a": 1, "b": 2})

# Second call with the same parameters will use the cached result
result2 = client.call_function("calculator.add", {"a": 1, "b": 2})

# Bypass the cache for specific calls
result3 = client.call_function("calculator.add", {"a": 1, "b": 2}, use_cache=False)

# Invalidate a specific cache entry
client.invalidate_cache_entry("calculator.add", {"a": 1, "b": 2})

# Clear the entire cache
client.clear_cache()
Cache behavior: results are keyed by function name and parameters, entries expire after cache_ttl seconds, and the cache holds at most cache_max_size entries.
The MCP implementation includes integration with AWS Bedrock and specifically with Claude 3.7. This allows you to leverage Claude's advanced capabilities for natural language understanding and function calling while using the standard MCP tools.
Configure AWS credentials via environment variables:

export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-west-2"  # or your preferred region
Basic usage example:
from mcp_example.adapters.aws.claude import ClaudeAdapter, ClaudeMessage, ClaudeRole
from mcp_example.core.schema import FunctionDefinition

# Create a Claude adapter
adapter = ClaudeAdapter()

# Create messages for Claude
messages = [
    ClaudeMessage(role=ClaudeRole.USER, content="What's 42 + 7?")
]

# Define a calculator function that Claude can call
calculator_function = FunctionDefinition(
    name="calculator",
    description="Performs arithmetic operations",
    parameters={
        "type": "object",
        "properties": {
            "operation": {
                "type": "string",
                "enum": ["add", "subtract", "multiply", "divide"],
                "description": "The operation to perform"
            ...