MCP-LLM Bridge
by patruff
The MCP-LLM Bridge is a TypeScript implementation that connects local LLMs (via Ollama) to Model Context Protocol (MCP) servers. This allows open-source models to use the same tools and capabilities as Claude, enabling powerful local AI assistants.
What is MCP-LLM Bridge?
The MCP-LLM Bridge is a project that bridges local Large Language Models with MCP servers, enabling them to utilize various capabilities such as filesystem operations, web search, GitHub interactions, Google Drive & Gmail integration, memory/storage, and image generation. It translates between the LLM's outputs and the MCP's JSON-RPC protocol, allowing any Ollama-compatible model to use these tools just like Claude does.
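For context, an MCP tool invocation on the wire is a JSON-RPC 2.0 request. The envelope below follows the MCP tools/call convention; the specific tool name and arguments are only illustrative of a filesystem-style server, not taken from this project:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "read_file",
        "arguments": { "path": "/tmp/notes.txt" }
      }
    }

The bridge's job is to turn the local model's decision to use a tool into a request like this and to feed the server's result back into the conversation.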
How to use MCP-LLM Bridge?
To use the bridge, first install Ollama and pull the required model (e.g., qwen2.5-coder:7b-instruct). Then install the necessary MCP servers via npm and set up credentials for services such as Brave Search, GitHub, and Flux. Next, configure the bridge_config.json file with MCP server definitions and LLM settings. Finally, start the bridge with npm run start and interact with it using commands like list-tools or by sending regular text prompts.
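A typical setup might look like the sketch below; the filesystem server package is just one example of an MCP server you might install, and the model tag matches the one mentioned above:

    # Pull the local model through Ollama
    ollama pull qwen2.5-coder:7b-instruct

    # Install the MCP servers you plan to use (filesystem server shown as an example)
    npm install -g @modelcontextprotocol/server-filesystem

    # Start the bridge; at its prompt, type list-tools to see what was discovered
    npm run start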
Key features of MCP-LLM Bridge
Multi-MCP support with dynamic tool routing
Structured output validation for tool calls (a sketch follows this list)
Automatic tool detection from user prompts
Robust process management for Ollama
Detailed logging and error handling
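To illustrate the structured-output idea, a tool call emitted by the model can be validated before it is routed to an MCP server. This is a hedged sketch, not the bridge's actual code; the schema, field names, and use of zod are assumptions:

    import { z } from "zod";

    // Hypothetical shape of a tool call produced by the LLM (not the bridge's real schema).
    const ToolCallSchema = z.object({
      tool: z.string().min(1),          // name of the MCP tool to invoke
      arguments: z.record(z.unknown()), // free-form arguments, checked per tool later
    });

    type ToolCall = z.infer<typeof ToolCallSchema>;

    // Reject malformed output instead of forwarding it to an MCP server.
    function parseToolCall(raw: string): ToolCall | null {
      try {
        const result = ToolCallSchema.safeParse(JSON.parse(raw));
        return result.success ? result.data : null;
      } catch {
        return null; // the model did not return valid JSON at all
      }
    }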
Use cases of MCP-LLM Bridge
Filesystem manipulation using local LLMs
Web search and research with open-source models
Email and document management through local AI
Code and GitHub interactions with local LLMs
Image generation using local AI
FAQ from MCP-LLM Bridge
What LLM does this currently use?
It currently uses Qwen 2.5 7B (qwen2.5-coder:7b-instruct) through Ollama.
What MCP servers are currently supported?
Currently supported MCP servers include Filesystem operations, Brave Search, GitHub, Memory, Flux image generation, and Gmail & Drive.
How does the bridge detect which tool to use?
The bridge detects the appropriate tool from patterns in the user's prompt, such as email addresses or file/folder keywords, and uses contextual routing to pick the right MCP server.
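A minimal sketch of that kind of keyword routing, with made-up patterns and server names (the bridge's real heuristics are more involved):

    // Hypothetical routing table: map patterns in the prompt to MCP servers.
    const ROUTES: Array<{ pattern: RegExp; server: string }> = [
      { pattern: /[\w.+-]+@[\w-]+\.[\w.]+/, server: "gmail-drive" },          // email address
      { pattern: /\b(file|folder|directory|path)\b/i, server: "filesystem" },
      { pattern: /\b(search|look up|news about)\b/i, server: "brave-search" },
      { pattern: /\b(repo|pull request|issue|commit)\b/i, server: "github" },
    ];

    // Pick the first matching server; fall back to a plain chat response if none match.
    function detectServer(prompt: string): string | null {
      const route = ROUTES.find((r) => r.pattern.test(prompt));
      return route ? route.server : null;
    }

    console.log(detectServer("List the files in my project folder")); // "filesystem"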
What configuration is needed?
The bridge is configured through bridge_config.json, which includes MCP server definitions, LLM settings, and tool permissions.
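The project defines its own schema, so the fragment below is only a hedged sketch of the kind of structure that description implies; the key names, workspace path, and base URL are assumptions (11434 is Ollama's default API port):

    {
      "mcpServers": {
        "filesystem": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/workspace"]
        }
      },
      "llm": {
        "model": "qwen2.5-coder:7b-instruct",
        "baseUrl": "http://localhost:11434"
      }
    }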
What are the future improvements planned?
Future improvements include adding support for more MCPs, implementing parallel tool execution, adding streaming responses, enhancing error recovery, adding conversation memory, and supporting more Ollama models.