MCP-Bridge
by SecretiveShell
MCP-Bridge acts as a bridge between the OpenAI API and MCP tools, allowing developers to leverage MCP tools through the OpenAI API interface. It provides endpoints to interact with MCP tools in a way compatible with the OpenAI API.
What is MCP-Bridge?
MCP-Bridge is designed to facilitate the integration of MCP tools with the OpenAI API. It provides a set of endpoints for interacting with MCP tools in a way that is compatible with the OpenAI API, allowing you to use any OpenAI API client with any MCP tool, even if the client has no explicit MCP support.
How to use MCP-Bridge?
The recommended way to install MCP-Bridge is with Docker; manual installation is also supported. Once installed, you interact with it exactly as you would with the OpenAI API, and the interactive documentation is available at http://yourserver:8000/docs. MCP-Bridge also provides an SSE bridge for external MCP clients.
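For example, any OpenAI client library can be pointed at the bridge. Below is a minimal sketch using the official openai Python package; the server address and model name are placeholders for your own deployment.

```python
# Minimal sketch: talk to MCP-Bridge with the official openai client.
# "yourserver" and "my-model" are placeholders; use your own host and
# whatever model your inference engine actually serves.
from openai import OpenAI

client = OpenAI(
    base_url="http://yourserver:8000/v1",
    api_key="unused",  # set a real key if API key authentication is enabled
)

response = client.chat.completions.create(
    model="my-model",
    messages=[{"role": "user", "content": "What tools do you have access to?"}],
)
print(response.choices[0].message.content)
```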
Key features of MCP-Bridge
Non-streaming chat completions with MCP
Streaming chat completions with MCP
Non-streaming completions without MCP
MCP tools support
MCP sampling
SSE Bridge for external clients
Use cases of MCP-Bridge
Integrating MCP tools with OpenAI API
Using OpenAI API clients with MCP tools
Outsourcing MCP server complexity
Enabling external chat apps with explicit MCP support to use MCP-Bridge as an MCP server
Testing MCP configurations
FAQ from MCP-Bridge
How do I add new MCP servers?
Edit the config.json file.
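A sketch of what such an entry can look like, following the common MCP launch convention (a command plus arguments); consult the MCP-Bridge documentation for the exact schema:

```json
{
  "inference_server": {
    "base_url": "http://example.com:8000/v1",
    "api_key": "None"
  },
  "mcp_servers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```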
How do I enable API key authentication?
Add an api_key field to your config.json file.
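For example (a sketch; verify the exact field placement against the current documentation):

```json
{
  "api_key": "your-secret-key"
}
```

Clients would then be expected to supply the key as they do for the OpenAI API, via the Authorization header.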
What is the recommended installation method?
The recommended way to install MCP-Bridge is to use Docker.
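A typical Docker workflow looks like the following (a sketch; check the project repository for current instructions):

```
git clone https://github.com/SecretiveShell/MCP-Bridge.git
cd MCP-Bridge
docker-compose up --build -d
```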
Where can I find documentation?
View the documentation at http://yourserver:8000/docs.
How does MCP-Bridge work?
MCP-Bridge sits between the OpenAI-compatible client and the inference engine. An incoming request is modified to include tool definitions for all MCP tools available on the configured MCP servers. The request is then forwarded to the inference engine, which uses the tool definitions to produce tool calls. MCP-Bridge then manages the calls to the tools, appends the tool call results to the conversation, and sends the request to the inference engine again so the LLM can produce a final response. Finally, the response is returned to the client through the OpenAI-compatible API.
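The sketch below illustrates this loop in simplified Python. The helpers list_mcp_tools and call_mcp_tool are hypothetical stand-ins for MCP-Bridge's internal MCP client code, not part of its actual API; the real bridge also handles streaming, sampling, and error cases.

```python
# Simplified illustration of the MCP-Bridge request flow described above.
from openai import OpenAI

inference = OpenAI(base_url="http://inference-engine:8000/v1", api_key="unused")

def list_mcp_tools() -> list[dict]:
    """Hypothetical stand-in: collect OpenAI-format tool schemas from MCP servers."""
    return []

def call_mcp_tool(name: str, arguments: str) -> str:
    """Hypothetical stand-in: run a tool on its MCP server and return the result."""
    return ""

def handle_chat_request(messages: list[dict]) -> str:
    # 1. Inject tool definitions for every tool on the configured MCP servers.
    tools = list_mcp_tools()

    # 2. Forward the augmented request to the inference engine.
    response = inference.chat.completions.create(
        model="my-model", messages=messages, tools=tools
    )
    message = response.choices[0].message

    # 3. While the model keeps requesting tools, execute the calls...
    while message.tool_calls:
        messages.append(message)
        for call in message.tool_calls:
            result = call_mcp_tool(call.function.name, call.function.arguments)
            # 4. ...and append each result so the LLM sees it on the next round.
            messages.append(
                {"role": "tool", "tool_call_id": call.id, "content": result}
            )
        response = inference.chat.completions.create(
            model="my-model", messages=messages, tools=tools
        )
        message = response.choices[0].message

    # 5. Return the final assistant response to the client.
    return message.content
```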