MCP Router Server
by r2r0garza
The MCP Router Server is a FastAPI-based server that routes requests to multiple LLM providers via a unified REST API. It simplifies switching between providers and offers a consistent interface for interacting with various LLMs.
What is MCP Router Server?
The MCP Router Server is a REST API that acts as a unified interface for interacting with multiple Large Language Model (LLM) providers. It abstracts away the complexities of each provider, allowing users to easily switch between them without modifying their code.
How to use MCP Router Server?
1. Install dependencies: `pip install -r requirements.txt`
2. Copy and edit the `.env` file to configure your provider keys and options.
3. Run the server: `uvicorn app.main:app --reload`
4. Test the health check endpoint at `/health`.
5. Send POST requests to the `/ask` endpoint with a JSON payload containing identity, memory, tools, docs, and extra information, as in the sketch below.
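A minimal client sketch in Python follows, assuming the server runs on uvicorn's default `localhost:8000`. The payload values and shapes are illustrative assumptions; the README only names the five sections, so check the project's schema for the exact format.

```python
# Minimal client sketch for the MCP Router Server.
# Field values and shapes below are assumptions, not the project's exact schema.
import requests

BASE_URL = "http://localhost:8000"  # default uvicorn host/port (assumption)

# 1. Verify the server is up via the health check endpoint.
health = requests.get(f"{BASE_URL}/health")
print(health.status_code, health.text)

# 2. Send a context-aware chat request to /ask. The payload carries the
#    five sections named in the docs: identity, memory, tools, docs, extra.
payload = {
    "identity": "You are a helpful assistant.",  # hypothetical value
    "memory": [],   # prior conversation turns (assumed shape)
    "tools": [],    # tool definitions (assumed shape)
    "docs": [],     # reference documents (assumed shape)
    "extra": {},    # provider-specific options (assumed shape)
}
response = requests.post(f"{BASE_URL}/ask", json=payload)
print(response.json())
```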
Key features of MCP Router Server
- Switch LLM providers by editing `.env`
- `/ask` endpoint for context-aware chat requests
- Health check at `/health`
- Provider abstraction for OpenAI, LM Studio, OpenRouter, Ollama, Anthropic Claude, and Azure Foundry AI
- Easy deployment and configuration
Use cases of MCP Router Server
- Building chatbots that leverage multiple LLMs for different tasks
- Creating a unified API for accessing various LLM providers
- Experimenting with different LLMs to find the best fit for a specific use case
- Developing applications that require failover between LLM providers
FAQ from MCP Router Server
What LLM providers are supported?
The server supports OpenAI, LM Studio, OpenRouter, Ollama, Anthropic Claude, and Azure Foundry AI.
How do I configure the server to use a specific provider?
You can configure the server by editing the `.env` file and setting the appropriate provider keys and options.
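As a rough illustration, a `.env` might look like the sketch below. The variable names here (`PROVIDER`, `OPENAI_API_KEY`) are hypothetical placeholders, not the project's actual keys; refer to the bundled `.env` template for the real names.

```
# Hypothetical .env sketch -- the actual variable names are defined by the
# project's own .env template, not by this example.
PROVIDER=openai            # which backend to route to (assumed key name)
OPENAI_API_KEY=sk-...      # provider credential (assumed key name)
```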
What is the `/ask` endpoint used for?
The `/ask` endpoint is used for sending context-aware chat requests to the server. It accepts a JSON payload containing identity, memory, tools, docs, and extra information.
How do I deploy the server?
The server can be easily deployed using uvicorn. Refer to the Quickstart section in the README for detailed instructions.
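For example, a production-style launch might drop `--reload` and bind explicitly; the host, port, and worker count below are illustrative values, not settings taken from the README.

```bash
# Illustrative production-style invocation (host/port/workers are assumptions).
uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 2
```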
What is the license for this project?
The project is licensed under the MIT License.