ollama-MCP-server
by MCP-Mirror
This MCP server integrates local Ollama LLM instances with MCP-compatible applications, providing task decomposition, result evaluation, and workflow management over standardized MCP communication.
What is ollama-MCP-server?
The ollama-MCP-server is a Model Context Protocol (MCP) server that communicates with Ollama. It allows MCP-compatible applications to interact with local Ollama LLM instances for advanced task decomposition, evaluation, and workflow management.
How to use ollama-MCP-server?
1. Install the server: `pip install ollama-mcp-server`.
2. Configure the server using environment variables (e.g., `OLLAMA_HOST`, `DEFAULT_MODEL`).
3. Set up Ollama with the desired models.
4. Configure your MCP client (e.g., Claude Desktop) to use the server.
5. Use the provided tools (`decompose-task`, `evaluate-result`, `run-model`) via the MCP protocol; see the sketch after this list.
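For illustration, here is a minimal sketch of step 5 using the official MCP Python SDK over stdio. The launch command, the environment values, and the tool argument names (`model`, `prompt`) are assumptions rather than documented parameters of this server; check its tool schemas for the actual names.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed launch command and environment; adjust to match your installation.
server = StdioServerParameters(
    command="python",
    args=["-m", "ollama_mcp_server"],
    env={"OLLAMA_HOST": "http://localhost:11434", "DEFAULT_MODEL": "llama3"},
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List the tools the server exposes (decompose-task, evaluate-result, run-model).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Hypothetical argument names; the server's tool schema is authoritative.
            result = await session.call_tool(
                "run-model",
                arguments={"model": "llama3", "prompt": "Summarize MCP in one sentence."},
            )
            print(result.content)

asyncio.run(main())
```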
Key features of ollama-MCP-server
Task decomposition for complex problems
Result evaluation and validation
Ollama model management and execution
Standardized communication via MCP protocol
Enhanced error handling with detailed messages
Performance optimizations (connection pooling, LRU cache)
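The connection pooling and LRU caching are internal to the server; purely as an illustration of the caching idea (not the project's actual code), a response cache keyed on the model and prompt could be as simple as:

```python
from functools import lru_cache

import requests  # a requests.Session would add connection pooling on top of this

OLLAMA_HOST = "http://localhost:11434"

@lru_cache(maxsize=128)
def generate_cached(model: str, prompt: str) -> str:
    """Return the completion for a (model, prompt) pair, reusing cached answers."""
    response = requests.post(
        f"{OLLAMA_HOST}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["response"]
```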
Use cases of ollama-MCP-server
Decomposing complex tasks into manageable subtasks
Evaluating the results of tasks against specific criteria
Running Ollama models with specified parameters
Integrating LLMs into MCP-compatible applications
Managing and orchestrating LLM workflows
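Building on the session from the earlier sketch, the last two use cases could be chained in a hypothetical workflow like the one below; the argument names (`task`, `result`, `criteria`) and the shape of the returned content are guesses, not the server's documented schema.

```python
async def plan_and_check(session) -> None:
    # Break a high-level task into subtasks.
    plan = await session.call_tool(
        "decompose-task",
        arguments={"task": "Write release notes for version 1.2"},
    )
    print(plan.content)  # expected: a list of subtasks

    # Evaluate a draft result against explicit criteria.
    verdict = await session.call_tool(
        "evaluate-result",
        arguments={
            "result": "Draft release notes ...",
            "criteria": ["mentions breaking changes", "under 300 words"],
        },
    )
    print(verdict.content)
```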
FAQ from ollama-MCP-server
What is the Model Context Protocol (MCP)?
The Model Context Protocol (MCP) is a standard for communication between applications and language models, enabling advanced features like task decomposition and evaluation.
How do I specify which Ollama model to use?
Models are specified in the following order of precedence: tool call parameters, MCP configuration file, environment variables (`OLLAMA_DEFAULT_MODEL`), and a default value (`llama3`).
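As a rough sketch of that precedence (the function name and signature below are illustrative, not part of the server's API):

```python
import os

def resolve_model(tool_param: str | None = None, config_model: str | None = None) -> str:
    """Pick a model: tool call parameter > MCP config file > env var > built-in default."""
    return (
        tool_param
        or config_model
        or os.environ.get("OLLAMA_DEFAULT_MODEL")
        or "llama3"
    )

print(resolve_model("mistral"))  # tool call parameter wins
print(resolve_model())           # falls back to OLLAMA_DEFAULT_MODEL, then "llama3"
```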
What are the available resources?
The server implements the following resources: `task://` for individual tasks, `result://` for evaluation results, and `model://` for available Ollama models.
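Reusing a `ClientSession` as in the earlier sketch, the resources could be listed and read like this; the concrete URI `model://llama3` is a guess based on the schemes above.

```python
from pydantic import AnyUrl

async def inspect_resources(session) -> None:
    # Enumerate everything the server exposes under task://, result://, and model://.
    listing = await session.list_resources()
    for resource in listing.resources:
        print(resource.uri, resource.name)

    # Read a single resource (URI is hypothetical).
    model_info = await session.read_resource(AnyUrl("model://llama3"))
    for item in model_info.contents:
        print(item)
```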
How do I run the tests?
Run all tests with `python -m unittest discover`. Run specific tests with `python -m unittest tests.test_integration`.
How do I contribute to the project?
Fork the repository, create a feature branch, commit your changes, push to the branch, and open a pull request.