Multi-Model Advisor

by YuChenSSR

The Multi-Model Advisor is a Model Context Protocol (MCP) server that leverages multiple Ollama models to provide diverse AI perspectives on a single question. It creates a "council of advisors" approach where a synthesizing AI can combine multiple viewpoints for more comprehensive answers.

What is Multi-Model Advisor?

The Multi-Model Advisor is an MCP server that queries multiple Ollama models and combines their responses, offering diverse AI perspectives. It enables a "council of advisors" approach, allowing a synthesizing AI like Claude to provide more comprehensive answers by considering multiple viewpoints.

How to use Multi-Model Advisor?

First, install the server via Smithery or manually by cloning the repository, installing dependencies, and building the project. Next, configure the server by creating a .env file with your desired settings, including the Ollama API URL, the default models, and a system prompt for each model. Finally, connect the server to Claude for Desktop by adding it to claude_desktop_config.json and restarting Claude. You can then query the server through Claude, specifying which models to use. Illustrative sketches of each step follow.
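For reference, here is roughly what each step might look like in practice. The repository path, environment variable names, and model names below are assumptions based on common Ollama and MCP conventions; check the project's README for the exact values it expects.

Manual installation (repository URL assumed):

```bash
# Clone the repository, install dependencies, and build
git clone https://github.com/YuChenSSR/multi-ai-advisor-mcp.git
cd multi-ai-advisor-mcp
npm install
npm run build
```

A minimal .env sketch (variable and model names are illustrative):

```
OLLAMA_API_URL=http://localhost:11434
DEFAULT_MODELS=gemma3:1b,llama3.2:1b,deepseek-r1:1.5b
GEMMA_SYSTEM_PROMPT=You are a creative, big-picture advisor.
LLAMA_SYSTEM_PROMPT=You are a pragmatic, detail-oriented advisor.
DEEPSEEK_SYSTEM_PROMPT=You are a skeptical advisor who stress-tests ideas.
```

And a claude_desktop_config.json entry (the server key and build path are placeholders; the path must be absolute):

```json
{
  "mcpServers": {
    "multi-model-advisor": {
      "command": "node",
      "args": ["/absolute/path/to/multi-ai-advisor-mcp/build/index.js"]
    }
  }
}
```

After editing the config, restart Claude for Desktop so it picks up the new server.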

Key features of Multi-Model Advisor

  • Query multiple Ollama models with a single question (see the example after this list)

  • Assign different roles/personas to each model

  • View all available Ollama models on your system

  • Customize system prompts for each model

  • Configure via environment variables

  • Integrate seamlessly with Claude for Desktop
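As an example of the first feature above, once the server is connected you can name the models directly in a prompt to Claude, which calls the server's query tool on your behalf (the model names here are illustrative):

```
Ask llama3.2, gemma3, and deepseek-r1 the following question:
should we rewrite our monolith as microservices? Summarize where they agree and disagree.
```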

Use cases of Multi-Model Advisor

  • Generating diverse perspectives on a complex problem

  • Brainstorming new ideas with multiple AI assistants

  • Getting balanced advice from AI assistants with different personalities

  • Synthesizing information from multiple sources to create a comprehensive answer

FAQ from Multi-Model Advisor

How do I fix Ollama connection issues?

Ensure Ollama is running (ollama serve), check the OLLAMA_API_URL in your .env file, and verify Ollama is responding by accessing http://localhost:11434 in your browser.
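A quick way to run these checks from a terminal (standard Ollama commands; 11434 is Ollama's default port):

```bash
# Start the Ollama server if it isn't already running
ollama serve

# In another terminal, confirm the API is up; it should reply "Ollama is running"
curl http://localhost:11434
```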

What do I do if a model is reported as unavailable?

Check that you've pulled the model using ollama pull <model-name>, verify the exact model name using ollama list, and use the list-available-models tool to see all available models.
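For example (the model name is a placeholder):

```bash
# Download the model you want to use
ollama pull llama3.2

# List installed models to confirm the exact name and tag
ollama list
```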

Why are the MCP tools not showing up in Claude?

Ensure you've restarted Claude after updating the configuration, check that the absolute path in claude_desktop_config.json is correct, and look at Claude's logs for error messages.
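On macOS, Claude for Desktop writes MCP logs under ~/Library/Logs/Claude, which is a good place to look for startup errors (the exact path may vary by platform and version):

```bash
# Tail the MCP logs while restarting Claude to catch errors as they appear
tail -f ~/Library/Logs/Claude/mcp*.log
```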

What if I run out of RAM?

Some of the models you've configured may be too large for your machine's available memory. Try specifying a smaller model (see the Basic Usage) or upgrading your RAM.
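For instance, pointing the defaults at smaller model tags in your .env usually brings memory usage down substantially (the variable and model names are illustrative):

```
DEFAULT_MODELS=llama3.2:1b,qwen2.5:0.5b,gemma3:1b
```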

How do I list available models?

You can see all available models on your system by asking Claude: "Show me which Ollama models are available on my system".