MCP Server: Ollama Deep Researcher

by MCP-Mirror

This is an MCP server adaptation of LangChain's Ollama Deep Researcher, exposing its deep research capabilities as MCP tools. It allows AI assistants to perform in-depth research on topics using local LLMs via Ollama, within the Model Context Protocol (MCP) ecosystem.


What is MCP Server: Ollama Deep Researcher?

MCP Server: Ollama Deep Researcher provides deep research capabilities as MCP tools, allowing AI assistants to perform in-depth research on topics using local LLMs hosted by Ollama. It combines web search, summarization, and iterative refinement to produce comprehensive research summaries.

How to use MCP Server: Ollama Deep Researcher?

To use this server, install Node.js, Python, and Ollama. The server itself can be installed either the standard way (cloning the repository and installing its dependencies) or via Docker. After installation, configure your MCP client (the Claude Desktop app or the Cline VS Code extension) to connect to the server, supplying the necessary API keys (Tavily, Perplexity, LangSmith). You can then call the 'research' tool with a topic to initiate the research process.
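
As a concrete illustration, MCP clients such as the Claude Desktop app are typically configured through a JSON file (claude_desktop_config.json). The entry below is a minimal sketch: the server name, launch command, script path, and environment variable names are illustrative assumptions, not taken from this project's documentation.

    {
      "mcpServers": {
        "ollama-deep-researcher": {
          "command": "node",
          "args": ["/path/to/mcp-server-ollama-deep-researcher/build/index.js"],
          "env": {
            "TAVILY_API_KEY": "tvly-...",
            "PERPLEXITY_API_KEY": "pplx-...",
            "LANGSMITH_API_KEY": "ls-..."
          }
        }
      }
    }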

Key features of MCP Server: Ollama Deep Researcher

  • Performs iterative web research using LLMs via Ollama

  • Integrates with Tavily and Perplexity APIs for web search

  • Provides research results as MCP resources for persistent access

  • Supports tracing and monitoring via LangSmith

  • Offers configurable research parameters (maxLoops, llmModel, searchApi)

Use cases of MCP Server: Ollama Deep Researcher

  • Enabling AI assistants to conduct in-depth research on various topics

  • Generating comprehensive summaries of complex subjects

  • Identifying knowledge gaps and iteratively refining research queries

  • Providing citations to all sources used during the research process

FAQ from MCP Server: Ollama Deep Researcher

How do I configure the server to use a specific LLM?

You can configure the llmModel parameter in the configure tool or through the MCP client configuration file. Specify the name of the Ollama model you want to use (e.g., 'deepseek-r1:8b').
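
For example, a call to the configure tool might carry arguments like these (the parameter names maxLoops, llmModel, and searchApi come from the feature list above; the values are illustrative):

    {
      "llmModel": "deepseek-r1:8b",
      "searchApi": "tavily",
      "maxLoops": 3
    }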

What API keys are required to run the server?

You need API keys for Tavily or Perplexity (for web search) and LangSmith (for tracing and monitoring). These keys should be set in the .env file or the MCP client configuration.
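
A .env file along these lines would cover all three services; the exact variable names are assumptions and may differ from the project's own .env template:

    # Variable names below are assumed; check the project's own .env example
    TAVILY_API_KEY=tvly-...
    PERPLEXITY_API_KEY=pplx-...
    LANGSMITH_API_KEY=ls-...
    LANGSMITH_PROJECT=ollama-deep-researcher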

How do I access the research results?

Research results are stored as MCP resources and can be accessed via research://{topic} URIs. They also appear in the MCP client's resource panel.
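
As a rough sketch of programmatic access with the official TypeScript MCP SDK (@modelcontextprotocol/sdk): launch the server over stdio, invoke the 'research' tool, then read the stored resource. The script path, topic, and tool argument shape are assumptions, and SDK method signatures vary somewhat between versions.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Spawn the server as a child process over stdio (path is a placeholder).
    const transport = new StdioClientTransport({
      command: "node",
      args: ["/path/to/mcp-server-ollama-deep-researcher/build/index.js"],
    });

    const client = new Client({ name: "example-client", version: "0.1.0" });
    await client.connect(transport);

    // Run the research loop on a topic (argument name assumed to be "topic").
    await client.callTool({ name: "research", arguments: { topic: "superconductors" } });

    // Results persist as MCP resources under research://{topic}.
    const result = await client.readResource({ uri: "research://superconductors" });
    console.log(result.contents[0]);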

How can I monitor the research process?

The server integrates with LangSmith for tracing and monitoring. You can access detailed traces, performance metrics, and debugging information at https://smith.langchain.com under your configured project name.

What if I encounter Ollama connection issues?

Ensure Ollama is running by executing ollama list in your terminal. If it isn't, close the desktop app and run ollama serve to start Ollama in the foreground. Then check whether Ollama is reachable at localhost:11434, 0.0.0.0:11434, or 127.0.0.1:11434.
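
A quick terminal check following those steps (the root endpoint reply is standard Ollama behavior):

    ollama list                  # confirms the daemon responds and lists installed models
    ollama serve                 # runs Ollama in the foreground if the app isn't running
    curl http://localhost:11434  # should answer "Ollama is running"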