RagDocs MCP Server
by heltonteixeira
RagDocs MCP Server is a Model Context Protocol server that provides Retrieval-Augmented Generation (RAG) capabilities. It uses the Qdrant vector database together with Ollama or OpenAI embeddings to enable semantic search and management of documentation through vector similarity.
What is RagDocs MCP Server?
RagDocs MCP Server is a server that provides Retrieval-Augmented Generation (RAG) capabilities. It allows you to add, search, list, and delete documentation, leveraging vector embeddings for semantic similarity.
How to use RagDocs MCP Server?
1. Install the server: npm install -g @mcpservers/ragdocs
2. Configure the server with your Qdrant instance URL and embedding provider (Ollama or OpenAI) in your MCP server configuration.
3. Use the available tools (add_document, search_documents, list_documents, delete_document) to manage and query your documentation.
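A client configuration for step 2 might look like the following. This is a sketch based on the standard MCP client configuration format; the QDRANT_URL variable name and the npx invocation are assumptions, not confirmed RagDocs settings (EMBEDDING_PROVIDER is documented in the FAQ below).

```json
{
  "mcpServers": {
    "ragdocs": {
      "command": "npx",
      "args": ["@mcpservers/ragdocs"],
      "env": {
        "QDRANT_URL": "http://127.0.0.1:6333",
        "EMBEDDING_PROVIDER": "ollama"
      }
    }
  }
}
```

For Qdrant Cloud, you would also supply the API key via an environment variable here, since (per the FAQ) cloud instances require one.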
Key features of RagDocs MCP Server
Add documentation with metadata
Semantic search through documents
List and organize documentation
Delete documents
Support for both Ollama (free) and OpenAI (paid) embeddings
Automatic text chunking and embedding generation
Vector storage with Qdrant
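To illustrate the automatic text chunking feature, here is a minimal sketch of fixed-size chunking with overlap. The function name, chunk size, and overlap values are illustrative assumptions, not RagDocs' actual parameters.

```typescript
// Split text into overlapping fixed-size chunks before embedding.
// chunkSize and overlap are hypothetical defaults for illustration.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    // Take up to chunkSize characters from the current position.
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    // Step forward, keeping `overlap` characters of context.
    start += chunkSize - overlap;
  }
  return chunks;
}
```

Each chunk would then be embedded (via Ollama or OpenAI) and stored in Qdrant with its metadata; the overlap helps preserve context that straddles a chunk boundary.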
Use cases of RagDocs MCP Server
Creating a searchable knowledge base for internal documentation
Building a chatbot that can answer questions based on a collection of documents
Improving the accuracy of search results by using semantic similarity
Organizing and managing large collections of documents
FAQ from RagDocs MCP Server
What is the default embedding provider?
The default embedding provider is Ollama, which is free to use.
Do I need an API key to use Qdrant?
You only need an API key if you are using Qdrant Cloud. For local instances, you do not need an API key.
What are the prerequisites for running RagDocs?
You need Node.js 16 or higher, a Qdrant instance (local or cloud), and either Ollama or an OpenAI API key.
How do I configure the server to use OpenAI?
Set the EMBEDDING_PROVIDER environment variable to openai and provide your OpenAI API key in the OPENAI_API_KEY environment variable.
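In a shell, that configuration looks like this (the key value is a placeholder; how you pass these variables depends on your MCP client):

```shell
# Switch RagDocs from the default Ollama provider to OpenAI embeddings.
export EMBEDDING_PROVIDER=openai
# Replace the placeholder with your actual OpenAI API key.
export OPENAI_API_KEY="your-openai-api-key"
```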
What is the default embedding model used by Ollama?
The default embedding model used by Ollama is nomic-embed-text.
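If you use the default Ollama provider, the model must be available locally first. Pulling it uses the standard Ollama CLI (this assumes a running local Ollama installation):

```shell
ollama pull nomic-embed-text
```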