MCP-RAG
by hulk-pham
This is a demo of a Retrieval-Augmented Generation (RAG) application integrated with an MCP server. It allows users to ask questions about a company using context-aware prompts and document retrieval.
What is MCP-RAG?
This is a Retrieval-Augmented Generation (RAG) application that integrates with an MCP server to provide context-aware answers to questions based on retrieved documents.
How to use MCP-RAG?
First install the required packages with pip install -r requirements.txt and set your OpenAI API key in the .env file. Then connect to the MCP server from Claude Desktop, Cursor, or your preferred IDE and use the process_query tool to ask questions about the company.
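For Claude Desktop, registering the server means adding an entry to claude_desktop_config.json. A minimal sketch follows; the server name and the server.py entry point are assumptions, since the repository layout isn't shown here:

```json
{
  "mcpServers": {
    "mcp-rag": {
      "command": "python",
      "args": ["server.py"],
      "env": { "OPENAI_API_KEY": "sk-..." }
    }
  }
}
```

After restarting the client, the process_query tool should appear in its tool list.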
Key features of MCP-RAG
MCP server integration
Document retrieval using vector search with ChromaDB
Context-aware prompt generation
Integration with LLM APIs
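The retrieval and prompt-generation features above can be sketched in a few lines. The real application uses ChromaDB and a neural embedding model; this dependency-free sketch substitutes a toy bag-of-words embedding to show the same flow (embed documents, embed the query, return the closest matches, wrap them in a prompt):

```python
# Sketch of vector-search retrieval + context-aware prompt assembly.
# A toy bag-of-words "embedding" stands in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Token-count vector; a real system uses dense neural embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    # Wrap the retrieved context and the question into an LLM prompt.
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In the actual application, retrieve corresponds to a ChromaDB collection query and build_prompt's output is sent to the LLM API.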
Use cases of MCP-RAG
Answering questions about company information
Providing context-aware responses based on documents
Improving LLM accuracy with retrieved knowledge
Integrating RAG capabilities into existing applications
FAQ from MCP-RAG
What is RAG?
Retrieval-Augmented Generation is a technique that combines information retrieval with text generation to improve the accuracy and relevance of LLM outputs.
What is MCP?
MCP (Model Context Protocol) is an open protocol that lets LLM applications such as Claude Desktop connect to external tools and data sources. This application exposes its RAG functionality through an MCP server that those clients can connect to.
How do I install the application?
Run pip install -r requirements.txt to install the necessary dependencies.
How do I configure the application?
Set the OPENAI_API_KEY environment variable in the .env file.
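The .env file is a single key-value entry; the key shown here is a placeholder, not a real credential:

```
OPENAI_API_KEY=sk-your-key-here
```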
What LLM APIs are supported?
The README doesn't list providers explicitly, but since configuration requires an OpenAI API key, the OpenAI API is supported out of the box.