MCP-ChatBot
by arkCyber
A powerful Rust-based chatbot MCP (Model Context Protocol) framework with multi-server support and tool integration capabilities.
Features
- Multi-AI Support: Seamlessly switch between Ollama (local) and OpenAI
- Tool Integration: Built-in support for memory, SQLite, and file operations
- Multi-Server Architecture: Run multiple specialized servers simultaneously
- Interactive CLI: User-friendly command-line interface with history
- Customizable Prompts: Server-specific system prompts via YAML configuration
- Secure: Environment-based API key management
- RAG Support: Retrieval-Augmented Generation with a Qdrant vector database
- Voice Input: Speech-to-text capabilities using Whisper
- Advanced NLP: Powered by rust-bert for text embeddings and language models
- Deep Learning: PyTorch integration via tch for advanced model operations
- Text Processing: Efficient tokenization with Hugging Face tokenizers
Model Context Protocol (MCP)
MCP (Model Context Protocol) is a flexible protocol designed to enhance AI model interactions by providing structured context and tool integration capabilities. The protocol enables:
Core Components
- Context Management
  - Dynamic context switching between different AI models
  - Context persistence across sessions
  - Server-specific context configurations
- Tool Integration
  - Standardized tool interface for AI models
  - Automatic tool discovery and registration
  - Tool execution with retry mechanisms
  - Tool response processing and formatting
- Server Architecture
  - Modular server design for specialized operations
  - Inter-server communication protocol
  - Resource management and cleanup
  - Server-specific prompt configurations
- Protocol Features
  - JSON-based message format
  - Asynchronous operation support
  - Error handling and recovery
  - Resource cleanup and management
  - Tool execution monitoring
Protocol Flow
- Initialization
  - Server registration and configuration
  - Tool discovery and registration
  - Context initialization
- Operation
  - Context-aware tool execution
  - Response processing and formatting
  - Error handling and recovery
  - Resource management
- Cleanup
  - Resource release
  - Server shutdown
  - Context persistence
Use Cases
- Multi-Model Collaboration: Coordinate multiple AI models for complex tasks
- Tool Integration: Seamlessly integrate external tools and services
- Context Management: Maintain consistent context across different operations
- Resource Management: Efficiently manage system resources and cleanup
Prerequisites
- Rust 1.70 or higher
- Ollama (for local AI support)
- OpenAI API key (optional, for OpenAI support)
- Qdrant vector database (for RAG support)
- Whisper model (for voice input)
AI Models and Tools
The project leverages several powerful AI and NLP tools:
Text Processing and Embeddings
- rust-bert: A Rust implementation of Hugging Face's transformers library
  - Provides state-of-the-art text embeddings
  - Supports multiple language models
  - Enables efficient text processing and understanding
Deep Learning
- tch (PyTorch): Rust bindings for PyTorch
  - Enables deep learning model operations
  - Supports model inference and training
  - Provides GPU acceleration when available
Text Tokenization
- tokenizers: Hugging Face's tokenizers library
  - Efficient text tokenization
  - Supports multiple tokenization algorithms
  - Enables consistent text processing across different models
Installation
- Clone the repository:
git clone https://github.com/arksong/mcp-chatbot.git
cd mcp-chatbot
- Build the project:
cargo build --release
Configuration
- Create a .env file in the project root:
LLM_API_KEY=your_ollama_key
OPENAI_API_KEY=your_openai_key  # Optional
- Configure servers in src/servers_config.json
- Customize prompts in mcp_prompts.yaml
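The exact schema of src/servers_config.json is not shown in this README. A hypothetical entry might look like the fragment below; every key name here is illustrative, so check the file shipped with the repository for the real schema.

```json
{
  "servers": [
    {
      "name": "sqlite",
      "enabled": true,
      "database_path": "data/chatbot.db"
    },
    {
      "name": "memory",
      "enabled": true
    }
  ]
}
```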
Usage
Using Ollama (Local AI)
- Install Ollama:
# macOS
brew install ollama
# Linux
curl -fsSL https://ollama.ai/install.sh | sh
- Pull the required model:
ollama pull llama3.2:latest
- Start the Ollama service:
ollama serve
- Run the chatbot:
cargo run
Using OpenAI
- Set your OpenAI API key:
export OPENAI_API_KEY=your_api_key
- Run the chatbot:
cargo run
- Switch to OpenAI using the /ai command
Using RAG (Retrieval Augmented Generation)
- Start Qdrant vector database using Docker:
# Using the provided setup script
./scripts/setup_qdrant.sh
# Or manually using Docker
docker run -d \
--name qdrant \
-p 6333:6333 \
-p 6334:6334 \
-v "$(pwd)/qdrant_storage:/qdrant/storage" \
qdrant/qdrant:latest
- Add documents to the RAG database:
# Use the /rag-add command in the chatbot
/rag-add
# Then enter your document text
- Search similar documents:
# Use the /rag-search command
/rag-search
# Enter your search query
- View RAG database information:
/rag-info
- Docker Management Commands:
# Stop Qdrant container
docker stop qdrant
# Start Qdrant container
docker start qdrant
# View Qdrant logs
docker logs qdrant
# Remove Qdrant container (data will be preserved in qdrant_storage)
docker rm qdrant
Note: The Qdrant data is persisted in the ./qdrant_storage directory, which is mounted as a volume in the Docker container. This ensures your vector data remains intact even if the container is removed.
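Under the hood, a RAG search ranks stored documents by the similarity of their embedding vectors to the query's embedding; cosine similarity is the measure Qdrant commonly uses. A toy stdlib-only sketch (real rust-bert embeddings have hundreds of dimensions, not three):

```rust
/// Cosine similarity between two embedding vectors: the dot product
/// divided by the product of the vector norms. 1.0 means identical
/// direction, 0.0 means orthogonal (unrelated).
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Toy 3-dimensional "embeddings".
    let doc = [1.0, 0.0, 1.0];
    let query = [1.0, 0.0, 1.0];
    let unrelated = [0.0, 1.0, 0.0];
    println!("query vs doc:       {:.3}", cosine_similarity(&query, &doc));
    println!("query vs unrelated: {:.3}", cosine_similarity(&query, &unrelated));
}
```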
Using Voice Input
- Start voice recording:
/voice
- Speak your message (press Enter to stop recording)
- The transcribed text will be processed as a normal message
Available Commands
- /help: Display help menu
- /clear: Clear the terminal screen
- /usage: Display usage information
- /exit: Exit the program
- /servers: List available MCP servers
- /tools: List available tools
- /resources: List available resources
- /debug: Toggle debug logging
- /ai: Switch between AI providers
- /rag-add: Add a new document to the RAG database
- /rag-search: Search for similar documents
- /rag-info: Show RAG database information
- /voice: Start voice input (press Enter to stop recording)
Tool Examples
Memory Operations
{"tool": "memory_set", "arguments": {"key": "name", "value": "John"}}
{"tool": "memory_get", "arguments": {"key": "name"}}
SQLite Operations
{"tool": "sqlite_create_table", "arguments": {"name": "users", "columns": [{"name": "id", "type": "INTEGER"}]}}
{"tool": "sqlite_query", "arguments": {"query": "SELECT * FROM users"}}
File Operations
{"tool": "file_write", "arguments": {"path": "test.txt", "content": "Hello"}}
{"tool": "file_read", "arguments": {"path": "test.txt"}}
Project Structure
mcp-chatbot/
├── src/
│   ├── main.rs            # Main application entry
│   ├── llm_client.rs      # LLM client implementation
│   ├── mcp_server.rs      # MCP server core
│   ├── protocol.rs        # Protocol definitions
│   ├── sqlite_server.rs   # SQLite server implementation
│   ├── stdio_server.rs    # Standard I/O server
│   ├── rag_server.rs      # RAG server implementation
│   ├── whisper_server.rs  # Whisper server implementation
│   └── utils.rs           # Utility functions
├── tests/
│   ├── sqlite_test.rs     # SQLite tests
│   └── rag_server_test.rs # RAG server tests
├── Cargo.toml             # Project dependencies
├── mcp_prompts.yaml       # System prompts configuration
└── README.md              # Project documentation
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add some amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Author
- arkSong - Initial work - [email protected]