MCP-Recon-Client
by seyrup1987
This is a Model Context Protocol (MCP) client that communicates with MCP servers using open-source LLM models, exposing the servers' tools to the LLM. It lets users leverage LLMs for tool-calling workflows.
What is MCP-Recon-Client?
The MCP-Recon-Client is a client application designed to interact with MCP Servers using Large Language Models (LLMs). It facilitates communication between the LLM and the server, allowing the LLM to access and utilize various tools.
How to use MCP-Recon-Client?
To use the MCP-Recon-Client:
1. Clone the repository.
2. Install dependencies with uv: uv pip install -r requirements.txt
3. Install the Ollama client and pull the desired LLM models.
4. Update the .env file with your Google Studio API key (optional).
5. Run the client: uv run src/main.py
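The steps above can be sketched as a shell session. This is a minimal setup sketch: the repository URL, the model name, and the .env variable name are assumptions, so substitute the values from the actual repository.

```shell
# Clone the repository (URL assumed from the author's handle -- verify before use)
git clone https://github.com/seyrup1987/MCP-Recon-Client.git
cd MCP-Recon-Client

# Install Python dependencies with uv
uv pip install -r requirements.txt

# Pull an LLM model with Ollama (model name is an example)
ollama pull llama3

# Optionally add a Google Studio API key to .env
# (the variable name GOOGLE_API_KEY is an assumption)
echo 'GOOGLE_API_KEY=your-key-here' >> .env

# Start the client
uv run src/main.py
```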
Key features of MCP-Recon-Client
Communication with MCP Servers
Integration with open-source LLM models
Tool access for LLMs
Support for Google Gen AI (best results)
Configurable LLM models
Use cases of MCP-Recon-Client
Automated task execution through LLM tool calling
Integration of LLMs with external services
Experimentation with different LLM models for tool usage
Context-aware interactions with MCP servers
Prototyping LLM-driven applications
FAQ from MCP-Recon-Client
What is an MCP Server?
The README doesn't define the term, but an MCP server is a server implementing the Model Context Protocol that exposes tools the client's LLM can call.
What LLM models are supported?
The client supports open-source LLM models, particularly those compatible with Ollama. Google Gen AI is recommended for its large context window.
Do I need a Google Studio API Key?
A Google Studio API Key is optional, but recommended for optimal performance with Google Gen AI.
What is the best LLM model to use?
The README suggests that google-gemini-2.5-pro gives the best results because of its large context window, but other models can be used with modifications.
What if I encounter issues with tool calling?
Tool-calling behavior depends heavily on prompting. If the LLM fails to invoke tools correctly, experiment with different prompts to improve reliability.