Just Prompt
by disler
Just Prompt is a lightweight Model Context Protocol (MCP) server that provides a unified interface to multiple Large Language Model (LLM) providers, including OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama.
What is Just Prompt?
Just Prompt is a Model Context Protocol (MCP) server that acts as a unified interface for interacting with multiple Large Language Model (LLM) providers. It simplifies sending prompts to different models and managing their responses.
How to use Just Prompt?
To use Just Prompt, clone the repository, install the dependencies with uv sync, and configure your API keys in a .env file. You can then send prompts to various LLMs with the provided MCP tools (prompt, prompt_from_file, and prompt_from_file_to_file) and list the available providers and models with list_providers and list_models.
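For illustration, here is a minimal Python sketch of an MCP client calling the prompt tool over stdio, using the official MCP Python SDK. The launch command and the text / models_prefixed_by_provider argument names are assumptions inferred from the tool descriptions on this page, not verified against the repository.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed launch command for a local checkout; adjust the path to your clone.
server = StdioServerParameters(
    command="uv",
    args=["--directory", "/path/to/just-prompt", "run", "just-prompt"],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Tool and argument names follow the descriptions above; the
            # server's actual schema may differ.
            result = await session.call_tool(
                "prompt",
                {
                    "text": "Summarize MCP in one sentence.",
                    "models_prefixed_by_provider": ["openai:gpt-4o"],
                },
            )
            print(result.content)

asyncio.run(main())
```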
Key features of Just Prompt
Unified API for multiple LLM providers
Support for text prompts from strings or files
Run multiple models in parallel (see the sketch after this list)
Automatic model name correction
Ability to save responses to files
Easy listing of available providers and models
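The parallel feature can be pictured as a simple asyncio fan-out. The sketch below is a hypothetical illustration of the idea, not just-prompt's actual implementation; call_model stands in for a real provider API call.

```python
import asyncio

# Hypothetical stand-in for a real provider API call.
async def call_model(model: str, text: str) -> str:
    await asyncio.sleep(0.1)  # placeholder for network latency
    return f"[{model}] response to: {text!r}"

async def fan_out(models: list[str], text: str) -> list[str]:
    # Query every requested model concurrently; results come back in the
    # same order the models were listed.
    return await asyncio.gather(*(call_model(m, text) for m in models))

responses = asyncio.run(
    fan_out(["openai:gpt-4o", "anthropic:claude-3-7-sonnet-20250219"], "Hello")
)
print(responses)
```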
Use cases of Just Prompt
Comparing the outputs of different LLMs for the same prompt
Automating prompt execution across multiple LLM providers
Building applications that leverage multiple LLMs
Testing and evaluating the performance of different LLMs
Creating a centralized interface for managing LLM interactions
FAQ from Just Prompt
What LLM providers are supported?
Just Prompt supports OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama.
How do I specify which models to use?
You can specify models using the --models-prefixed-by-provider parameter, with the provider name as a prefix (e.g., openai:gpt-4o).
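In other words, each identifier splits on its first colon into a provider and a model name. split_model_id below is a hypothetical helper, not part of just-prompt:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    # Split "provider:model" on the first colon only, so model names that
    # themselves contain colons (e.g., thinking-token suffixes) survive.
    provider, _, model = model_id.partition(":")
    if not model:
        raise ValueError(f"expected 'provider:model', got {model_id!r}")
    return provider, model

print(split_model_id("openai:gpt-4o"))  # ('openai', 'gpt-4o')
```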
How do I set default models?
You can set default models using the --default-models parameter when starting the server. The first model in the list is used for model name correction.
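This page does not explain how the correction itself works. As a purely illustrative stand-in (my substitution, not just-prompt's documented mechanism), fuzzy matching with Python's difflib conveys the idea of snapping a near-miss name to a known model:

```python
import difflib

# Hypothetical catalog of known model names for one provider.
KNOWN_MODELS = ["gpt-4o", "gpt-4o-mini", "o3-mini"]

def correct_model_name(requested: str) -> str:
    # Snap a near-miss like "gpt4o" to the closest known name, or return
    # the input unchanged if nothing is close enough.
    matches = difflib.get_close_matches(requested, KNOWN_MODELS, n=1, cutoff=0.6)
    return matches[0] if matches else requested

print(correct_model_name("gpt4o"))  # gpt-4o
```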
How do I save responses to files?
Use the prompt_from_file_to_file tool, which saves responses as markdown files in a specified directory.
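Assuming the same client session as in the earlier sketch, a call might look like the snippet below; the file and output_dir argument names are guesses from the tool's description, not confirmed parameters.

```python
# Continuation of the ClientSession block from the earlier client sketch.
# Argument names here are assumptions, not a confirmed schema.
result = await session.call_tool(
    "prompt_from_file_to_file",
    {
        "file": "/abs/path/to/prompt.md",
        "models_prefixed_by_provider": ["openai:gpt-4o"],
        "output_dir": "/abs/path/to/responses",
    },
)
```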
How do I enable thinking tokens for Anthropic Claude?
Add a suffix to the model name in the format :1k, :4k, or :8000 (e.g., anthropic:claude-3-7-sonnet-20250219:4k).
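A small sketch of how such a suffix could be parsed; whether k means 1,000 or 1,024 tokens is not stated on this page, so the 1,024 multiplier below is an assumption, and parse_thinking_suffix is a hypothetical helper.

```python
def parse_thinking_suffix(model_id: str) -> tuple[str, int | None]:
    # Split "anthropic:claude-...:4k" into the base model id and a
    # thinking-token budget: ":4k" -> 4096 (assuming k = 1024),
    # ":8000" -> 8000, no suffix -> None.
    base, _, suffix = model_id.rpartition(":")
    if suffix.endswith("k") and suffix[:-1].isdigit():
        return base, int(suffix[:-1]) * 1024
    if suffix.isdigit():
        return base, int(suffix)
    return model_id, None  # no thinking suffix present

print(parse_thinking_suffix("anthropic:claude-3-7-sonnet-20250219:4k"))
print(parse_thinking_suffix("anthropic:claude-3-7-sonnet-20250219:8000"))
print(parse_thinking_suffix("openai:gpt-4o"))
```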