OpenAI Complete MCP Server
by aiamblichus
This MCP server provides a clean interface for LLMs to use text completion capabilities through the MCP protocol. It acts as a bridge between an LLM client and any OpenAI-compatible API, primarily for base models.
What is OpenAI Complete MCP Server?
An MCP (Model Context Protocol) server that provides a clean interface for LLMs to use text completion capabilities through the MCP protocol. It acts as a bridge between an LLM client and any OpenAI-compatible API.
How to use OpenAI Complete MCP Server?
First, clone the repository, install dependencies using pnpm, and build the project. Configure the required environment variables (OPENAI_API_KEY, OPENAI_API_BASE, OPENAI_MODEL), then start the server with pnpm start. Alternatively, you can use Docker to build and run the server, providing the necessary environment variables or using a .env file.
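As a minimal sketch of what launching the server from an MCP client might look like, the TypeScript below spawns the built server over stdio and passes the required environment variables through the transport. The entry-point path (dist/index.js), the model name, and the client metadata are assumptions for illustration, not values taken from this project.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the built server over stdio; "dist/index.js" is an assumed entry point.
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"],
    env: {
      OPENAI_API_KEY: process.env.OPENAI_API_KEY ?? "",
      OPENAI_API_BASE: "https://api.openai.com/v1", // any OpenAI-compatible endpoint
      OPENAI_MODEL: "davinci-002",                  // example model name
    },
  });

  const client = new Client({ name: "example-client", version: "0.1.0" });
  await client.connect(transport);

  // The server should expose a single 'complete' tool.
  console.log(await client.listTools());
}

main().catch(console.error);
```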
Key features of OpenAI Complete MCP Server
Provides a single tool named 'complete' for generating text completions
Properly handles asynchronous processing to avoid blocking
Implements timeout handling with graceful fallbacks
Supports cancellation of ongoing requests (see the sketch below)
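The timeout and cancellation behaviour can be exercised from the client side. The sketch below reuses the client from the previous example and cancels a long-running completion via an AbortSignal; the request-option names (signal, timeout) come from the MCP TypeScript SDK and are assumptions here, not part of this server's documented interface.

```typescript
import { CallToolResultSchema } from "@modelcontextprotocol/sdk/types.js";

// Abort the completion from the client side after 30 seconds.
const controller = new AbortController();
const abortTimer = setTimeout(() => controller.abort(), 30_000);

try {
  const result = await client.callTool(
    { name: "complete", arguments: { prompt: "Once upon a time", max_tokens: 512 } },
    CallToolResultSchema,
    { signal: controller.signal, timeout: 60_000 }, // per-request timeout in milliseconds
  );
  console.log(result.content);
} finally {
  clearTimeout(abortTimer);
}
```

This snippet is assumed to run inside the same async context as the connection example above.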
Use cases of OpenAI Complete MCP Server
Integrating base LLMs with MCP clients
Providing a standardized text completion interface
Building tools that require text generation capabilities
Bridging different LLM APIs
FAQ from OpenAI Complete MCP Server
What is the primary use case?
The server targets base models; it does not support chat completions.
What environment variables are required?
The required environment variables are OPENAI_API_KEY, OPENAI_API_BASE, and OPENAI_MODEL.
How do I install the server?
Clone the repository, install dependencies with pnpm install, and build the project with pnpm run build.
How do I start the server?
Run pnpm start to start the server on stdio.
What parameters are available for the 'complete' tool?
The parameters are prompt (required), max_tokens, temperature, top_p, frequency_penalty, and presence_penalty.
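For illustration, a call that sets all of these parameters might look like the following, reusing the connected client from the earlier sketch; the values are arbitrary examples.

```typescript
const result = await client.callTool({
  name: "complete",
  arguments: {
    prompt: "The three laws of robotics are", // required
    max_tokens: 256,
    temperature: 0.8,
    top_p: 0.95,
    frequency_penalty: 0.2,
    presence_penalty: 0.1,
  },
});
console.log(result.content); // completion text returned by the tool
```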