MCP LLM

by sammcj

An MCP server that provides access to LLMs using the LlamaIndexTS library. It offers tools for code generation, documentation, and answering questions.


What is MCP LLM?

MCP LLM is an MCP server that leverages the LlamaIndexTS library to provide access to various Large Language Models (LLMs). It allows users to interact with LLMs through a set of predefined tools.

How to use MCP LLM?

The server can be installed either through Smithery or manually from source. Once installed, you interact with it by sending requests to its endpoints; the README provides example JSON payloads for each tool, such as generating code, generating documentation, and asking questions.
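As a rough illustration of what such a request looks like, MCP tool invocations use a JSON-RPC `tools/call` envelope with the tool name and its arguments. The envelope below follows the general MCP convention; the argument field name (`question`) is an assumption for illustration, not taken from this server's README.

```typescript
// Sketch of an MCP tools/call request payload for this server.
// The envelope shape is the standard MCP JSON-RPC convention;
// the "question" argument name is a hypothetical placeholder.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "ask_question",
    arguments: {
      question: "What does Array.prototype.flatMap do?", // hypothetical field name
    },
  },
};

// Serialize the payload as it would be sent over the wire.
const body = JSON.stringify(request);
console.log(body);
```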

Key features of MCP LLM

  • generate_code: Generate code based on a description

  • generate_code_to_file: Generate code and write it directly to a file at a specific line number

  • generate_documentation: Generate documentation for code

  • ask_question: Ask a question to the LLM
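To make the `generate_code_to_file` tool concrete, here is a sketch of what its argument shape might look like. The field names below are illustrative guesses based on the tool's description, not the server's actual schema.

```typescript
// Hypothetical argument shape for generate_code_to_file; the field
// names are illustrative, not taken from the server's schema.
interface GenerateCodeToFileArgs {
  description: string; // natural-language description of the code to generate
  filePath: string;    // relative or absolute target file
  lineNumber: number;  // line at which to insert the generated code
}

const args: GenerateCodeToFileArgs = {
  description: "A function that reverses a string",
  filePath: "src/utils.ts",
  lineNumber: 10,
};

console.log(JSON.stringify(args));
```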

Use cases of MCP LLM

  • Automated code generation based on natural language descriptions.

  • Automatic documentation generation for existing codebases.

  • Answering questions related to code or general knowledge.

  • Integrating LLMs into MCP workflows for various tasks.

FAQ from MCP LLM

How do I install the server?

You can install the server either via Smithery or manually from source. Instructions for both methods are provided in the README.

What is LlamaIndexTS?

LlamaIndexTS is the TypeScript implementation of LlamaIndex, a data framework for building LLM applications; the server uses it to connect to and query Large Language Models.

Can I use relative file paths with generate_code_to_file?

Yes, the generate_code_to_file tool supports both relative and absolute file paths. Relative paths are resolved relative to the current working directory of the MCP server.
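This resolution behavior matches what Node's `path.resolve` does, which can serve as a mental model. The working directory below is a hypothetical stand-in for wherever the server process was started.

```typescript
import * as path from "path";

// Relative paths are resolved against the server's working directory,
// which path.resolve models directly (POSIX-style paths shown).
const serverCwd = "/srv/mcp-llm"; // hypothetical working directory

// A relative path is joined onto the working directory.
const resolved = path.resolve(serverCwd, "src/utils.ts");
// → "/srv/mcp-llm/src/utils.ts"

// An absolute path is used as-is.
const absolute = path.resolve(serverCwd, "/etc/hosts");
// → "/etc/hosts"

console.log(resolved, absolute);
```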

What kind of code documentation can be generated?

The generate_documentation tool supports generating documentation in various formats, including JSDoc.
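To show the kind of JSDoc output such a tool targets, here is a hand-written example of a documented function; it is an illustration of the format, not actual output from the server.

```typescript
/**
 * Clamps a number to the inclusive range [min, max].
 *
 * @param value - The number to clamp.
 * @param min - The lower bound of the range.
 * @param max - The upper bound of the range.
 * @returns The clamped value.
 */
function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

console.log(clamp(15, 0, 10)); // → 10
```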

How do I send requests to the server?

The README includes an example script that demonstrates how to send requests to the server using curl commands.