
mentor-mcp-server

by cyanheads

A Model Context Protocol server providing LLM Agents a second opinion via AI-powered Deepseek-Reasoning (R1) mentorship capabilities. Set your LLM Agent up for success with expert second opinions and actionable insights.



What is mentor-mcp-server?

The mentor-mcp-server is a Model Context Protocol (MCP) server that provides LLM Agents with AI-powered mentorship capabilities using Deepseek-Reasoning (R1). It offers tools for code review, design critique, writing feedback, and idea brainstorming, acting as a second opinion provider for AI models.

How to use mentor-mcp-server?

To use the server, you need to clone the repository, install dependencies, and build the project. Then, configure your MCP client to connect to the server by providing the command, arguments, and environment variables, including your Deepseek API key. You can then use the provided tools by sending requests in XML format with the appropriate server name, tool name, and arguments.
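As a rough sketch, setup could look like the following. The repository URL, npm scripts, and build output path (build/index.js) are assumptions based on a typical Node.js MCP server layout, so check the project's README for the exact commands.

```bash
# Clone, install dependencies, and build (assumed npm-based layout)
git clone https://github.com/cyanheads/mentor-mcp-server.git
cd mentor-mcp-server
npm install
npm run build
```

A typical MCP client configuration (for example, the mcpServers block used by Claude Desktop or similar clients) could then register the server roughly as shown below; the entry name, file path, and placeholder API key are illustrative:

```json
{
  "mcpServers": {
    "mentor-mcp-server": {
      "command": "node",
      "args": ["/path/to/mentor-mcp-server/build/index.js"],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key-here"
      }
    }
  }
}
```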

Key features of mentor-mcp-server

  • Comprehensive code reviews with bug detection and prevention

  • UI/UX design critiques and architectural diagram analysis

  • Writing feedback and improvement with grammar and style analysis

  • Feature enhancement brainstorming and innovation suggestions

Use cases of mentor-mcp-server

  • Providing LLM Agents with expert second opinions on code

  • Improving the quality and consistency of design documents

  • Enhancing the clarity and effectiveness of written content

  • Generating innovative ideas for feature enhancements

FAQ from mentor-mcp-server

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open standard that lets LLM applications (clients) connect to external tools and data sources exposed by servers, so an LLM Agent can call this server's mentorship tools through a common interface.

What is Deepseek-Reasoning (R1)?

Deepseek-Reasoning (R1) is DeepSeek's reasoning-focused language model; the server calls it through the Deepseek API to generate the reviews, critiques, and feedback it returns as a second opinion.

What kind of code reviews does the server provide?

The server provides comprehensive code reviews with bug detection, style evaluation, performance optimization suggestions, and security vulnerability assessment.
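As an illustration of the XML request format described under "How to use", a code review request from an MCP client might look like the sketch below. The tool name (code_review) and argument fields are assumptions, so consult the server's actual tool listing for the real schema.

```xml
<use_mcp_tool>
  <server_name>mentor-mcp-server</server_name>
  <tool_name>code_review</tool_name>
  <arguments>
    {
      "language": "python",
      "code": "def add(a, b):\n    return a + b"
    }
  </arguments>
</use_mcp_tool>
```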

What design aspects can the server critique?

The server can critique UI/UX design, analyze architectural diagrams, recommend design patterns, evaluate accessibility, and check for consistency.

What environment variables are required to run the server?

The server requires the DEEPSEEK_API_KEY environment variable. DEEPSEEK_MODEL can also be set but falls back to a default value when omitted. Other optional variables include DEEPSEEK_MAX_TOKENS, DEEPSEEK_MAX_RETRIES, and DEEPSEEK_TIMEOUT.
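As a minimal sketch, these variables can be exported in the shell that launches the server (or placed in the env block of your MCP client configuration). The model name and numeric values shown here are placeholders, not the server's documented defaults.

```bash
# Required
export DEEPSEEK_API_KEY="your-api-key-here"

# Optional overrides (placeholder values; check the project docs for real defaults)
export DEEPSEEK_MODEL="deepseek-reasoner"
export DEEPSEEK_MAX_TOKENS="8192"
export DEEPSEEK_MAX_RETRIES="3"
export DEEPSEEK_TIMEOUT="30000"
```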