Deepwiki MCP Server
by instructa
An unofficial Deepwiki MCP Server that crawls Deepwiki URLs, converts them to Markdown, and returns either a single document or a list of pages. It's designed to be used with MCP-compatible clients for fetching and processing Deepwiki content.
What is Deepwiki MCP Server?
Deepwiki MCP Server is a tool that allows you to fetch and convert content from Deepwiki into Markdown format. It uses MCP (Model Context Protocol) to receive Deepwiki URLs, crawls the relevant pages, sanitizes the HTML, and provides the output as either a single aggregated document or a structured list of pages.
How to use Deepwiki MCP Server?
To use the Deepwiki MCP Server, you need an MCP-compatible client such as Cursor. Configure the client by adding the provided JSON configuration to your .cursor/mcp.json file, then use the deepwiki_fetch action with the Deepwiki URL as a parameter. You can specify the output mode (aggregate or pages) and the maximum crawl depth.
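For reference, a minimal .cursor/mcp.json entry might look like the sketch below. The npx-based launch matches the commands shown in the FAQ further down; check the project's README for the exact configuration it ships.

```json
{
  "mcpServers": {
    "mcp-deepwiki": {
      "command": "npx",
      "args": ["-y", "mcp-deepwiki@latest"]
    }
  }
}
```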
Key features of Deepwiki MCP Server
Domain Safety: Only processes URLs from deepwiki.com
HTML Sanitization: Strips headers, footers, navigation, scripts, and ads
Link Rewriting: Adjusts links to work in Markdown
Multiple Output Formats: Get one document or structured pages
Performance: Fast crawling with adjustable concurrency and depth
NLP: Lets you search by library name alone, without a full URL
Use cases of Deepwiki MCP Server
Fetching complete documentation from Deepwiki repositories.
Extracting specific pages or sections from Deepwiki for AI processing.
Integrating Deepwiki content into MCP-compatible workflows.
Creating Markdown versions of Deepwiki documentation for offline use.
FAQ from Deepwiki MCP Server
How do I configure the server to use HTTP transport?
Run the server with the --http flag and specify the port using --port. For example: docker run -d -p 3000:3000 mcp-deepwiki --http --port 3000
How do I handle timeout errors when crawling large repositories?
Increase the DEEPWIKI_REQUEST_TIMEOUT and DEEPWIKI_MAX_CONCURRENCY environment variables. For example: DEEPWIKI_REQUEST_TIMEOUT=60000 DEEPWIKI_MAX_CONCURRENCY=10 npx mcp-deepwiki
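If you launch the server through an MCP client rather than from the command line, the same variables can usually be set in the client configuration instead. A sketch, assuming the npx-based .cursor/mcp.json entry shown earlier and Cursor's env block for stdio servers:

```json
{
  "mcpServers": {
    "mcp-deepwiki": {
      "command": "npx",
      "args": ["-y", "mcp-deepwiki@latest"],
      "env": {
        "DEEPWIKI_REQUEST_TIMEOUT": "60000",
        "DEEPWIKI_MAX_CONCURRENCY": "10"
      }
    }
  }
}
```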
What output formats are supported?
The server supports two output modes: aggregate for a single Markdown document and pages for a structured list of pages.
Can I limit the crawling depth?
Yes, you can specify the maxDepth parameter in the MCP request to limit the maximum depth of pages to crawl.
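Putting the parameters together, a deepwiki_fetch request might look like the following. The url, mode (aggregate or pages), and maxDepth fields are documented above; the surrounding action/params envelope is an assumption and may differ depending on your MCP client, so treat this as illustrative.

```json
{
  "action": "deepwiki_fetch",
  "params": {
    "url": "https://deepwiki.com/user/repo",
    "mode": "aggregate",
    "maxDepth": 1
  }
}
```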
How do I contribute to the project?
Please see the CONTRIBUTING.md file for details on how to contribute.