# MCP LLMS-TXT Documentation Server

The MCP LLMS-TXT Documentation Server is a specialized Model Context Protocol (MCP) server that delivers documentation directly from `llms.txt` files. It serves as a testbed for integrating documentation into IDEs via external **tools**, rather than relying solely on built-in features. While future IDEs may offer robust native support for `llms.txt` files, this server allows us to experiment with alternative methods, giving us full control over how documentation is retrieved and displayed.
## Overview

[llms.txt](https://llmstxt.org/) is a standard index of website contents to help LLMs. For example, [LangGraph's llms.txt](https://langchain-ai.github.io/langgraph/llms.txt) provides a curated list of LangGraph doc URLs with a short description of each. An LLM can use this file to decide which pages to read when accomplishing tasks, and it pairs well with IDEs like Cursor and Windsurf or applications like Claude Code/Desktop.
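
An `llms.txt` file is itself plain markdown: an H1 title, an optional blockquote summary, and sections of links with short descriptions. A hypothetical excerpt (the entries below are illustrative, not copied from LangGraph's actual file) looks like:

```markdown
# LangGraph

> Framework for building stateful, multi-agent LLM applications.

## Docs

- [Quickstart](https://langchain-ai.github.io/langgraph/tutorials/introduction/): Build your first graph
- [Memory](https://langchain-ai.github.io/langgraph/concepts/memory/): Short-term and long-term memory concepts
```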

However, these applications use different built-in tools to read and process files like `llms.txt`; sometimes IDEs will reflect on the `llms.txt` file and use it to formulate *web search queries* rather than retrieving the specific URLs listed! More broadly, there can be poor visibility into what applications are doing with their built-in retrieval / search tools.

[MCP](https://github.com/modelcontextprotocol) offers a way for developers to define tools that give us *full control* over how documentation is retrieved and displayed to LLMs in these applications. Here, we create [a simple MCP server](https://github.com/modelcontextprotocol) that defines a few basic external **tools** that these applications can use: 1) a tool to load `llms.txt` and 2) a tool to fetch specific URLs within `llms.txt`. When these tools are used, the user can customize retrieval and audit the tool calls / the context returned to better understand what is happening under the hood.
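
To make the two-tool pattern concrete, here is a minimal sketch of such a server written against the MCP Python SDK's `FastMCP` API. This is an illustration, not mcpdoc's actual implementation; the hardcoded URL and tool bodies are assumptions for the example, and the tool names mirror those used in the example prompt later in this README.

```python
# Minimal sketch of an llms.txt documentation server (illustrative, not mcpdoc itself).
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("llms-txt-docs")

# Example source; mcpdoc takes this via --urls rather than hardcoding it.
LLMS_TXT_URL = "https://langchain-ai.github.io/langgraph/llms.txt"

@mcp.tool()
def get_docs() -> str:
    """Load the llms.txt index so the LLM can see which pages are available."""
    return httpx.get(LLMS_TXT_URL, timeout=10.0).text

@mcp.tool()
def fetch_docs(url: str) -> str:
    """Fetch the contents of a specific URL listed in llms.txt."""
    return httpx.get(url, timeout=10.0).text

if __name__ == "__main__":
    mcp.run(transport="stdio")
```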

![mcpdoc](https://github.com/user-attachments/assets/52a5ab44-e704-4ecf-9d10-5d5f951fa273)

## Quickstart

Create a virtual environment and install uv:

```bash
python3 -m venv .venv
source .venv/bin/activate
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Install the package:

```bash
uv pip install mcpdoc
```

Select an `llms.txt` file to use. For example, here's the LangGraph `llms.txt`:

```
https://langchain-ai.github.io/langgraph/llms.txt
```

Run the MCP server locally with whatever `llms.txt` file you want to use:

```bash
uvx --from mcpdoc mcpdoc \
    --urls LangGraph:https://langchain-ai.github.io/langgraph/llms.txt \
    --transport sse \
    --port 8081 \
    --host localhost
```

Run the MCP inspector and connect to the running server via SSE at http://localhost:8081/sse:

```bash
npx @modelcontextprotocol/inspector
```

Here, you can test the `tool` calls.

Finally, add the server to any MCP host applications of interest.

Below, we walk through each one, but here are the config files that are updated for each:

* *Cursor*: `~/.cursor/mcp.json`
* *Windsurf*: `~/.codeium/windsurf/mcp_config.json`
* *Claude Desktop*: `~/Library/Application\ Support/Claude/claude_desktop_config.json`
* *Claude Code*: `~/.claude.json`

These will be updated with our server specification, as shown below.

> NOTE: It appears that `stdio` transport is required for Windsurf and Cursor.

```json
{
  "mcpServers": {
    "langgraph-docs-mcp": {
      "command": "/Users/rlm/.local/bin/uvx",
      "args": [
        "--from",
        "mcpdoc",
        "mcpdoc",
        "--urls",
        "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt",
        "--transport",
        "stdio",
        "--port",
        "8081",
        "--host",
        "localhost"
      ]
    }
  }
}
```

## Usage

### Cursor

1. Install Cursor: https://www.cursor.com/en
2. Launch the MCP server with **SSE** transport:

```shell
uvx --from mcpdoc mcpdoc \
    --urls LangGraph:https://langchain-ai.github.io/langgraph/llms.txt \
    --transport sse \
    --port 8081 \
    --host localhost
```

Setup:
* Ensure `~/.cursor/mcp.json` is updated to include the server.
* `Settings -> MCP` to confirm that the server is connected.
* `Control-L` to open chat.
* Ensure `agent` is selected.

3. Add the MCP server to Cursor. Remember to set the URL as **[host]/sse**, for example **http://localhost:8081/sse**.

![Screenshot 2025-03-18 at 12 56 54 PM](https://github.com/user-attachments/assets/0ce7f241-8307-4cd0-a374-d0d9fdf03954)

Cursor needs to be in **agent** mode for this to work.

Then, try an example prompt:

```
use the langgraph-docs-mcp server to answer any LangGraph questions --
+ call get_docs tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt
+ reflect on the input question
+ call fetch_docs on any urls relevant to the question
+ use this to answer the question

what are types of memory in LangGraph?
```

4. You should be able to use it within composer now.

* It will ask to approve tool calls.

![Screenshot 2025-03-18 at 1 06 13 PM](https://github.com/user-attachments/assets/180966b5-ab03-4b78-8b5d-bab43f5954ed)

### Windsurf

Setup:
* Ensure `~/.codeium/windsurf/mcp_config.json` is updated to include the server.
* `Control-L` to open Cascade.
* Available MCP servers will be listed.

![Screenshot 2025-03-18 at 2 55 02 PM](https://github.com/user-attachments/assets/667e80ea-e3b6-4a8b-a4e1-3f43906bb2ae)

Then, try the example prompt:
* It will perform your tool calls.

![Screenshot 2025-03-18 at 2 56 27 PM](https://github.com/user-attachments/assets/e9ac8f21-367f-42a5-9418-2603f61caedb)

### Claude Desktop

Setup:
* Open `Settings -> Developer` to update the config.
* Restart Claude.

![Screenshot 2025-03-18 at 2 20 12 PM](https://github.com/user-attachments/assets/52d33d77-ebd7-4654-8ddb-9c62dff86e0a)

* You will see your tools.

![Screenshot 2025-03-18 at 2 40 49 PM](https://github.com/user-attachments/assets/f973b4b0-e433-40e5-9f9d-149c33d8d71a)

Then, try the example prompt:

* It will ask to approve tool calls.

![Screenshot 2025-03-18 at 2 39 07 PM](https://github.com/user-attachments/assets/haste-2058-4498-8d4e-ba4146-ebea-82e-bcb-de30afee-f125.png)

### Claude Code

1. Install Claude Code: https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview
2. Install [uv](https://github.com/astral-sh/uv). This step is required if you want to run the MCP server using the `uvx` command, which is generally recommended because it simplifies dependency management.
3. Configure the MCP server with Claude Code.

Setup:
* Shortcut to add the MCP server to your project:

```shell
claude mcp add-json langgraph-docs '{"type":"stdio","command":"uvx","args":["--from", "mcpdoc", "mcpdoc", "--urls", "langgraph:https://langchain-ai.github.io/langgraph/llms.txt"]}' -s project
```

* Test:

```shell
$ claude
> /mcp
```

* Alternatively, add the server at user scope so it is available across projects:

```shell
claude mcp add-json langgraph-docs '{"type":"stdio","command":"uvx","args":["--from", "mcpdoc", "mcpdoc", "--urls", "langgraph:https://langchain-ai.github.io/langgraph/llms.txt"]}' -s user
```

![Screenshot 2025-03-18 at 9 14 22 PM](https://github.com/user-attachments/assets/399d7ffd-1822-4db6-a37c-c3a0a5a3db2f)

4. Launch Claude Code:

```shell
claude
```

Verify that the server is running by typing `/mcp` in the chat window:

```
> /mcp
```

![Screenshot 2025-03-18 at 9 18 30 PM](https://github.com/user-attachments/assets/732f6e16-bcbf-4443-82fa-cbd94cb4f4eb)

5. Test it out with an example prompt:

```
> Write a langgraph application with two agents that debate the merits of taking a shower.
```

* It will ask to approve tool calls.

This MCP server was only configured with LangGraph documentation, but you can add more documentation sources by passing additional `--urls` arguments or by loading them from a JSON or YAML file.
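
For example, a second documentation source can be served with another `--urls` flag. The LangChain entry below is illustrative; substitute whatever `llms.txt` URL you want to serve:

```shell
uvx --from mcpdoc mcpdoc \
    --urls LangGraph:https://langchain-ai.github.io/langgraph/llms.txt \
    --urls LangChain:https://python.langchain.com/llms.txt \
    --transport sse \
    --port 8081 \
    --host localhost
```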

## Command-line Interface

The `mcpdoc` command provides a simple CLI for launching the documentation server. You can specify documentation sources in three ways, and these can be combined:

1. Using a YAML config file (`--yaml`)
2. Using a JSON config file (`--json`)
3. Directly specifying `llms.txt` URLs (`--urls`)

You can also combine these methods to merge documentation sources:

```bash
mcpdoc --yaml sample_config.yaml --json sample_config.json --urls https://langchain-ai.github.io/langgraph/llms.txt
```

## Additional Options

- `--follow-redirects`: Follow HTTP redirects (defaults to False)
- `--timeout SECONDS`: HTTP request timeout in seconds (defaults to 10.0)

```bash
mcpdoc --yaml sample_config.yaml --follow-redirects --timeout 15
```

This will load the LangGraph Python documentation with a 15-second timeout and follow any HTTP redirects if necessary.

## Configuration Format

Both YAML and JSON configuration files should contain a list of documentation sources. Each source must include an `llms_txt` URL and can optionally include a `name`:

### YAML Configuration Example (sample_config.yaml)

```yaml
# Sample configuration for mcp-mcpdoc server
- name: LangGraph Python
  llms_txt: https://langchain-ai.github.io/langgraph/llms.txt
```

### JSON Configuration Example (sample_config.json)

```json
[
  {
    "name": "LangGraph Python",
    "llms_txt": "https://langchain-ai.github.io/langgraph/llms.txt"
  }
]
```

## Programmatic Usage

```python
from mcpdoc.main import create_server
```
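
The example above is cut off in this diff view. As a sketch only (assuming `create_server` accepts the same documentation-source mappings as the config files, mirrors the CLI's redirect and timeout options as keyword arguments, and returns a server with a FastMCP-style `run` method), programmatic usage might look like:

```python
from mcpdoc.main import create_server

# Documentation sources use the same shape as the JSON/YAML config entries.
server = create_server(
    [
        {
            "name": "LangGraph Python",
            "llms_txt": "https://langchain-ai.github.io/langgraph/llms.txt",
        }
    ],
    follow_redirects=True,  # assumed keyword mirror of --follow-redirects
    timeout=15.0,           # assumed keyword mirror of --timeout
)

# Assumed FastMCP-style entry point; serve over stdio for IDE hosts.
server.run(transport="stdio")
```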