# MCPHost Server
A lightweight server providing an OpenAI-compatible API interface to interact with MCPHost.
## Prerequisites

- MCPHost binary
- Python 3 with `pip` (used to install dependencies)
## Installation

```bash
# Clone the repository
git clone <repository-url>
cd <repository-directory>

# Install dependencies
pip install -r requirements.txt
```
## Configuration

Configure the application by setting the following environment variables or by updating `settings.py`:

- `MCPHOST_PATH`: Path to the MCPHost binary
- `MCPHOST_CONFIG`: Path to the MCPHost configuration file
- `MCPHOST_MODEL`: Model to use with MCPHost
- `HOST`: Host to bind the server to (default: `0.0.0.0`)
- `PORT`: Port to run the server on (default: `8000`)
- `DEBUG`: Enable debug mode (`true`/`false`)
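A minimal sketch of how `settings.py` might read these variables. The variable names match the list above; the fallback values for `MCPHOST_PATH`, `MCPHOST_CONFIG`, and `MCPHOST_MODEL` are assumptions, while the `HOST` and `PORT` defaults follow the documented ones:

```python
import os

# Fallbacks for the MCPHost settings are assumptions; override them
# via environment variables in any real deployment.
MCPHOST_PATH = os.environ.get("MCPHOST_PATH", "mcphost")
MCPHOST_CONFIG = os.environ.get("MCPHOST_CONFIG", "")
MCPHOST_MODEL = os.environ.get("MCPHOST_MODEL", "")

# Server bind address and port (documented defaults).
HOST = os.environ.get("HOST", "0.0.0.0")
PORT = int(os.environ.get("PORT", "8000"))

# DEBUG accepts "true"/"false" (case-insensitive); anything else is false.
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
```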
## Usage

Start the OpenAI-compatible API server:

```bash
python serve_mcphost.py
```

Or run it with uvicorn directly:

```bash
uvicorn serve_mcphost_openai_compatible:app --host 0.0.0.0 --port 8000
```
## API Endpoints

- `GET /v1/models`: List available models
- `GET /v1/models/{model_id}`: Get details of a specific model
- `POST /v1/chat/completions`: Send a chat completion request
- `GET /health`: Check server health
- `GET /`: Root endpoint with API information
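The endpoints above can also be exercised from Python with any HTTP client; a minimal sketch using `requests`, assuming the default host and port (`endpoint` is a small helper introduced here, not part of the server code):

```python
import requests

BASE_URL = "http://localhost:8000"

def endpoint(path: str) -> str:
    """Join a path from the endpoint list onto the server base URL."""
    return f"{BASE_URL}{path}"

if __name__ == "__main__":
    # List available models, then fetch details for each one.
    models = requests.get(endpoint("/v1/models")).json()
    for model in models.get("data", []):
        details = requests.get(endpoint(f"/v1/models/{model['id']}")).json()
        print(details)
```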
## Testing with curl

List available models:

```bash
curl http://localhost:8000/v1/models
```

Get model details:

```bash
curl http://localhost:8000/v1/models/mcphost-model
```

Send a chat completion request:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mcphost-model",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": false
  }'
```
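The same request can be sent from Python; a minimal sketch using `requests`, mirroring the curl example above (`build_chat_request` is a helper introduced here for illustration):

```python
import requests

API_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(content: str, stream: bool = False) -> dict:
    """Build a chat completion payload matching the curl examples."""
    return {
        "model": "mcphost-model",
        "messages": [{"role": "user", "content": content}],
        "stream": stream,
    }

if __name__ == "__main__":
    resp = requests.post(API_URL, json=build_chat_request("Hello, how are you?"))
    resp.raise_for_status()
    # OpenAI-compatible responses put the reply under choices[0].message.content.
    print(resp.json()["choices"][0]["message"]["content"])
```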
Stream a chat completion request:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mcphost-model",
    "messages": [{"role": "user", "content": "Tell me a short story"}],
    "stream": true
  }'
```
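Assuming the server follows the standard OpenAI streaming format, streamed responses arrive as server-sent events: one `data:` line per chunk, ending with a `data: [DONE]` sentinel. A sketch of consuming the stream from Python (`parse_sse_line` is a helper introduced here):

```python
import json

import requests

API_URL = "http://localhost:8000/v1/chat/completions"

def parse_sse_line(line: str):
    """Decode one SSE line into a JSON chunk.

    Returns None for blank lines and for the final [DONE] sentinel.
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        return None
    return json.loads(payload)

if __name__ == "__main__":
    body = {
        "model": "mcphost-model",
        "messages": [{"role": "user", "content": "Tell me a short story"}],
        "stream": True,
    }
    with requests.post(API_URL, json=body, stream=True) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines(decode_unicode=True):
            chunk = parse_sse_line(raw or "")
            if chunk is not None:
                # Streamed chunks carry text under choices[0].delta.content.
                print(chunk["choices"][0]["delta"].get("content", ""), end="")
```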
Check server health:

```bash
curl http://localhost:8000/health
```