# MCPHost Server

A lightweight server providing an OpenAI-compatible API interface to interact with MCPHost.

## Prerequisites

- MCPHost binary
- Python 3 (for installing dependencies and running the server)

## Installation

```bash
# Clone the repository
git clone <repository-url>
cd <repository-directory>

# Install dependencies
pip install -r requirements.txt
```

## Configuration

Configure the application by setting the following environment variables or updating `settings.py`:

- `MCPHOST_PATH`: Path to the MCPHost binary
- `MCPHOST_CONFIG`: Path to the MCPHost configuration file
- `MCPHOST_MODEL`: Model to use with MCPHost
- `HOST`: Host to bind the server to (default: `0.0.0.0`)
- `PORT`: Port to run the server on (default: `8000`)
- `DEBUG`: Enable debug mode (`true`/`false`)

## Usage

Start the OpenAI-compatible API server:

```bash
python serve_mcphost.py
```

Or run uvicorn directly:

```bash
uvicorn serve_mcphost_openai_compatible:app --host 0.0.0.0 --port 8000
```

## API Endpoints

- `GET /v1/models`: List available models
- `GET /v1/models/{model_id}`: Get details of a specific model
- `POST /v1/chat/completions`: Send a chat completion request
- `GET /health`: Check server health
- `GET /`: Root endpoint with API information

## Testing with curl

List available models:

```bash
curl http://localhost:8000/v1/models
```

Get model details:

```bash
curl http://localhost:8000/v1/models/mcphost-model
```

Send a chat completion request:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mcphost-model",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": false
  }'
```

Stream a chat completion request:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mcphost-model",
    "messages": [{"role": "user", "content": "Tell me a short story"}],
    "stream": true
  }'
```

Check server health:

```bash
curl http://localhost:8000/health
```
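## Testing from Python

Because the API is OpenAI-compatible, the same requests can be issued from any HTTP client. The sketch below uses only the Python standard library; it assumes the server is running at `http://localhost:8000`, and the helper names (`build_chat_request`, `parse_sse_line`, `chat_completion`) are illustrative, not part of MCPHost. `parse_sse_line` handles the `data: ...` lines an OpenAI-style streaming endpoint emits, including the `data: [DONE]` terminator.

```python
import json
import urllib.request

def build_chat_request(model, messages, stream=False):
    """Build the JSON body expected by POST /v1/chat/completions."""
    return {"model": model, "messages": messages, "stream": stream}

def parse_sse_line(line):
    """Parse one Server-Sent-Events line from a streaming response.

    Returns the decoded JSON chunk, or None for blank keep-alive lines
    and the terminating 'data: [DONE]' sentinel.
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None
    return json.loads(payload)

def chat_completion(model, messages, base_url="http://localhost:8000"):
    """Send a non-streaming chat completion request to the server.

    Requires the server to be running; not executed at import time.
    """
    body = json.dumps(build_chat_request(model, messages)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With the server up, `chat_completion("mcphost-model", [{"role": "user", "content": "Hello"}])` mirrors the non-streaming curl example above; for streaming, feed each response line through `parse_sse_line` and skip the `None` results.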