MCPHost Server

A lightweight server that exposes an OpenAI-compatible API for interacting with MCPHost.

Prerequisites

  • MCPHost binary, installed and reachable at the path set in MCPHOST_PATH
  • Python 3 with pip, for installing the server's dependencies

Installation

# Clone the repository
git clone <repository-url>
cd <repository-directory>

# Install dependencies
pip install -r requirements.txt

Configuration

Configure the application by setting the following environment variables or by updating settings.py (a minimal sketch of such a module follows the list):

  • MCPHOST_PATH: Path to the MCPHost binary
  • MCPHOST_CONFIG: Path to MCPHost configuration file
  • MCPHOST_MODEL: Model to use with MCPHost
  • HOST: Host to bind the server to (default: 0.0.0.0)
  • PORT: Port to run the server on (default: 8000)
  • DEBUG: Enable debug mode (true/false)
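
A minimal sketch of what such a settings.py might look like, assuming it simply reads these variables from the environment; the actual module in the repository may differ, and the default paths shown are placeholders:

# settings.py -- illustrative sketch, not the repository's actual file
import os

# Paths for the MCPHost binary, its config file, and the model name
# (the default values here are placeholders, not documented defaults)
MCPHOST_PATH = os.environ.get("MCPHOST_PATH", "/usr/local/bin/mcphost")
MCPHOST_CONFIG = os.environ.get("MCPHOST_CONFIG", "mcphost.json")
MCPHOST_MODEL = os.environ.get("MCPHOST_MODEL", "mcphost-model")

# Server binding (defaults match the list above)
HOST = os.environ.get("HOST", "0.0.0.0")
PORT = int(os.environ.get("PORT", "8000"))
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"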

Usage

Start the OpenAI-compatible API server:

python serve_mcphost.py

Or using uvicorn directly:

uvicorn serve_mcphost_openai_compatible:app --host 0.0.0.0 --port 8000
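
The server can also be launched from Python; uvicorn's run() function takes the same module path, host, and port (a small sketch using the defaults above):

# launch.py -- programmatic start (sketch)
import uvicorn

if __name__ == "__main__":
    # Module path as in the command above; host/port match the documented defaults
    uvicorn.run("serve_mcphost_openai_compatible:app", host="0.0.0.0", port=8000)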

API Endpoints

  • GET /v1/models: List available models
  • GET /v1/models/{model_id}: Get details of a specific model
  • POST /v1/chat/completions: Send a chat completion request
  • GET /health: Check server health
  • GET /: Root endpoint with API information

Testing with curl

List available models:

curl http://localhost:8000/v1/models

Get model details:

curl http://localhost:8000/v1/models/mcphost-model

Send a chat completion request:

curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mcphost-model",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": false
  }'
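
Because the endpoint follows the OpenAI chat completions schema, the official openai Python client should also work when pointed at the server; a sketch, assuming no authentication is enforced (the api_key value is a placeholder):

from openai import OpenAI

# Point the client at the local server; the key is a dummy placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mcphost-model",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    stream=False,
)
print(response.choices[0].message.content)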

Stream a chat completion request:

curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mcphost-model",
    "messages": [{"role": "user", "content": "Tell me a short story"}],
    "stream": true
  }'
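
The same client can consume the streamed variant by iterating over the returned chunks, assuming the server emits OpenAI-style streaming chunks (a sketch):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# stream=True returns an iterator of chunk objects as the server produces them
stream = client.chat.completions.create(
    model="mcphost-model",
    messages=[{"role": "user", "content": "Tell me a short story"}],
    stream=True,
)
for chunk in stream:
    # Guard against chunks without content (e.g. the terminal chunk)
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)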

Check server health:

curl http://localhost:8000/health