# Pre-historic Knowledge Assistant Web App

A web application for the RAG-based knowledge assistant.

## Features

- Multiple LLM provider support (Azure OpenAI, OpenAI, Ollama, vLLM, custom endpoints)
- Flexible embedding configuration
- Web interface with real-time responses
- REST API endpoints
- Health check endpoint
## Setup

1. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Create a `.env` file from `.env.example`:

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

3. Run the application:

   ```bash
   python app.py
   # or
   ./run.sh
   ```
## Configuration

### LLM Providers

The application supports multiple LLM providers:

- **Azure OpenAI**: Set `LLM_PROVIDER=azure_openai` and configure Azure credentials
- **OpenAI**: Set `LLM_PROVIDER=openai` and provide an API key
- **Ollama**: Set `LLM_PROVIDER=ollama` and configure the host URL
- **vLLM**: Set `LLM_PROVIDER=vllm` and configure the vLLM host
- **Custom**: Set `LLM_PROVIDER=custom` for any OpenAI-compatible endpoint

### Environment Variables

See `.env.example` for all available configuration options.
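As a sketch, a `.env` for the Ollama provider might look like the following. Only `LLM_PROVIDER` is named in this README; the other variable names here are illustrative assumptions, so check `.env.example` for the actual ones:

```bash
# Select the LLM backend (one of the provider values listed above)
LLM_PROVIDER=ollama

# Illustrative host setting -- the real variable name is defined in .env.example
OLLAMA_HOST=http://localhost:11434
```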
## API Endpoints

- `GET /`: Web interface
- `POST /ask`: Process a question via the web form
- `POST /api/ask`: REST API endpoint for questions
- `GET /api/health`: Health check endpoint
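As a usage sketch, the REST endpoints above can be exercised with `curl` once the server is running. The `question` field name in the JSON payload is an assumption; check the request schema in `app.py`:

```bash
# Health check -- should respond while the app is up
curl http://localhost:8000/api/health

# Ask a question via the REST API (the "question" field name is illustrative)
curl -X POST http://localhost:8000/api/ask \
  -H "Content-Type: application/json" \
  -d '{"question": "What tools did Neanderthals use?"}'
```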
## Development

To run in development mode with auto-reload:

```bash
python app.py
```

The application will be available at `http://localhost:8000`.