Update README and docs

This commit is contained in:
Andrei Betlen
2023-04-05 17:44:25 -04:00
parent 267d3648fc
commit 38f7dea6ca
2 changed files with 54 additions and 4 deletions


@@ -15,7 +15,7 @@ This package provides:
 - OpenAI-like API
 - LangChain compatibility
-# Installation
+## Installation
 Install from PyPI:
@@ -23,7 +23,7 @@ Install from PyPI:
 pip install llama-cpp-python
 ```
-# Usage
+## High-level API
 ```python
 >>> from llama_cpp import Llama
@@ -51,6 +51,27 @@ pip install llama-cpp-python
 }
 ```
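The hunk above elides most of the high-level example between the import and the closing brace of the response. As a sketch of what the call and its OpenAI-style return value look like (the response dict below is hand-written for illustration, not real model output, and the model path and prompt are placeholders):

```python
# Typical high-level usage (commented out so the sketch runs without a model file):
# from llama_cpp import Llama
# llm = Llama(model_path="./models/7B/ggml-model.bin")
# response = llm("Q: Name the planets in the solar system? A: ",
#                max_tokens=32, stop=["Q:", "\n"], echo=True)

# Illustrative OpenAI-style completion dict; values are placeholders.
response = {
    "id": "cmpl-xxxxxxxx",
    "object": "text_completion",
    "created": 1680000000,
    "model": "./models/7B/ggml-model.bin",
    "choices": [
        {
            "text": "Q: Name the planets in the solar system? A: Mercury, Venus, Earth ...",
            "index": 0,
            "logprobs": None,
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 14, "completion_tokens": 28, "total_tokens": 42},
}

# As with the OpenAI API, the generated text lives in choices[0]["text"].
text = response["choices"][0]["text"]
print(text)
```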
+## Web Server
+`llama-cpp-python` offers a web server which aims to act as a drop-in replacement for the OpenAI API.
+This allows you to use llama.cpp-compatible models with any OpenAI-compatible client (language libraries, services, etc.).
+To install the server package and get started:
+```bash
+pip install llama-cpp-python[server]
+export MODEL=./models/7B
+python3 -m llama_cpp.server
+```
+Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
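Since the server is meant as an OpenAI API drop-in, a client can talk to it with nothing but the standard library. A sketch, assuming the server is running on `localhost:8000` and exposes an OpenAI-style `/v1/completions` endpoint (the prompt and parameters are placeholders; the actual request is commented out so the sketch runs without a live server):

```python
import json
from urllib import request

# OpenAI-style completion request body; fields follow the OpenAI completion API.
payload = {
    "prompt": "Q: Name the planets in the solar system? A: ",
    "max_tokens": 64,
    "stop": ["Q:", "\n"],
}

req = request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the server running, this would return an OpenAI-style JSON body:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])

print(req.get_method(), req.full_url)
```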
+## Low-level API
+The low-level API is a direct `ctypes` binding to the C API provided by `llama.cpp`.
+The entire API can be found in [llama_cpp/llama_cpp.py](https://github.com/abetlen/llama-cpp-python/blob/master/llama_cpp/llama_cpp.py) and should mirror [llama.h](https://github.com/ggerganov/llama.cpp/blob/master/llama.h).
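To illustrate the general technique such a binding uses (this binds against libc's `strlen` purely as a stand-in; `llama_cpp/llama_cpp.py` does the same kind of thing against the llama.cpp shared library and the functions declared in `llama.h`):

```python
import ctypes
import ctypes.util

# Load a shared C library, then declare argument and return types so ctypes
# marshals Python values to and from the C calling convention correctly.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"llama"))  # -> 5
```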
 # Documentation
 Documentation is available at [https://abetlen.github.io/llama-cpp-python](https://abetlen.github.io/llama-cpp-python).