Mirror of https://github.com/abetlen/llama-cpp-python.git (synced 2023-09-07 17:34:22 +03:00)
Update README to use cli options for server
README.md
@@ -68,18 +68,9 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
 
 To install the server package and get started:
 
-Linux/MacOS
 ```bash
 pip install llama-cpp-python[server]
-export MODEL=./models/7B/ggml-model.bin
-python3 -m llama_cpp.server
-```
-
-Windows
-```cmd
-pip install llama-cpp-python[server]
-SET MODEL=..\models\7B\ggml-model.bin
-python3 -m llama_cpp.server
+python3 -m llama_cpp.server --model models/7B/ggml-model.bin
 ```
 
 Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
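The README this diff updates describes the server as OpenAI compatible and points to its interactive docs at `http://localhost:8000/docs`. As a minimal sketch of talking to it from Python, the following builds a completion request using only the standard library; the `/v1/completions` route and the default `localhost:8000` address are assumptions based on that OpenAI-compatibility claim, not something stated in this diff:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed default host/port of the server


def build_request(prompt: str, max_tokens: int = 64) -> urllib.request.Request:
    """Build a POST request for the server's OpenAI-style completions route.

    The /v1/completions path is an assumption inferred from the README's
    OpenAI-compatibility note, not taken from this commit.
    """
    payload = {"prompt": prompt, "max_tokens": max_tokens}
    return urllib.request.Request(
        f"{BASE_URL}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def complete(prompt: str) -> str:
    """Send the request and return the first completion (requires a running server)."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["text"]
```

With the server started as in the diff above, `complete("Q: Name the planets. A: ")` would return the model's completion text; `build_request` alone does no network I/O.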
|
|||||||