# Serge - LLaMA made easy 🦙
A chat interface based on llama.cpp for running Alpaca models. Entirely self-hosted, no API keys needed. Fits on 4GB of RAM and runs on the CPU.
- SvelteKit frontend
- MongoDB for storing chat history & parameters
- FastAPI + beanie for the API, wrapping calls to llama.cpp
## Getting started
Setting up Serge is very easy. TL;DR for running it with Alpaca 7B:

```bash
git clone https://github.com/nsarrazin/serge.git
cd serge
docker compose up --build -d
docker compose exec serge python3 /usr/src/app/api/utils/download.py tokenizer 7B
```

(You can pass `7B 13B 30B` as arguments to the download.py script to download multiple models.)
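If you want to confirm everything came up cleanly before heading to the UI, the standard Compose commands work here; the service name `serge` matches the `exec` call above:

```bash
# List the services and confirm the serge container is running
docker compose ps

# Follow the logs while the tokenizer and model files download
docker compose logs -f serge
```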
### Windows
⚠️ For cloning on Windows, use `git clone https://github.com/nsarrazin/serge.git --config core.autocrlf=input`.
Make sure you have Docker Desktop installed, WSL2 configured, and enough free RAM to run models (see below).
### Kubernetes
Instructions for setting up Serge on Kubernetes can be found in the wiki: https://github.com/nsarrazin/serge/wiki/Integrating-Serge-in-your-orchestration#kubernetes-example
## Using Serge
Then just go to http://localhost:8008/ and you're good to go!
The API is available at http://localhost:8008/api/.
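If you'd rather poke the API from a terminal first, here's a minimal sketch with curl. The exact routes are an assumption on my part: FastAPI apps typically expose an OpenAPI schema and interactive docs, but where Serge mounts them depends on its configuration.

```bash
# Hypothetical paths - adjust to whatever routes Serge actually serves.
curl http://localhost:8008/api/openapi.json   # machine-readable schema (assumption)
curl http://localhost:8008/api/docs           # interactive Swagger UI (assumption)
```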
## Models
Currently only the 7B, 13B, and 30B Alpaca models are supported. There's a script for downloading them inside the container, described above.
If you have existing weights from another project, you can add them to the serge_weights volume using `docker cp`.
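For example, a minimal sketch of copying weights into the running container; the weight filename and the destination path are assumptions, so check the serge_weights mount point in docker-compose.yml for the real location:

```bash
# Hypothetical example: copy existing Alpaca weights into the container.
# The filename and /usr/src/app/weights path are assumptions; verify the
# serge_weights volume mount in docker-compose.yml before copying.
docker cp ./ggml-alpaca-7b-q4.bin serge:/usr/src/app/weights/
```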
### ⚠️ A note on memory usage
llama.cpp will simply crash if there isn't enough free memory to load your model:
- 7B requires about 4.5 GB of free RAM
- 13B requires about 12 GB
- 30B requires about 20 GB
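Before downloading a bigger model, it's worth checking how much memory is actually available. On Linux (including WSL2) you can use `free`; on Docker Desktop the limit that matters is the VM's memory allocation, which the Docker daemon reports:

```bash
# Show total / used / available memory in human-readable units
free -h

# Memory visible to the Docker daemon (in bytes) - on Docker Desktop this
# reflects the VM's memory limit, which must fit the model you pick
docker info --format '{{.MemTotal}}'
```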
## Support
Feel free to join the Discord if you need help with the setup: https://discord.gg/62Hc6FEYQH
## Contributing
Serge is always open to contributions! If you catch a bug or have a feature idea, feel free to open an issue or a PR.
If you want to run Serge in development mode (with hot-module reloading for Svelte & autoreload for FastAPI) you can do so like this:

```bash
docker compose -f docker-compose.dev.yml up -d --build
```
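Once the dev stack is up, following the logs is a quick way to confirm that hot reloading kicks in as you edit:

```bash
# Tail the dev stack logs to watch the Svelte & FastAPI reloaders
docker compose -f docker-compose.dev.yml logs -f
```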
## What's next
- Front-end to interface with the API
- Pass model parameters when creating a chat
- User profiles & authentication
- Different prompt options
- LangChain integration with a custom LLM
- Support for other llama models, quantization, etc.
And a lot more!