Forget expensive NVIDIA GPUs, unify your existing devices into one powerful GPU: iPhone, iPad, Android, Mac, Linux, pretty much any device!
Update: exo is hiring. See here for more details.
Get Involved
exo is experimental software. Expect bugs early on. Create issues so they can be fixed. The exo labs team will strive to resolve issues quickly.
We also welcome contributions from the community. We have a list of bounties in this sheet.
Features
Wide Model Support
exo supports LLaMA (MLX and tinygrad) and other popular models.
Dynamic Model Partitioning
exo optimally splits up models based on the current network topology and device resources available. This enables you to run larger models than you would be able to on any single device.
Automatic Device Discovery
exo will automatically discover other devices using the best method available. Zero manual configuration.
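To illustrate the idea, here is a minimal sketch of UDP-broadcast discovery on a LAN. This is an illustration, not exo's actual protocol: the port number and message format are made up.

import json
import socket

DISCOVERY_PORT = 52415  # hypothetical port, not necessarily exo's

def announce(node_id, memory_gb):
    # Broadcast this device's presence and capabilities to the LAN.
    msg = json.dumps({"node_id": node_id, "memory_gb": memory_gb}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, ("255.255.255.255", DISCOVERY_PORT))

def listen_once():
    # Block until another device announces itself, then return its info.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", DISCOVERY_PORT))
        data, addr = sock.recvfrom(4096)
        return {"addr": addr[0], **json.loads(data)}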
ChatGPT-compatible API
exo provides a ChatGPT-compatible API for running models. It's a one-line change in your application to run models on your own hardware using exo.
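For instance, with the openai Python package (an assumption about your client library), pointing the client at a local exo node is the only change; the model name is the one used in the curl examples under Documentation.

from openai import OpenAI

# Point a standard OpenAI client at a local exo node instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed",  # placeholder; assumed unused by a local endpoint
)

response = client.chat.completions.create(
    model="llama-3.1-8b",
    messages=[{"role": "user", "content": "What is the meaning of exo?"}],
)
print(response.choices[0].message.content)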
Device Equality
Unlike other distributed inference frameworks, exo does not use a master-worker architecture. Instead, exo devices connect p2p. As long as a device is connected somewhere in the network, it can be used to run models.
exo supports different partitioning strategies to split up a model across devices. The default partitioning strategy is ring memory weighted partitioning. Inference runs in a ring where each device runs a number of model layers proportional to its memory.
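As a worked illustration (a minimal sketch, not exo's actual implementation), memory-weighted partitioning amounts to giving each device a contiguous slice of layers proportional to its share of the ring's total memory:

def partition_layers(num_layers, device_memory_gb):
    # Assign each device a contiguous range of layers proportional
    # to its share of the ring's total memory.
    total = sum(device_memory_gb.values())
    devices = list(device_memory_gb.items())
    partitions, start = {}, 0
    for i, (device, mem) in enumerate(devices):
        # The last device absorbs any rounding remainder.
        end = num_layers if i == len(devices) - 1 else start + round(num_layers * mem / total)
        partitions[device] = (start, end)
        start = end
    return partitions

# A 32-layer model across a 16 GB Mac and an 8 GB Linux box:
print(partition_layers(32, {"mac": 16, "linux": 8}))
# {'mac': (0, 21), 'linux': (21, 32)}, i.e. roughly 2/3 vs 1/3 of the layers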
Installation
The current recommended way to install exo is from source.
Prerequisites
- Python>=3.12.0 is required because of issues with asyncio in previous versions.
From source
git clone https://github.com/exo-explore/exo.git
cd exo
pip install .
# alternatively, with venv
source install.sh
Troubleshooting
- If running on Mac, MLX has an install guide with troubleshooting steps.
Performance
- There are a number of things users have empirically found to improve performance on Apple Silicon Macs:
- Upgrade to the latest version of macOS 15.
- Run ./configure_mlx.sh. This runs commands to optimize GPU memory allocation on Apple Silicon Macs.
Documentation
Example Usage on Multiple macOS Devices
Device 1:
python3 main.py
Device 2:
python3 main.py
That's it! No configuration required: exo will automatically discover the other device(s).
exo starts a ChatGPT-like WebUI (powered by tinygrad tinychat) on http://localhost:8000
For developers, exo also starts a ChatGPT-compatible API endpoint on http://localhost:8000/v1/chat/completions. Examples with curl:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.1-8b",
    "messages": [{"role": "user", "content": "What is the meaning of exo?"}],
    "temperature": 0.7
  }'
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llava-1.5-7b-hf",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What are these?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "http://images.cocodataset.org/val2017/000000039769.jpg"
            }
          }
        ]
      }
    ],
    "temperature": 0.0
  }'
Example Usage on Multiple Heterogeneous Devices (macOS + Linux)
Device 1 (MacOS):
python3 main.py --inference-engine tinygrad
Here we explicitly tell exo to use the tinygrad inference engine.
Device 2 (Linux):
python3 main.py
Linux devices will automatically default to using the tinygrad inference engine.
You can read about tinygrad-specific env vars here. For example, you can configure tinygrad to use the CPU by specifying CLANG=1.
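Assuming tinygrad's clang CPU backend is available on your system, that looks like:

CLANG=1 python3 main.py --inference-engine tinygrad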
Debugging
Enable debug logs with the DEBUG environment variable (0-9).
DEBUG=9 python3 main.py
For the tinygrad inference engine specifically, there is a separate TINYGRAD_DEBUG flag that can be used to enable debug logs (1-6).
TINYGRAD_DEBUG=2 python3 main.py
Known Issues
- 🚧 As the library is evolving so quickly, the iOS implementation has fallen behind the Python one. We have decided not to release the buggy iOS version for now, to avoid a flood of GitHub issues about outdated code. We are working on solving this properly and will make an announcement when it's ready. If you would like access to the iOS implementation now, please email alex@exolabs.net with your GitHub username and an explanation of your use-case, and you will be granted access on GitHub.
Inference Engines
exo supports the following inference engines:
- MLX
- tinygrad