Docker image for LLaVA: Large Language and Vision Assistant

Installs

  • Ubuntu 22.04 LTS
  • CUDA 11.8
  • Python 3.10.12
  • LLaVA v1.1.1
  • Torch 2.0.1
  • BakLLaVA-1 model

Available on RunPod

This image is designed to run on RunPod. You can use my custom RunPod template to launch it there.

Running Locally

Install the NVIDIA CUDA Driver
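
You can verify the driver installation with nvidia-smi, which lists your GPU along with the installed driver version and the highest CUDA version it supports:

nvidia-smi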

Start the Docker container

docker run -d \
  --gpus all \
  -v /workspace \
  -p 3000:3001 \
  -p 8888:8888 \
  -e JUPYTER_PASSWORD=Jup1t3R! \
  ashleykza/llava:latest

You can of course replace the image name and tag with your own.
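
Once the container is running, a quick sanity check is to follow its logs while the model downloads and confirm that Jupyter Lab is reachable. This is a minimal sketch that assumes the run command above with its default port mappings:

CONTAINER_ID=$(docker ps -q --filter ancestor=ashleykza/llava:latest)
docker logs -f "$CONTAINER_ID"   # follow startup and model download progress
curl -I http://localhost:8888    # Jupyter Lab, protected by JUPYTER_PASSWORD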

Models

Important

If you select the 13b model, CUDA will run out of memory (OOM) on a GPU with less than 48GB of VRAM, so an A6000 or higher is recommended.
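
Before selecting the 13b model, you can check how much VRAM your GPU has with nvidia-smi:

nvidia-smi --query-gpu=name,memory.total --format=csv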

You can add an environment variable called MODEL to your Docker container to specify the model that should be downloaded. If the MODEL environment variable is not set, it defaults to SkunkworksAI/BakLLaVA-1 (see the example after the table below).

Model            Environment Variable Value    Default
llava-v1.5-13b   liuhaotian/llava-v1.5-13b     no
llava-v1.5-7b    liuhaotian/llava-v1.5-7b      no
BakLLaVA-1       SkunkworksAI/BakLLaVA-1       yes
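
For example, to download llava-v1.5-7b instead of the default model, add MODEL to the docker run command shown above:

docker run -d \
  --gpus all \
  -v /workspace \
  -p 3000:3001 \
  -p 8888:8888 \
  -e JUPYTER_PASSWORD=Jup1t3R! \
  -e MODEL=liuhaotian/llava-v1.5-7b \
  ashleykza/llava:latest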

Acknowledgements

  1. Matthew Berman for giving me a demo of LLaVA, as well as for his amazing YouTube videos.

Community and Contributing

Pull requests and issues on GitHub are welcome. Bug fixes and new features are encouraged.

You can contact me and get help with deploying your container to RunPod on the RunPod Discord Server; my username is ashleyk.

Appreciate my work?

Buy Me A Coffee
