Update README.md

I didn't test the PYTHONPATH addition; I solved that problem by "cp llava/eval/run_llava.py ."
commit b7f50f4fe8 (parent afd373ad21)
Author: joris-sense
Date: 2024-08-27 23:55:14 +02:00 (committed via GitHub)


@@ -136,8 +136,16 @@ You can find more references in this folder: ```scripts/more```.
## Inference
You can try our ```LLaVA-MORE``` in the Image-To-Text task using the following steps:
- Get a Hugging Face account, request approval for access to Llama 3.1, and create a read-only token to download the models;
- export the necessary environment variables as follows:
```bash
export PYTHONPATH="${PYTHONPATH}:."
export HF_TOKEN=<your_huggingface_token>
export TOKENIZER_PATH=meta-llama/Meta-Llama-3.1-8B-Instruct
```
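The ```PYTHONPATH``` addition makes the repository root importable, which is also why the workaround in the commit message (copying ```run_llava.py``` to the top level) has the same effect. A minimal sketch of what the export does (note that the shell always uses ```:``` as the separator, while Python's ```os.pathsep``` is platform-dependent):

```python
import os

# Simulate `export PYTHONPATH="${PYTHONPATH}:."`: the current directory
# is appended to the interpreter's module search path, so the `llava`
# package resolves when the script is run from the repository root.
existing = os.environ.get("PYTHONPATH", "")
updated = f"{existing}{os.pathsep}." if existing else "."
os.environ["PYTHONPATH"] = updated
print(os.environ["PYTHONPATH"].split(os.pathsep)[-1])  # → .
```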
- run the following script:
```
python -u llava/eval/run_llava.py
```
If you run into out-of-memory errors, consider loading the model weights in 8-bit (```load_in_8bit=True```).
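As a rough illustration of why 8-bit loading helps, here is a back-of-the-envelope estimate of the weight memory for an 8B-parameter model (an assumption for illustration; activations and the KV cache are not counted):

```python
# fp16 stores 2 bytes per parameter, int8 stores 1 byte,
# so `load_in_8bit=True` roughly halves the weight footprint.
params = 8_000_000_000
fp16_gib = params * 2 / 2**30
int8_gib = params * 1 / 2**30
print(f"fp16: {fp16_gib:.1f} GiB, int8: {int8_gib:.1f} GiB")
# → fp16: 14.9 GiB, int8: 7.5 GiB
```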