Merge pull request #5 from joris-sense/patch-1
Ran into trouble trying out run_llava.py
README.md
@@ -136,8 +136,17 @@ You can find more references in this folder: ```scripts/more```.
## Inference
-You can try our ```LLaVA-MORE``` in the Image-To-Text task by running the following script.
+You can try our ```LLaVA-MORE``` with LLaMA 3.1 in the Image-To-Text task using the following script.
``` python
source activate more
cd local/path/LLaVA-MORE
export PYTHONPATH=.

# load the original LLaMA 3.1 tokenizer using a valid read-only Hugging Face token
export HF_TOKEN=hf_read_token
# tokenizer_model_path
export TOKENIZER_PATH=meta-llama/Meta-Llama-3.1-8B-Instruct

python -u llava/eval/run_llava.py
```
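The two exported variables are consumed by the evaluation script. A minimal sketch of how they would typically be read, assuming ```run_llava.py``` looks them up through the standard ```os.environ``` interface (the variable names come from the snippet above; the rest is illustrative, not the script's actual internals):

```python
import os

# HF_TOKEN authenticates against the gated meta-llama repository;
# it is None here if the token was not exported beforehand.
hf_token = os.environ.get("HF_TOKEN")

# TOKENIZER_PATH selects which tokenizer to download; the default
# mirrors the value exported in the snippet above.
tokenizer_path = os.environ.get(
    "TOKENIZER_PATH", "meta-llama/Meta-Llama-3.1-8B-Instruct"
)

token_available = hf_token is not None
```

A missing or expired token is the most common cause of failures when pulling the gated LLaMA 3.1 tokenizer, so checking ```token_available``` early gives a clearer error than a mid-download 401.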
If you run into out-of-memory errors, consider loading the model weights in 8-bit (```load_in_8bit=True```).
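The exact loading code inside ```run_llava.py``` is not shown here; this is a hedged sketch of how the 8-bit flag is typically passed to a ```transformers```-style ```from_pretrained``` call. The model name and ```device_map``` value are illustrative, and ```load_in_8bit``` additionally requires the ```bitsandbytes``` package:

```python
# Illustrative kwargs for 8-bit loading; not the actual run_llava.py internals.
load_kwargs = {
    "load_in_8bit": True,   # quantize linear-layer weights to int8 at load time
    "device_map": "auto",   # let accelerate place layers across available devices
}

# The call below is left commented out because it downloads multi-GB weights:
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "meta-llama/Meta-Llama-3.1-8B-Instruct", **load_kwargs
# )
```

8-bit loading roughly halves GPU memory use compared to fp16 weights, at a small cost in generation quality and speed.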