mirror of
https://github.com/ahmetoner/whisper-asr-webservice.git
synced 2023-04-14 03:48:29 +03:00
Update README.md
Added documentation on how to cache the ASR models
README.md (10 additions)
@@ -143,6 +143,16 @@ docker run -d --gpus all -p 9000:9000 whisper-asr-webservice-gpu

```sh
docker run -d --gpus all -p 9000:9000 -e ASR_MODEL=base whisper-asr-webservice-gpu
```
## Cache
The ASR model is downloaded each time you start the container. With the large model, this can take some time. To skip the download and shorten container startup, mount the cache directory (`/root/.cache/whisper`) on persistent storage. On subsequent starts, the ASR model is loaded from the cache instead of being downloaded again.
**Important: this will prevent you from receiving any updates to the models.**
```sh
docker run -d -p 9000:9000 -e ASR_MODEL=large -v //c/tmp/whisper:/root/.cache/whisper onerahmet/openai-whisper-asr-webservice:latest
```
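If you prefer Docker-managed storage over a bind mount to a host path, a named volume works the same way. A minimal sketch (the volume name `whisper-cache` is an example, not part of the project):

```shell
# Create a Docker-managed named volume once
# ("whisper-cache" is an example name, not from the original README).
docker volume create whisper-cache

# Mount it at the model cache path; later restarts reuse the cached model.
docker run -d -p 9000:9000 -e ASR_MODEL=large \
  -v whisper-cache:/root/.cache/whisper \
  onerahmet/openai-whisper-asr-webservice:latest
```

Named volumes avoid host-path quirks such as the `//c/...` prefix needed for Git Bash on Windows.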
## TODO
- Unit tests