alihan/text-generation-inference
Mirror of https://github.com/huggingface/text-generation-inference.git, last synced 2023-08-23 10:47:54 +03:00
Files in text-generation-inference/server/text_generation at commit 042180d88f91d4bc9acd42ae4de3c0236d272de4
Latest commit 042180d88f (OlivierDehaene, 2022-12-08 19:37:37 +01:00): fix(server): Only pad to multiple of 8 on GPUs
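The subject of this latest commit points at a common fp16 performance detail: CUDA tensor cores are fastest when tensor dimensions are divisible by 8, so padding to a multiple of 8 only pays off on GPU and is wasted work on CPU. A minimal sketch of that idea using the standard Hugging Face tokenizer API; the model id, batch contents, and exact conditional are illustrative assumptions, not the repository's actual code:

```python
import torch
from transformers import AutoTokenizer

# Placeholder model id; the server works with whatever model it is serving.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

device = "cuda" if torch.cuda.is_available() else "cpu"

# Request pad_to_multiple_of=8 only when a GPU is in use; on CPU the extra
# padding brings no speedup, which is what the commit title describes.
batch = tokenizer(
    ["Hello world", "A longer prompt that forces some padding"],
    return_tensors="pt",
    padding=True,
    pad_to_multiple_of=8 if device == "cuda" else None,
).to(device)
```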
Name           Last commit message                                                     Last commit date
models/        fix(server): Only pad to multiple of 8 on GPUs                          2022-12-08 19:37:37 +01:00
pb/            feat(server): Support all AutoModelForCausalLM on a best effort basis   2022-10-28 19:24:00 +02:00
__init__.py    feat(server): Support all AutoModelForCausalLM on a best effort basis   2022-10-28 19:24:00 +02:00
cache.py       feat(server): Support AutoModelForSeq2SeqLM                             2022-11-04 18:03:04 +01:00
cli.py         feat(server): Support all AutoModelForCausalLM on a best effort basis   2022-10-28 19:24:00 +02:00
server.py      feat(server): Support AutoModelForSeq2SeqLM                             2022-11-04 18:03:04 +01:00
utils.py       feat(server): Add model tests (#6)                                      2022-12-08 18:49:33 +01:00
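Several of the commit messages above refer to loading arbitrary Hub models through the transformers Auto classes on a "best effort" basis. A hedged sketch of what such a dispatch between AutoModelForCausalLM and AutoModelForSeq2SeqLM could look like; the helper name and fallback order are assumptions for illustration, not the repository's code:

```python
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM


def load_model_best_effort(model_id: str):
    """Return (model, is_seq2seq) for an arbitrary Hub model id (illustrative helper)."""
    try:
        # Works for decoder-only architectures (GPT-2, BLOOM, ...).
        return AutoModelForCausalLM.from_pretrained(model_id), False
    except ValueError:
        # transformers raises ValueError when the config has no causal-LM
        # mapping (e.g. T5), so fall back to the seq2seq head instead.
        return AutoModelForSeq2SeqLM.from_pretrained(model_id), True


model, is_seq2seq = load_model_best_effort("gpt2")  # any Hub model id
```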