alihan / llama-cpp-python
Mirror of https://github.com/abetlen/llama-cpp-python.git, synced 2023-09-07 17:34:22 +03:00
llama-cpp-python/llama_cpp at commit 76a82babef9703b814ae4cea28cc63c2340ed743

Latest commit: 76a82babef by MillionthOdin16 (2023-04-05 17:44:53 -04:00): "Set n_batch to the default value of 8. I think this is leftover from when n_ctx was missing and n_batch was 2048."
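The n_batch value referenced in this commit is the batch size handed through to llama.cpp, i.e. how many prompt tokens are evaluated per call. As a rough sketch only (it assumes the package's high-level Llama constructor with n_ctx and n_batch keyword arguments and an illustrative model path; the exact signature and defaults at this commit may differ), the parameter can be set explicitly instead of relying on the default:

    from llama_cpp import Llama

    # Sketch: construct the wrapper with an explicit batch size rather than
    # relying on the default of 8 mentioned in the commit message.
    # The model_path below is illustrative, not a file shipped with the repo.
    llm = Llama(
        model_path="./models/ggml-model.bin",
        n_ctx=2048,   # context window
        n_batch=8,    # tokens evaluated per llama.cpp batch
    )

    # Completion call; returns an OpenAI-style response dict.
    out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
    print(out["choices"][0]["text"])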
Files:

server/         Set n_batch to the default value of 8. I think this is leftover from when n_ctx was missing and n_batch was 2048.  (2023-04-05 17:44:53 -04:00)
__init__.py     Black formatting  (2023-03-24 14:59:29 -04:00)
llama_cpp.py    Bugfix: wrong signature for quantize function  (2023-04-04 22:36:59 -04:00)
llama_types.py  Bugfix for Python3.7  (2023-04-05 04:37:33 -04:00)
llama.py        Make Llama instance pickleable. Closes #27  (2023-04-05 06:52:17 -04:00) - see the pickling sketch below
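The llama.py entry notes that the Llama instance was made pickleable (#27). A common pattern for making a wrapper around a non-picklable native handle pickleable is to serialize only the constructor arguments and rebuild the handle when unpickling. The sketch below shows that general pattern with a hypothetical stand-in class; it is not the actual implementation from this commit:

    import pickle

    class NativeWrapper:
        """Hypothetical stand-in for a class holding a non-picklable C handle."""

        def __init__(self, model_path: str, n_ctx: int = 512, n_batch: int = 8):
            self._init_kwargs = dict(model_path=model_path, n_ctx=n_ctx, n_batch=n_batch)
            self._handle = object()  # placeholder for the native context

        def __getstate__(self):
            # Persist only the constructor arguments, never the native handle.
            return self._init_kwargs

        def __setstate__(self, state):
            # Recreate the object (and its native handle) from those arguments.
            self.__init__(**state)

    original = NativeWrapper("./models/ggml-model.bin", n_ctx=2048)
    clone = pickle.loads(pickle.dumps(original))
    assert clone._init_kwargs == original._init_kwargs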