alihan/llama-cpp-python
mirror of https://github.com/abetlen/llama-cpp-python.git synced 2023-09-07 17:34:22 +03:00
Files in llama-cpp-python/llama_cpp at commit 76a82babef9703b814ae4cea28cc63c2340ed743

Latest commit: 76a82babef by MillionthOdin16, 2023-04-05 17:44:53 -04:00
  "Set n_batch to the default value of 8. I think this is leftover from when n_ctx was missing and n_batch was 2048."

  server/          Set n_batch to the default value of 8. I think this is leftover from when n_ctx was missing and n_batch was 2048.   2023-04-05 17:44:53 -04:00
  __init__.py      Black formatting                                                                                                   2023-03-24 14:59:29 -04:00
  llama_cpp.py     Bugfix: wrong signature for quantize function                                                                      2023-04-04 22:36:59 -04:00
  llama_types.py   Bugfix for Python3.7                                                                                               2023-04-05 04:37:33 -04:00
  llama.py         Make Llama instance pickleable. Closes #27                                                                         2023-04-05 06:52:17 -04:00
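The llama.py commit above makes the Llama instance pickleable (issue #27). Since the real class wraps an unpicklable C-level llama.cpp context, the usual way to achieve this is to serialize only the constructor arguments in __getstate__ and rebuild the native handle in __setstate__. A minimal sketch of that pattern follows; the class and attribute names here are illustrative stand-ins, not the library's actual internals:

```python
import pickle


class _NativeContext:
    """Hypothetical stand-in for the unpicklable C-level llama.cpp handle."""

    def __init__(self, model_path, n_ctx):
        self.model_path = model_path
        self.n_ctx = n_ctx


class Llama:
    """Sketch of a wrapper that stays picklable despite holding a native handle."""

    def __init__(self, model_path, n_ctx=512, n_batch=8):
        self.model_path = model_path
        self.n_ctx = n_ctx
        self.n_batch = n_batch
        # In the real library this handle cannot be pickled directly.
        self.ctx = _NativeContext(model_path, n_ctx)

    def __getstate__(self):
        # Serialize only the constructor arguments, never the native handle.
        return {
            "model_path": self.model_path,
            "n_ctx": self.n_ctx,
            "n_batch": self.n_batch,
        }

    def __setstate__(self, state):
        # Rebuild the native context from the saved arguments on unpickling.
        self.__init__(**state)


llm = Llama("model.bin", n_ctx=2048, n_batch=8)
clone = pickle.loads(pickle.dumps(llm))
assert clone.n_ctx == 2048 and clone.ctx.n_ctx == 2048
```

The round-trip works because pickle calls __getstate__ on dump and __setstate__ on load, so the fresh instance re-creates its own context instead of trying to serialize the old one.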