* Refactor slider for better mobile use
* Call ensureInitialized() at startup
* Add permission for local network access (0.0.0.0)
* Check if no model is saved
* Move GenerationManager out of setState
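A minimal Flutter sketch of the setState change, with a hypothetical
GenerationManager stand-in: the manager is created once in initState
rather than inside a setState callback, so UI rebuilds no longer construct
new instances.

```dart
import 'package:flutter/material.dart';

class GenerationManager {} // hypothetical stand-in for Sherpa's class

void main() => runApp(const MaterialApp(home: ChatPage()));

class ChatPage extends StatefulWidget {
  const ChatPage({super.key});

  @override
  State<ChatPage> createState() => _ChatPageState();
}

class _ChatPageState extends State<ChatPage> {
  // Created once here, not inside setState, so rebuilds reuse the same
  // manager instead of constructing a fresh one on every state change.
  late final GenerationManager _manager;
  String _output = '';

  @override
  void initState() {
    super.initState();
    _manager = GenerationManager();
  }

  // setState is now limited to mutating UI state such as the output text.
  void _appendToken(String token) => setState(() => _output += token);

  @override
  Widget build(BuildContext context) => Text(_output);
}
```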
The llama.cpp logic is built around the prompt ending with the reverse
prompt and the actual user input being passed separately.
Adjust Sherpa to do the same, rather than appending the first line of
user input to the prompt.
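A minimal sketch of the adjusted call; startGeneration and its parameters
are hypothetical names, not Sherpa's actual API:

```dart
/// Hypothetical stand-in for Sherpa's generation entry point.
void startGeneration({
  required String prompt,
  required String input,
  required String reversePrompt,
}) {
  // ... hand the pieces to the native generation loop ...
}

void main() {
  const prePrompt = 'A chat between a user and an assistant named Bob.';
  const userText = 'Hello, Bob.';

  startGeneration(
    prompt: '$prePrompt\nUser:', // the prompt ends with the reverse prompt
    input: userText,             // user input is passed separately
    reversePrompt: 'User:',      // generation stops when the model emits this
  );
}
```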
On first run on my Android device, the pre-prompt is empty; it never gets
initialized to any value.
This is because SharedPreferences performs asynchronous disk I/O,
and initDefaultPrompts() uses a different SharedPreferences instance from
getPrePrompts(). There's no guarantee that a preferences update on one
instance will become immediately available in another.
Tweak the logic to not depend on synchronization between two
SharedPreferences instances.
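A minimal sketch of that tweak, assuming a 'pre_prompts' key and a string
list as the storage format (the real key and format may differ):

```dart
import 'package:shared_preferences/shared_preferences.dart';

const _defaultPrePrompt = 'A chat between a user and an assistant.';

/// Seed the default and read it back through the *same* SharedPreferences
/// instance, so nothing depends on two instances staying in sync.
Future<List<String>> getPrePrompts() async {
  final prefs = await SharedPreferences.getInstance();
  final saved = prefs.getStringList('pre_prompts');
  if (saved != null && saved.isNotEmpty) return saved;

  // First run: initialize the default in place rather than relying on a
  // separate initDefaultPrompts() call made on another instance.
  await prefs.setStringList('pre_prompts', [_defaultPrePrompt]);
  return [_defaultPrePrompt];
}
```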
Update llama.cpp to the latest version as part of an effort to make this
app usable on my Samsung Galaxy S10 smartphone.
The newer llama.cpp includes a fix for a double-close bug that was causing
the app to crash immediately upon starting the AI conversation (llama.cpp
commit 47f61aaa5f76d04).
It also adds support for 3B models, which are considerably smaller. The
llama-7B models were causing Android's low memory killer to terminate
Sherpa after just a few words of conversation, whereas new models such as
orca-mini-3b.ggmlv3.q4_0.bin work on this device without quickly exhausting
all available memory.
llama.cpp's model compatibility has changed with this update, so ggml
files that worked in the previous version are unlikely to work now; they
need converting. However, the orca-mini offering is already in the new
format and works out of the box.
llama.cpp's API has changed in this update. Rather than rework the Dart
code, I opted to keep the generation logic in C++, using llama.cpp's
example code as a base. This solution is included in a new "llamasherpa"
library which calls into llama.cpp. Since lots of data is passed around in
large arrays, I expect running this in Dart incurred significant overhead,
and this native approach should perform considerably faster.
This eliminates the need for Sherpa's Dart code to call llama.cpp directly,
so there's no need to separately maintain a modified version of llama.cpp
and we can use the official upstream.
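A sketch of what the Dart side of such a bridge can look like with
dart:ffi; the library name libllamasherpa.so and the llamasherpa_generate
symbol are assumptions rather than the library's actual exports:

```dart
import 'dart:ffi';

import 'package:ffi/ffi.dart';

typedef _GenerateNative = Void Function(Pointer<Utf8>, Pointer<Utf8>);
typedef _GenerateDart = void Function(Pointer<Utf8>, Pointer<Utf8>);

final DynamicLibrary _lib = DynamicLibrary.open('libllamasherpa.so');
final _GenerateDart _generate = _lib
    .lookupFunction<_GenerateNative, _GenerateDart>('llamasherpa_generate');

/// Only two small strings cross the FFI boundary; the token arrays and
/// model state stay on the C++ side of llamasherpa.
void generate(String prompt, String input) {
  final promptPtr = prompt.toNativeUtf8();
  final inputPtr = input.toNativeUtf8();
  try {
    _generate(promptPtr, inputPtr);
  } finally {
    malloc.free(promptPtr);
    malloc.free(inputPtr);
  }
}
```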
Set the main default prompt to chat-with-bob from llama.cpp. This seems to
produce much more useful conversations with the llama-7b and orca-mini-3b
models I have tested.
Also make the reverse prompt consistently "User:" in both default prompt
options, and set the default reverse prompt detection to the same value.
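Sketched as configuration, with illustrative constant names and the
chat-with-bob text abridged (the full version ships with llama.cpp):

```dart
// Illustrative constants; the "..." elides the rest of the example dialog.
const reversePrompt = 'User:';
const chatWithBob =
    'Transcript of a dialog, where the User interacts with an Assistant '
    'named Bob. ...\n'
    'User: Hello, Bob.\n'
    'Bob: Hello. How may I help you today?\n';
const defaultPrompt = '$chatWithBob$reversePrompt'; // ends with "User:"
```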