oobabooga 0aee7341d8 Properly count tokens/s for llama.cpp in chat mode 2 years ago
GPTQ_loader.py 1cb9246160 Adapt to the new model names 2 years ago
LoRA.py a21e580782 Move an import 2 years ago
RWKV.py 09b0a3aafb Add repetition_penalty 2 years ago
callbacks.py b246d17513 Fix `type object is not subscriptable` 2 years ago
chat.py af65c12900 Change Stop button behavior 2 years ago
deepspeed_parameters.py f38c9bf428 Fix deepspeed (oops) 3 years ago
extensions.py 1edfb96778 Fix loading extensions from within the interface 2 years ago
html_generator.py 8579fe51dd Fix new lines in the HTML tab 2 years ago
llamacpp_model.py 9d1dcf880a General improvements 2 years ago
models.py 4c27562157 Minor changes 2 years ago
shared.py 1d1d9e40cd Add seed to settings 2 years ago
text_generation.py 0aee7341d8 Properly count tokens/s for llama.cpp in chat mode 2 years ago
training.py 58349f44a0 Handle training exception for unsupported models 2 years ago
ui.py d30a14087f Further reorganize the UI 2 years ago