Latest commit: oobabooga 5f3f3faa96 Better handle CUDA out of memory errors in chat mode (2 years ago)
| File | Commit | Message | Last updated |
|---|---|---|---|
| GPTQ_loader.py | 1cb9246160 | Adapt to the new model names | 2 years ago |
| LoRA.py | a21e580782 | Move an import | 2 years ago |
| RWKV.py | 09b0a3aafb | Add repetition_penalty | 2 years ago |
| callbacks.py | b246d17513 | Fix `type object is not subscriptable` | 2 years ago |
| chat.py | 5f3f3faa96 | Better handle CUDA out of memory errors in chat mode | 2 years ago |
| deepspeed_parameters.py | f38c9bf428 | Fix deepspeed (oops) | 3 years ago |
| extensions.py | 1edfb96778 | Fix loading extensions from within the interface | 2 years ago |
| html_generator.py | 8579fe51dd | Fix new lines in the HTML tab | 2 years ago |
| llamacpp_model.py | 2c52310642 | Add --threads flag for llama.cpp | 2 years ago |
| models.py | 3a47a602a3 | Detect ggml*.bin files automatically | 2 years ago |
| shared.py | b0890a7925 | Add shared.is_chat() function | 2 years ago |
| text_generation.py | b0890a7925 | Add shared.is_chat() function | 2 years ago |
| training.py | 58349f44a0 | Handle training exception for unsupported models | 2 years ago |
| ui.py | d30a14087f | Further reorganize the UI | 2 years ago |