oobabooga f3a2e0b8a9 Disable pre_layer when the model type is not llama 2 years ago
GPTQ_loader.py f3a2e0b8a9 Disable pre_layer when the model type is not llama 2 years ago
LoRA.py a21e580782 Move an import 2 years ago
RWKV.py 09b0a3aafb Add repetition_penalty 2 years ago
callbacks.py b246d17513 Fix `type object is not subscriptable` 2 years ago
chat.py ae1fe45bc0 One more cache reset 2 years ago
deepspeed_parameters.py f38c9bf428 Fix deepspeed (oops) 3 years ago
extensions.py 1edfb96778 Fix loading extensions from within the interface 2 years ago
html_generator.py 8ef89730a5 Try to better handle browser image cache 2 years ago
llamacpp_model.py 2c52310642 Add --threads flag for llama.cpp 2 years ago
models.py 4ab679480e allow quantized model to be loaded from model dir (#760) 2 years ago
shared.py 65d8a24a6d Show profile pictures in the Character tab 2 years ago
text_generation.py b0890a7925 Add shared.is_chat() function 2 years ago
training.py 2a267011dc Use Path.stem for simplicity 2 years ago
ui.py d30a14087f Further reorganize the UI 2 years ago