oobabooga 113f94b61e Bump transformers (16-bit llama must be reconverted/redownloaded) 2 years ago
GPTQ_loader.py 39f3fec913 Broaden GPTQ-for-LLaMA branch support (#820) 2 years ago
LoRA.py a21e580782 Move an import 2 years ago
RWKV.py 09b0a3aafb Add repetition_penalty 2 years ago
api.py 3f3e42e26c Refactor several function calls and the API 2 years ago
callbacks.py b246d17513 Fix `type object is not subscriptable` 2 years ago
chat.py e94ab5dac1 Minor fixes 2 years ago
deepspeed_parameters.py f38c9bf428 Fix deepspeed (oops) 3 years ago
extensions.py 1edfb96778 Fix loading extensions from within the interface 2 years ago
html_generator.py 8203ce0cac Stop character pic from being cached when changing chars or clearing. (#798) 2 years ago
llamacpp_model.py 2c52310642 Add --threads flag for llama.cpp 2 years ago
llamacpp_model_alternative.py 03cb44fc8c Add new llama.cpp library (2048 context, temperature, etc now work) 2 years ago
models.py 113f94b61e Bump transformers (16-bit llama must be reconverted/redownloaded) 2 years ago
shared.py 378d21e80c Add LLaMA-Precise preset (#767) 2 years ago
text_generation.py 113f94b61e Bump transformers (16-bit llama must be reconverted/redownloaded) 2 years ago
training.py 0c7ef26981 Lora trainer improvements (#763) 2 years ago
ui.py d30a14087f Further reorganize the UI 2 years ago