Commit History

Author SHA1 Message Date
  Thomas Antony 7745faa7bb Add llamacpp to models.py 2 years ago
  oobabooga 1cb9246160 Adapt to the new model names 2 years ago
  oobabooga 53da672315 Fix FlexGen 2 years ago
  oobabooga ee95e55df6 Fix RWKV tokenizer 2 years ago
  oobabooga fde92048af Merge branch 'main' into catalpaaa-lora-and-model-dir 2 years ago
  oobabooga 49c10c5570 Add support for the latest GPTQ models with group-size (#530) 2 years ago
  catalpaaa b37c54edcf lora-dir, model-dir and login auth 2 years ago
  oobabooga a6bf54739c Revert models.py (accident) 2 years ago
  oobabooga a80aa65986 Update models.py 2 years ago
  oobabooga ddb62470e9 --no-cache and --gpu-memory in MiB for fine VRAM control 2 years ago
  oobabooga e26763a510 Minor changes 2 years ago
  Wojtek Kowaluk 7994b580d5 clean up duplicated code 2 years ago
  Wojtek Kowaluk 30939e2aee add mps support on apple silicon 2 years ago
  oobabooga ee164d1821 Don't split the layers in 8-bit mode by default 2 years ago
  oobabooga e085cb4333 Small changes 2 years ago
  awoo 83cb20aad8 Add support for --gpu-memory witn --load-in-8bit 2 years ago
  oobabooga 1c378965e1 Remove unused imports 2 years ago
  oobabooga 66256ac1dd Make the "no GPU has been detected" message more descriptive 2 years ago
  oobabooga 265ba384b7 Rename a file, add deprecation warning for --load-in-4bit 2 years ago
  Ayanami Rei 8778b756e6 use updated load_quantized 2 years ago
  Ayanami Rei e1c952c41c make argument non case-sensitive 2 years ago
  Ayanami Rei 3c9afd5ca3 rename method 2 years ago
  Ayanami Rei edbc61139f use new quant loader 2 years ago
  oobabooga 65dda28c9d Rename --llama-bits to --gptq-bits 2 years ago
  oobabooga fed3617f07 Move LLaMA 4-bit into a separate file 2 years ago
  draff 001e638b47 Make it actually work 2 years ago
  draff 804486214b Re-implement --load-in-4bit and update --llama-bits arg description 2 years ago
  ItsLogic 9ba8156a70 remove unnecessary Path() 2 years ago
  draff e6c631aea4 Replace --load-in-4bit with --llama-bits 2 years ago
  oobabooga e9dbdafb14 Merge branch 'main' into pt-path-changes 2 years ago