Commit History

Author SHA1 Message Date
oobabooga 65dda28c9d Rename --llama-bits to --gptq-bits 2 years ago
oobabooga fed3617f07 Move LLaMA 4-bit into a separate file 2 years ago
draff 001e638b47 Make it actually work 2 years ago
draff 804486214b Re-implement --load-in-4bit and update --llama-bits arg description 2 years ago
ItsLogic 9ba8156a70 remove unnecessary Path() 2 years ago
draff e6c631aea4 Replace --load-in-4bit with --llama-bits 2 years ago
oobabooga e9dbdafb14 Merge branch 'main' into pt-path-changes 2 years ago
oobabooga 706a03b2cb Minor changes 2 years ago
oobabooga de7dd8b6aa Add comments 2 years ago
oobabooga e461c0b7a0 Move the import to the top 2 years ago
deepdiffuser 9fbd60bf22 add no_split_module_classes to prevent tensor split error 2 years ago
deepdiffuser ab47044459 add multi-gpu support for 4bit gptq LLaMA 2 years ago
rohvani 2ac2913747 fix reference issue 2 years ago
rohvani 826e297b0e add llama-65b-4bit support & multiple pt paths 2 years ago
oobabooga 9849aac0f1 Don't show .pt models in the list 2 years ago
oobabooga 74102d5ee4 Insert to the path instead of appending 2 years ago
oobabooga 2965aa1625 Check if the .pt file exists 2 years ago
oobabooga 828a524f9a Add LLaMA 4-bit support 2 years ago
oobabooga e91f4bc25a Add RWKV tokenizer 2 years ago
oobabooga c33715ad5b Move towards HF LLaMA implementation 2 years ago
oobabooga bd8aac8fa4 Add LLaMA 8-bit support 2 years ago
oobabooga ed8b35efd2 Add --pin-weight parameter for FlexGen 2 years ago
oobabooga ea5c5eb3da Add LLaMA support 2 years ago
oobabooga 659bb76722 Add RWKVModel class 2 years ago
oobabooga 6837d4d72a Load the model by name 2 years ago
oobabooga 70e522732c Move RWKV loader into a separate file 2 years ago
oobabooga ebc64a408c RWKV support prototype 2 years ago
oobabooga 8e3e8a070f Make FlexGen work with the newest API 2 years ago
oobabooga 65326b545a Move all gradio elements to shared (so that extensions can use them) 2 years ago
oobabooga f6f792363b Separate command-line params by spaces instead of commas 2 years ago