Commit history

Author     SHA1        Message                                            Date
oobabooga  8e89bc596b  Fix encode() for RWKV                              2 years ago
oobabooga  19a34941ed  Add proper streaming to RWKV                       2 years ago
oobabooga  8660227e1b  Add top_k to RWKV                                  2 years ago
oobabooga  20bd645f6a  Fix bug in multigpu setups (attempt 3)             2 years ago
oobabooga  09a7c36e1b  Minor improvement while running custom models      2 years ago
oobabooga  24c4c20391  Fix bug in multigpu setups (attempt #2)            2 years ago
oobabooga  d88b7836c6  Fix bug in multigpu setups                         2 years ago
oobabooga  e91f4bc25a  Add RWKV tokenizer                                 2 years ago
oobabooga  a54b91af77  Improve readability                                2 years ago
oobabooga  8e706df20e  Fix a memory leak when text streaming is on        2 years ago
oobabooga  c33715ad5b  Move towards HF LLaMA implementation               2 years ago
oobabooga  c93f1fa99b  Count the tokens more conservatively               2 years ago
oobabooga  05e703b4a4  Print the performance information more reliably    2 years ago
oobabooga  a345a2acd2  Add a tokenizer placeholder                        2 years ago
oobabooga  5b354817f6  Make chat minimally work with LLaMA                2 years ago
oobabooga  ea5c5eb3da  Add LLaMA support                                  2 years ago
oobabooga  7bbe32f618  Don't return a value in an iterator function       2 years ago
oobabooga  ff9f649c0c  Remove some unused imports                         2 years ago
oobabooga  955cf431e8  Minor consistency fix                              2 years ago
oobabooga  831ac7ed3f  Add top_p                                          2 years ago
oobabooga  7c4d5ca8cc  Improve the text generation call a bit             2 years ago
oobabooga  0f6708c471  Sort the imports                                   2 years ago
oobabooga  e735806c51  Add a generate() function for RWKV                 2 years ago
oobabooga  f871971de1  Trying to get the chat to work                     2 years ago
oobabooga  ebd698905c  Add streaming to RWKV                              2 years ago
oobabooga  70e522732c  Move RWKV loader into a separate file              2 years ago
oobabooga  ebc64a408c  RWKV support prototype                             2 years ago
oobabooga  6e843a11d6  Fix FlexGen in chat mode                           2 years ago
oobabooga  fa58fd5559  Proper way to free the cuda cache                  2 years ago
oobabooga  700311ce40  Empty the cuda cache at model.generate()           2 years ago