Commit History

Author     SHA1        Message                                                  Date
oobabooga  2de9f122cd  Update README.md                                         2 years ago
oobabooga  e91f4bc25a  Add RWKV tokenizer                                       2 years ago
oobabooga  c855b828fe  Better handle <USER>                                     2 years ago
oobabooga  145c725c39  Bump RWKV version                                        2 years ago
oobabooga  2af66a4d4c  Fix <USER> in pygmalion replies                          2 years ago
oobabooga  a54b91af77  Improve readability                                      2 years ago
oobabooga  8e706df20e  Fix a memory leak when text streaming is on              2 years ago
oobabooga  5492e2e9f8  Add sentencepiece                                        2 years ago
oobabooga  90206204aa  Merge pull request #163 from oobabooga/hf_llama          2 years ago
oobabooga  c33715ad5b  Move towards HF LLaMA implementation                     2 years ago
oobabooga  bd8aac8fa4  Add LLaMA 8-bit support                                  2 years ago
oobabooga  c93f1fa99b  Count the tokens more conservatively                     2 years ago
oobabooga  736f61610b  Update README                                            2 years ago
oobabooga  ed8b35efd2  Add --pin-weight parameter for FlexGen                   2 years ago
oobabooga  05e703b4a4  Print the performance information more reliably          2 years ago
oobabooga  5a79863df3  Increase the sequence length, decrease batch size        2 years ago
oobabooga  e62b9b1074  Revamp the "Default" preset with HF defaults             2 years ago
oobabooga  a345a2acd2  Add a tokenizer placeholder                              2 years ago
oobabooga  4cc36dc434  Tweak the Naive preset (for LLaMA/RWKV)                  2 years ago
oobabooga  5b354817f6  Make chat minimally work with LLaMA                      2 years ago
oobabooga  ea5c5eb3da  Add LLaMA support                                        2 years ago
oobabooga  2bff646130  Stop chat from flashing dark when processing             2 years ago
oobabooga  7c70e0e2a6  Fix the download script (sort of)                        2 years ago
oobabooga  bcea196c9d  Bump flexgen version                                     2 years ago
oobabooga  76378c6cc2  Update README                                            2 years ago
oobabooga  169209805d  Model-aware prompts and presets                          2 years ago
oobabooga  024d30d1b4  Reorder imports                                          2 years ago
oobabooga  7bbe32f618  Don't return a value in an iterator function             2 years ago
oobabooga  ff9f649c0c  Remove some unused imports                               2 years ago
oobabooga  1a05860ca3  Ensure proper no-streaming with generation_attempts > 1  2 years ago