Commit history

Author SHA1 Message Date
  oobabooga cdfa787bcb Update README 2 years ago
  oobabooga 3bda907727 Merge pull request #366 from oobabooga/lora 2 years ago
  oobabooga 614dad0075 Remove unused import 2 years ago
  oobabooga a717fd709d Sort the imports 2 years ago
  oobabooga 7d97287e69 Update settings-template.json 2 years ago
  oobabooga 29fe7b1c74 Remove LoRA tab, move it into the Parameters menu 2 years ago
  oobabooga 214dc6868e Several QoL changes related to LoRA 2 years ago
  oobabooga 4c130679c7 Merge pull request #377 from askmyteapot/Fix-Multi-gpu-GPTQ-Llama-no-tokens 2 years ago
  askmyteapot 53b6a66beb Update GPTQ_Loader.py 2 years ago
  oobabooga 0cecfc684c Add files 2 years ago
  oobabooga 104293f411 Add LoRA support 2 years ago
  oobabooga ee164d1821 Don't split the layers in 8-bit mode by default 2 years ago
  oobabooga 0a2aa79c4e Merge pull request #358 from mayaeary/8bit-offload 2 years ago
  oobabooga e085cb4333 Small changes 2 years ago
  oobabooga dd1c5963da Update README 2 years ago
  oobabooga 38d7017657 Add all command-line flags to "Interface mode" 2 years ago
  awoo 83cb20aad8 Add support for --gpu-memory witn --load-in-8bit 2 years ago
  oobabooga 23a5e886e1 The LLaMA PR has been merged into transformers 2 years ago
  oobabooga d54f3f4a34 Add no-stream checkbox to the interface 2 years ago
  oobabooga 1c378965e1 Remove unused imports 2 years ago
  oobabooga a577fb1077 Keep GALACTICA special tokens (#300) 2 years ago
  oobabooga 25a00eaf98 Add "Experimental" warning 2 years ago
  oobabooga 599d3139fd Increase the reload timeout a bit 2 years ago
  oobabooga 4d64a57092 Add Interface mode tab 2 years ago
  oobabooga b50172255a Merge branch 'main' of github.com:oobabooga/text-generation-webui 2 years ago
  oobabooga ffb898608b Mini refactor 2 years ago
  oobabooga d3a280e603 Merge pull request #348 from mayaeary/feature/koboldai-api-share 2 years ago
  oobabooga 445ebf0ba8 Update README.md 2 years ago
  awoo 0552ab2e9f flask_cloudflared for shared tunnels 2 years ago
  oobabooga e9e76bb06c Delete WSL.md 2 years ago