oobabooga | d8e950d6bd | Don't load the model twice when using --lora | 2 years ago
oobabooga | fd99995b01 | Make the Stop button more consistent in chat mode | 2 years ago
oobabooga | 4f5c2ce785 | Fix chat_generation_attempts | 2 years ago
oobabooga | 04417b658b | Update README.md | 2 years ago
oobabooga | bb4cb22453 | Download .pt files using download-model.py (for 4-bit models) | 2 years ago
oobabooga | 143b5b5edf | Mention one-click-bandaid in the README | 2 years ago
oobabooga | 8747c74339 | Another missing import | 2 years ago
oobabooga | 7078d168c3 | Missing import | 2 years ago
oobabooga | d1327f99f9 | Fix broken callbacks.py | 2 years ago
oobabooga | 9bdb3c784d | Minor fix | 2 years ago
oobabooga | b0abb327d8 | Update LoRA.py | 2 years ago
oobabooga | bf22d16ebc | Clear cache while switching LoRAs | 2 years ago
oobabooga | 4578e88ffd | Stop the bot from talking for you in chat mode | 2 years ago
oobabooga | 9bf6ecf9e2 | Fix LoRA device map (attempt) | 2 years ago
oobabooga | c5ebcc5f7e | Change the default names (#518) | 2 years ago
oobabooga | 29bd41d453 | Fix LoRA in CPU mode | 2 years ago
oobabooga | eac27f4f55 | Make LoRAs work in 16-bit mode | 2 years ago
oobabooga | bfa81e105e | Fix FlexGen streaming | 2 years ago
oobabooga | 7b6f85d327 | Fix markdown headers in light mode | 2 years ago
oobabooga | de6a09dc7f | Properly separate the original prompt from the reply | 2 years ago
oobabooga | d5fc1bead7 | Merge pull request #489 from Brawlence/ext-fixes | 2 years ago
oobabooga | bfb1be2820 | Minor fix | 2 years ago
oobabooga | 0abff499e2 | Use image.thumbnail | 2 years ago
oobabooga | 104212529f | Minor changes | 2 years ago
wywywywy | 61346b88ea | Add "seed" menu in the Parameters tab | 2 years ago
Φφ | 5389fce8e1 | Extensions performance & memory optimisations | 2 years ago
oobabooga | 45b7e53565 | Only catch proper Exceptions in the text generation function | 2 years ago
oobabooga | 6872ffd976 | Update README.md | 2 years ago
oobabooga | db4219a340 | Update comments | 2 years ago
oobabooga | 7618f3fe8c | Add -gptq-preload for 4-bit offloading (#460) | 2 years ago