dependabot[bot] | 39fa6e57cc | Bump llama-cpp-python from 0.1.30 to 0.1.32 | 2 years ago
oobabooga | 5234071c04 | Improve Instruct mode text readability | 2 years ago
IggoOnCode | 09d8119e3c | Add CPU LoRA training (#938) | 2 years ago
Alex "mcmonkey" Goodwin | 0caf718a21 | add on-page documentation to parameters (#1008) | 2 years ago
oobabooga | 85a7954823 | Update settings-template.json | 2 years ago
oobabooga | d37b4f76b1 | Merge branch 'main' of github.com:oobabooga/text-generation-webui | 2 years ago
oobabooga | bd04ff27ad | Make the bos token optional | 2 years ago
oobabooga | f035b01823 | Update README.md | 2 years ago
Jeff Lefebvre | b7ca89ba3f | Mention that build-essential is required (#1013) | 2 years ago
loeken | 52339e9b20 | add make/g++ to docker (#1015) | 2 years ago
oobabooga | 4961f43702 | Improve header bar colors | 2 years ago
oobabooga | 617530296e | Instruct mode color/style improvements | 2 years ago
oobabooga | 0f1627eff1 | Don't treat Instruct mode histories as regular histories | 2 years ago
oobabooga | d679c4be13 | Change a label | 2 years ago
oobabooga | 45244ed125 | More descriptive download info | 2 years ago
oobabooga | 7e70741a4e | Download models from Model tab (#954 from UsamaKenway/main) | 2 years ago
oobabooga | 11b23db8d4 | Remove unused imports | 2 years ago
oobabooga | 2c14df81a8 | Use download-model.py to download the model | 2 years ago
oobabooga | c6e9ba20a4 | Merge branch 'main' into UsamaKenway-main | 2 years ago
oobabooga | 843f672227 | fix random seeds to actually randomize (#1004 from mcmonkey4eva/seed-fix) | 2 years ago
oobabooga | 769aa900ea | Print the used seed | 2 years ago
oobabooga | 32d078487e | Add llama-cpp-python to requirements.txt | 2 years ago
Alex "mcmonkey" Goodwin | 30befe492a | fix random seeds to actually randomize | 2 years ago
oobabooga | 1911504f82 | Minor bug fix | 2 years ago
BlueprintCoding | 8178fde2cb | Added dropdown to character bias. (#986) | 2 years ago
oobabooga | dba2000d2b | Do things that I am not proud of | 2 years ago
oobabooga | 65552d2157 | Merge branch 'main' of github.com:oobabooga/text-generation-webui | 2 years ago
oobabooga | 8c6155251a | More robust 4-bit model loading | 2 years ago
MarkovInequality | 992663fa20 | Added xformers support to Llama (#950) | 2 years ago
Brian O'Connor | 625d81f495 | Update character log logic (#977) | 2 years ago