| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| oobabooga | 012f4f83b8 | Update README.md | 2 years ago |
| oobabooga | 2c52310642 | Add --threads flag for llama.cpp | 2 years ago |
| oobabooga | cbfe0b944a | Update README.md | 2 years ago |
| oobabooga | 5c4e44b452 | llama.cpp documentation | 2 years ago |
| oobabooga | d4a9b5ea97 | Remove redundant preset (see the plot in #587) | 2 years ago |
| oobabooga | 41b58bc47e | Update README.md | 2 years ago |
| oobabooga | 3b4447a4fe | Update README.md | 2 years ago |
| oobabooga | 5d0b83c341 | Update README.md | 2 years ago |
| oobabooga | c2a863f87d | Mention the updated one-click installer | 2 years ago |
| oobabooga | 010b259dde | Update documentation | 2 years ago |
| oobabooga | 036163a751 | Change description | 2 years ago |
| oobabooga | 30585b3e71 | Update README | 2 years ago |
| oobabooga | 49c10c5570 | Add support for the latest GPTQ models with group-size (#530) | 2 years ago |
| oobabooga | 70f9565f37 | Update README.md | 2 years ago |
| oobabooga | 04417b658b | Update README.md | 2 years ago |
| oobabooga | 143b5b5edf | Mention one-click-bandaid in the README | 2 years ago |
| oobabooga | 6872ffd976 | Update README.md | 2 years ago |
| oobabooga | dd4374edde | Update README | 2 years ago |
| oobabooga | 9378754cc7 | Update README | 2 years ago |
| oobabooga | 7ddf6147ac | Update README.md | 2 years ago |
| oobabooga | ddb62470e9 | --no-cache and --gpu-memory in MiB for fine VRAM control | 2 years ago |
| oobabooga | 0cbe2dd7e9 | Update README.md | 2 years ago |
| oobabooga | d2a7fac8ea | Use pip instead of conda for pytorch | 2 years ago |
| oobabooga | a0b1a30fd5 | Specify torchvision/torchaudio versions | 2 years ago |
| oobabooga | a163807f86 | Update README.md | 2 years ago |
| oobabooga | a7acfa4893 | Update README.md | 2 years ago |
| oobabooga | dc35861184 | Update README.md | 2 years ago |
| oobabooga | f2a5ca7d49 | Update README.md | 2 years ago |
| oobabooga | 8c8286b0e6 | Update README.md | 2 years ago |
| oobabooga | 0c05e65e5c | Update README.md | 2 years ago |