oobabooga | 1a823aaeb5 | Clear text input for chat (#715 from bmoconno/clear-chat-input) | 2 years ago
oobabooga | 0dc6fa038b | Use gr.State() to store the user input | 2 years ago
oobabooga | 5f3f3faa96 | Better handle CUDA out of memory errors in chat mode | 2 years ago
Brian O'Connor | d0f9625f0b | Clear text input for chat | 2 years ago
oobabooga | b0890a7925 | Add shared.is_chat() function | 2 years ago
oobabooga | b38ba230f4 | Update download-model.py | 2 years ago
oobabooga | b6f817be45 | Update README.md | 2 years ago
oobabooga | 88fa38ac01 | Update README.md | 2 years ago
oobabooga | 526d5725db | Update download-model.py | 2 years ago
oobabooga | 4b57bd0d99 | Update README.md | 2 years ago
oobabooga | b53bec5a1f | Update README.md | 2 years ago
oobabooga | 9160586c04 | Update README.md | 2 years ago
oobabooga | 7ec11ae000 | Update README.md | 2 years ago
oobabooga | b857f4655b | Update shared.py | 2 years ago
oobabooga | 012f4f83b8 | Update README.md | 2 years ago
oobabooga | fcda3f8776 | Add also_return_rows to generate_chat_prompt | 2 years ago
oobabooga | 8c51b405e4 | Progress towards generalizing Interface mode tab | 2 years ago
oobabooga | 23116b88ef | Add support for resuming downloads (#654 from nikita-skakun/support-partial-downloads) | 2 years ago
oobabooga | 74462ac713 | Don't override the metadata when checking the sha256sum | 2 years ago
oobabooga | 2c52310642 | Add --threads flag for llama.cpp | 2 years ago
oobabooga | eeafd60713 | Fix streaming | 2 years ago
oobabooga | 52065ae4cd | Add repetition_penalty | 2 years ago
oobabooga | 2259143fec | Fix llama.cpp with --no-stream | 2 years ago
oobabooga | 875de5d983 | Update ggml template | 2 years ago
oobabooga | cbfe0b944a | Update README.md | 2 years ago
oobabooga | 6a44f4aec6 | Add support for downloading ggml files | 2 years ago
oobabooga | 3a47a602a3 | Detect ggml*.bin files automatically | 2 years ago
oobabooga | 0aee7341d8 | Properly count tokens/s for llama.cpp in chat mode | 2 years ago
oobabooga | 5c4e44b452 | llama.cpp documentation | 2 years ago
oobabooga | 6fd70d0032 | Add llama.cpp support (#447 from thomasantony/feature/llamacpp) | 2 years ago