
text-generation-webui

A Gradio web UI for running large language models locally. Supports gpt-j-6B, gpt-neox-20b, opt, galactica, and many others.

Its goal is to become the AUTOMATIC1111/stable-diffusion-webui of text generation.

webui screenshot

Installation

Create a conda environment:

conda create -n textgen
conda activate textgen

Install the PyTorch build appropriate for your GPU. For NVIDIA GPUs, this should work:

conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

Install the requirements:

pip install -r requirements.txt
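To sanity-check the environment after installation, a small helper like the following can report which packages failed to install. This helper is not part of the repo, and the package list shown is an assumption; requirements.txt is the authoritative list.

```python
from importlib.util import find_spec

def missing_packages(names):
    """Return the subset of top-level package names that cannot be imported."""
    return [name for name in names if find_spec(name) is None]

# Assumed core dependencies of server.py; check requirements.txt for the real list.
print(missing_packages(["torch", "transformers", "gradio"]))
```

An empty list means the core dependencies are importable and the webui should start.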

Downloading models

Models should be placed under models/model-name. For instance, models/gpt-j-6B for gpt-j-6B.

Hugging Face

Hugging Face is the main place to download models. These are some of my favorites:

The files that you need to download are the json, txt, and pytorch*.bin files. The remaining files are not necessary.
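As a sketch of that rule, the filter below (a hypothetical helper, not part of the repo) keeps only the json, txt, and pytorch*.bin files from a model repository listing:

```python
def is_needed(filename):
    """Keep only the json, txt, and pytorch*.bin files from a model repo."""
    return (
        filename.endswith(".json")
        or filename.endswith(".txt")
        or (filename.startswith("pytorch") and filename.endswith(".bin"))
    )

files = [
    "config.json", "vocab.txt", "pytorch_model.bin",
    "flax_model.msgpack", "tf_model.h5",
]
print([f for f in files if is_needed(f)])  # keeps only the first three
```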

GPT-4chan

GPT-4chan has been removed from Hugging Face, so you need to download it elsewhere. You have two options:

Converting to pytorch

The script convert-to-torch.py allows you to convert models to .pt format, which is about 10x faster to load:

python convert-to-torch.py models/model-name/

The output model will be saved to torch-dumps/model-name.pt. When you load a new model, the webui first looks for this .pt file; if it is not found, it loads the model as usual from models/model-name/.
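The lookup order described above can be sketched as follows. This is a hypothetical simplification, not the actual code in server.py:

```python
from pathlib import Path

def model_load_path(model_name, base=Path(".")):
    """Prefer the fast torch-dumps/<name>.pt dump; fall back to models/<name>/."""
    pt_file = base / "torch-dumps" / f"{model_name}.pt"
    if pt_file.exists():
        return pt_file
    return base / "models" / model_name

print(model_load_path("gpt-j-6B"))
```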

Starting the webui

conda activate textgen
python server.py

Then browse to http://localhost:7860/?__theme=dark

Presets

Inference settings presets can be created under presets/ as text files. These files are detected automatically at startup.
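The exact preset syntax is defined by server.py; as an illustration only, a preset file might list generation parameters, one per line. Every parameter name and value below is an assumption, not taken from the repo:

```
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.1,
```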

Contributing

Pull requests are welcome.