
text-generation-webui

A Gradio web UI for running large language models locally. Supports GPT-J-6B, GPT-NeoX-20B, OPT, Galactica, and many others.

webui screenshot

Installation

Create a conda environment:

conda create -n textgen
conda activate textgen

Install the appropriate PyTorch version for your GPU. For NVIDIA GPUs, this should work:

conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

Install the requirements:

pip install -r requirements.txt

Downloading models

Models should be placed under models/model-name.
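For instance, a resulting directory tree might look like this (the file names are illustrative, taken from a typical Hugging Face checkpoint, not an exhaustive list):

```
text-generation-webui/
└── models/
    ├── gpt-j-6B/
    │   ├── config.json
    │   ├── vocab.json
    │   ├── merges.txt
    │   └── pytorch_model.bin
    └── opt-1.3b/
        └── ...
```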

Hugging Face

Hugging Face is the main place to download models. For instance, you can find the files for the model gpt-j-6B on its Hugging Face model page.

The files that you need to download and place under models/gpt-j-6B are the json, txt, and pytorch*.bin files; the remaining files are not necessary.
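As a sketch, the rule above (keep the json, txt, and pytorch*.bin files) can be expressed as a small filename filter. The example file list here is purely illustrative, not the actual contents of the gpt-j-6B repo:

```python
import fnmatch

# Patterns for the files worth downloading, per the rule above.
KEEP_PATTERNS = ["*.json", "*.txt", "pytorch*.bin"]

def files_to_download(repo_files):
    """Return only the files that need to go under models/gpt-j-6B."""
    return [
        name for name in repo_files
        if any(fnmatch.fnmatch(name, pat) for pat in KEEP_PATTERNS)
    ]

# Illustrative file list (not the exact repo contents):
example = [
    "config.json", "vocab.json", "merges.txt",
    "pytorch_model.bin", "flax_model.msgpack", ".gitattributes",
]
print(files_to_download(example))
# → ['config.json', 'vocab.json', 'merges.txt', 'pytorch_model.bin']
```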

GPT-4chan

GPT-4chan has been removed from Hugging Face, so you need to download it elsewhere. You have two options:

Starting the webui

conda activate textgen
python server.py

Then browse to http://localhost:7860/?__theme=dark