Update README.md

oobabooga 2 years ago
commit 012f4f83b8
1 changed file with 38 additions and 1 deletion

README.md: +38 −1

@@ -173,6 +173,8 @@ Then browse to
 
 Optionally, you can use the following command-line flags:
 
+#### Basic settings
+
 | Flag             | Description |
 |------------------|-------------|
 | `-h`, `--help`   | show this help message and exit |
@@ -187,29 +189,64 @@ Optionally, you can use the following command-line flags:
 | `--settings SETTINGS_FILE` | Load the default interface settings from this JSON file. See `settings-template.json` for an example. If you create a file called `settings.json`, this file will be loaded by default without the need to use the `--settings` flag.|
 |  `--extensions EXTENSIONS [EXTENSIONS ...]` | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. |
 |  `--verbose`                                | Print the prompts to the terminal. |
+
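+A minimal invocation combining these basics might look like the following (a sketch; `python server.py` is assumed as the entry point, as elsewhere in this README):
+
+```
+python server.py --settings settings-template.json --verbose
+```
+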
+#### Accelerate/transformers
+
+| Flag             | Description |
+|------------------|-------------|
 | `--cpu`          | Use the CPU to generate text.|
 | `--auto-devices` | Automatically split the model across the available GPU(s) and CPU.|
 |  `--gpu-memory GPU_MEMORY [GPU_MEMORY ...]` |  Maximum GPU memory in GiB to be allocated per GPU. Example: `--gpu-memory 10` for a single GPU, `--gpu-memory 10 5` for two GPUs. You can also set values in MiB like `--gpu-memory 3500MiB`. |
-| `--cpu-memory CPU_MEMORY` | Maximum CPU memory in GiB to allocate for offloaded weights. Must be an integer number. Defaults to 99.|
+| `--cpu-memory CPU_MEMORY` | Maximum CPU memory in GiB to allocate for offloaded weights. Uses the same syntax as `--gpu-memory` above.|
 | `--disk`         | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. |
 | `--disk-cache-dir DISK_CACHE_DIR` | Directory to save the disk cache to. Defaults to `cache/`. |
 | `--load-in-8bit` | Load the model with 8-bit precision.|
 | `--bf16`         | Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU. |
 | `--no-cache`     | Set `use_cache` to False while generating text. This reduces the VRAM usage a bit with a performance cost. |
+
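+For example, splitting a large model across two GPUs with CPU and disk offload could be expressed as follows (a sketch; the memory values are arbitrary and the `server.py` entry point is assumed):
+
+```
+python server.py --auto-devices --gpu-memory 10 5 --cpu-memory 20 --disk
+```
+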
+#### llama.cpp
+
+| Flag             | Description |
+|------------------|-------------|
 | `--threads`     | Number of threads to use in llama.cpp. |
+
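+For instance, to run llama.cpp generation on 8 threads (8 is an arbitrary example value):
+
+```
+python server.py --threads 8
+```
+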
+#### GPTQ
+
+| Flag             | Description |
+|------------------|-------------|
 | `--wbits WBITS`            | GPTQ: Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported. |
 | `--model_type MODEL_TYPE`  | GPTQ: Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
 | `--groupsize GROUPSIZE`    | GPTQ: Group size. |
 | `--pre_layer PRE_LAYER`    | GPTQ: The number of layers to preload. |
+
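+Loading a 4-bit pre-quantized LLaMA model might then look like this (a sketch; the group size of 128 and the lowercase `llama` spelling are assumed example values):
+
+```
+python server.py --wbits 4 --model_type llama --groupsize 128
+```
+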
+#### FlexGen
+
+| Flag             | Description |
+|------------------|-------------|
 | `--flexgen`      | Enable the use of FlexGen offloading. |
 |  `--percent PERCENT [PERCENT ...]` |  FlexGen: allocation percentages. Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0). |
 |  `--compress-weight` |  FlexGen: Whether to compress weight (default: False).|
 |  `--pin-weight [PIN_WEIGHT]` | FlexGen: Whether to pin weights (setting this to False reduces CPU memory by 20%). |
+
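+For example, enabling FlexGen with the default allocation percentages written out explicitly:
+
+```
+python server.py --flexgen --percent 0 100 100 0 100 0
+```
+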
+#### DeepSpeed
+
+| Flag             | Description |
+|------------------|-------------|
 | `--deepspeed`    | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. |
 | `--nvme-offload-dir NVME_OFFLOAD_DIR` | DeepSpeed: Directory to use for ZeRO-3 NVME offloading. |
 | `--local_rank LOCAL_RANK` | DeepSpeed: Optional argument for distributed setups. |
+
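+DeepSpeed runs go through its own launcher rather than plain `python`; a single-GPU sketch (the `deepspeed` launcher and its `--num_gpus` option come from DeepSpeed itself, not this project):
+
+```
+deepspeed --num_gpus=1 server.py --deepspeed
+```
+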
+#### RWKV
+
+| Flag             | Description |
+|------------------|-------------|
 |  `--rwkv-strategy RWKV_STRATEGY` |    RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". |
 |  `--rwkv-cuda-on` |   RWKV: Compile the CUDA kernel for better performance. |
+
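+For instance, to load an RWKV model on the GPU in fp16 with the CUDA kernel compiled (strategy strings are taken from the examples above):
+
+```
+python server.py --rwkv-strategy "cuda fp16" --rwkv-cuda-on
+```
+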
+#### Gradio
+
+| Flag             | Description |
+|------------------|-------------|
 |  `--listen`                                 | Make the web UI reachable from your local network. |
 |  `--listen-port LISTEN_PORT`                | The listening port that the server will use. |
 |  `--share`                                  | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
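+
+For example, to make the UI reachable from other machines on your network on a fixed port (7861 is an arbitrary choice; `server.py` is assumed as the entry point):
+
+```
+python server.py --listen --listen-port 7861
+```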