Change history

Author SHA1 Message Date
  oobabooga e61138bdad Minor fixes 3 years ago
  oobabooga 2181fca709 Better defaults for chat 3 years ago
  oobabooga 83808171d3 Add --share option for Colab 3 years ago
  oobabooga 8d788874d7 Add support for characters 3 years ago
  oobabooga 3121f4788e Fix uploading chat log in --chat mode 3 years ago
  oobabooga 849e4c7f90 Better way of finding the generated reply in the output string 3 years ago
  oobabooga d03b0ad7a8 Implement saving/loading chat logs (#9) 3 years ago
  oobabooga 39bfea5a22 Add a progress bar 3 years ago
  oobabooga 5390fc87c8 add auto-devices when disk is used 3 years ago
  oobabooga 759da435e3 Release 8-bit models memory 3 years ago
  oobabooga 7ace04864a Implement sending layers to disk with --disk (#10) 3 years ago
  oobabooga 93fa9bbe01 Clean up the streaming implementation 3 years ago
  oobabooga c90310e40e Small simplification 3 years ago
  oobabooga 99536ef5bf Add no-stream option 3 years ago
  oobabooga 116299b3ad Manual eos_token implementation 3 years ago
  oobabooga 3cb30bed0a Add a "stop" button 3 years ago
  oobabooga 8f27d33034 Fix another bug 3 years ago
  oobabooga 6c7f187586 Minor change 3 years ago
  oobabooga b3cba0b330 Bug 3 years ago
  oobabooga df2e910421 Stop generating in chat mode when \nYou: is generated 3 years ago
  oobabooga 022960a087 This is the correct way of sampling 1 token at a time 3 years ago
  oobabooga ca13acdfa0 Ensure that the chat prompt will always contain < 2048 tokens 3 years ago
  oobabooga 6456777b09 Clean things up 3 years ago
  oobabooga 3a99b2b030 Change a truncation parameter 3 years ago
  oobabooga 54bf55372b Truncate prompts to 2048 characters 3 years ago
  oobabooga c7a2818665 Grammar 3 years ago
  oobabooga d973897021 Typo 3 years ago
  oobabooga 47a20638de Don't need this 3 years ago
  oobabooga b55486fa00 Reorganize things 3 years ago