# sillytui

A frontend for chatting/RP with LLMs, designed to be a TUI version of SillyTavern. Still in development.
## Requirements

- CMake
- ncurses
- curl
- GCC 13+ or Clang 16+ (for C++23 support)
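Since the build needs C++23, it can save a failed compile to check your compiler version up front. A quick sketch, assuming `g++` (for Clang, substitute `clang++` and a minimum of 16):

```sh
# Extract the major version from `g++ -dumpversion` and compare it
# against the minimum required for C++23 support (GCC 13).
cxx_major() { printf '%s\n' "$1" | cut -d. -f1; }

ver=$(g++ -dumpversion 2>/dev/null || echo 0)
if [ "$(cxx_major "$ver")" -ge 13 ]; then
    echo "g++ $ver is new enough"
else
    echo "g++ $ver is too old; install GCC 13+ first"
fi
```

`-dumpversion` prints just the major version on recent GCC releases and a full `x.y.z` on older ones; `cut -d. -f1` handles both.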
## Installation

### macOS

```sh
brew install cmake ncurses curl
git clone https://github.com/AlpinDale/sillytui.git && cd sillytui
make run
```

TODO
### Ubuntu / Debian

```sh
sudo apt update && sudo apt install -y build-essential cmake libncursesw5-dev libcurl4-openssl-dev
git clone https://github.com/AlpinDale/sillytui.git && cd sillytui
make run
```

If your release ships a GCC older than 13, install it first (Ubuntu):

```sh
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install gcc-13 g++-13
```

or run the bundled helper script:

```sh
./tools/setup_gcc13.sh
```

### Arch Linux

```sh
sudo pacman -S base-devel cmake ncurses curl git
git clone https://github.com/AlpinDale/sillytui.git && cd sillytui
make run
```

### Fedora

```sh
sudo dnf install cmake ncurses-devel libcurl-devel git
git clone https://github.com/AlpinDale/sillytui.git && cd sillytui
make run
```

### openSUSE

```sh
sudo zypper install cmake ncurses-devel libcurl-devel git
git clone https://github.com/AlpinDale/sillytui.git && cd sillytui
make run
```

### Gentoo

```sh
sudo emerge -av dev-vcs/git dev-build/cmake sys-libs/ncurses net-misc/curl
git clone https://github.com/AlpinDale/sillytui.git && cd sillytui
make run
```

### Alpine

```sh
apk add cmake ncurses-dev curl-dev git
git clone https://github.com/AlpinDale/sillytui.git && cd sillytui
make run
```

### FreeBSD

Untested, but:

```sh
pkg install cmake ncurses curl git
git clone https://github.com/AlpinDale/sillytui.git && cd sillytui
make run
```

### Termux

```sh
pkg install build-essential cmake ncurses libcurl git
git clone https://github.com/AlpinDale/sillytui.git && cd sillytui
make run
```

Once running, see `/help` for the available commands.
## Tokenization

We ship a self-contained tokenization library that supports the following tokenizer formats:

- tiktoken
- gpt2bpe
- sentencepiece
You can test it like this:

```sh
make example ARGS="--list"  # get a list of available tokenizers
```

Output:

```
Available tokenizers:
  openai        OpenAI cl100k (GPT-4, GPT-3.5)
  openai-o200k  OpenAI o200k (GPT-4o)
  qwen3         Qwen 3 (151k vocab)
  llama3        Llama 3 / 3.1 (128k vocab)
  glm4          GLM-4.5 (151k vocab)
  deepseek      DeepSeek R1 (128k vocab)
```
```sh
make example ARGS="-t deepseek 'Hello, world!'"
```

Output:

```
Tokenizer: deepseek (DeepSeek R1 (128k vocab))
Text: "Hello, world!"
Token count: 4
Tokens: [19923, 14, 2058, 3]
Decoded tokens:
  [0] 19923 -> "Hello"
  [1] 14    -> ","
  [2] 2058  -> "\xc4\xa0world"
  [3] 3     -> "!"
```
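The `\xc4\xa0` prefix on `world` in the output above is not garbage: it is the UTF-8 encoding of `Ġ` (U+0120), the character that byte-level BPE vocabularies (the gpt2bpe family) use to mark a token beginning with a space. A small sketch of mapping it back to a space when post-processing decoded tokens:

```sh
# "\xc4\xa0" (octal \304\240) is the UTF-8 encoding of "Ġ" (U+0120),
# the byte-level BPE marker for a leading space; replace it to
# recover the original text.
marker="$(printf '\304\240')"
printf '%sworld\n' "$marker" | awk -v m="$marker" '{ sub(m, " "); print }'
# -> " world"
```

Octal escapes are used here because `printf '\304\240'` is portable across shells, while `\xHH` escapes are not guaranteed by POSIX.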