Experimenting with LLMs

Using the llama.cpp inference runtime to run smaller GGUF transformer models on old, dusty hardware.


Context

Install llama.cpp

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

Build without GPU

mkdir build
cd build
cmake .. -DLLAMA_CUBLAS=OFF
cmake --build . --config Release

Get a model that works with low resources
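A small 4-bit quantized model keeps both disk and RAM usage low. One way to fetch the TinyLlama GGUF used below — the Hugging Face repo and exact filename are assumptions, so verify them on the model page first:

```shell
# Assumed repo and filename -- check the Hugging Face model page before running
REPO="TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF"
FILE="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
URL="https://huggingface.co/${REPO}/resolve/main/${FILE}"

mkdir -p models
# -c resumes an interrupted download; the Q4_K_M file is well under 1 GB
wget -c -O "models/${FILE}" "${URL}"
```

The Q4_K_M quantization is a common middle ground: noticeably smaller than 8-bit while degrading output less than the more aggressive 2- and 3-bit variants.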

Start the interactive CLI

# -m: model file, -c: context size in tokens, -n: max tokens to generate, -t: CPU threads
./build/bin/llama-cli \
  -m models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf \
  -c 1024 \
  -n 256 \
  -t 2
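Instead of the interactive CLI, the same build also produces llama-server, which exposes an OpenAI-compatible HTTP API. A sketch, assuming the same model file and the default port:

```shell
# Assumes the build and model download from above
MODEL="models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"

# Serve the model over HTTP; flags mirror llama-cli
./build/bin/llama-server -m "$MODEL" -c 1024 -t 2 --port 8080 &

# Once the server logs that it is ready, query the chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Say hello"}],"max_tokens":32}'
```

This is handy on a headless old box: the model loads once and any machine on the network can send requests.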


Prompt & Output

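A session like the one shown can be reproduced non-interactively with the -p flag, which supplies a single prompt and exits after generation:

```shell
# One-shot generation: -p supplies the prompt, so no interactive session starts
MODEL="models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
./build/bin/llama-cli \
  -m "$MODEL" \
  -c 1024 -n 128 -t 2 \
  -p "Explain what a GGUF file is in one sentence."
```

This makes runs scriptable and repeatable, which is useful when comparing quantization levels or thread counts.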

The pipeline works end to end, but the model's example answers are frequently wrong — not surprising for a 1.1B-parameter model at 4-bit quantization, so treat its output as a demo rather than a reference.