llama_cpp_for_radxa_dragon_.../tools
Last updated: 2026-02-12 19:55:51 +01:00
| Name | Last commit | Date |
|------|-------------|------|
| batched-bench | | |
| cli | common : use two decimal places for float arg help messages (#19048) | 2026-01-25 07:31:42 +01:00 |
| completion | completion : simplify batch (embd) processing (#19286) | 2026-02-04 05:43:28 +01:00 |
| cvector-generator | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00 |
| export-lora | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00 |
| fit-params | llama-fit-params: keep explicit --ctx-size 0 (#19070) | 2026-01-24 22:13:08 +01:00 |
| gguf-split | | |
| imatrix | | |
| llama-bench | | |
| mtmd | model: Add Kimi-K2.5 support (#19170) | 2026-02-11 16:47:30 +01:00 |
| perplexity | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00 |
| quantize | llama-quantize : cleanup --help output (#19317) | 2026-02-08 09:22:38 +02:00 |
| rpc | rpc : update from common.cpp (#19400) | 2026-02-08 09:06:45 +01:00 |
| server | webui: Add switcher to Chat Message UI to show raw LLM output (#19571) | 2026-02-12 19:55:51 +01:00 |
| tokenize | | |
| tts | model : fix wavtokenizer embedding notions (#19479) | 2026-02-11 07:52:20 +02:00 |
| CMakeLists.txt | | |