llama_cpp_for_radxa_dragon_.../tools

Latest commit: Georgi Gerganov 7956bb4d7f (2025-11-07 21:23:11 +02:00)
bench : cache the llama_context state at computed depth (#16944)
* bench : cache llama_context state at depth
* cont : handle failures to restore the old state
* cont : print information when the state is being reused
batched-bench       scripts : add script to bench models (#16894)                      2025-11-02 00:15:31 +02:00
cvector-generator
export-lora
gguf-split
imatrix
llama-bench         bench : cache the llama_context state at computed depth (#16944)   2025-11-07 21:23:11 +02:00
main
mtmd                hparams : add n_embd_inp() to support extended embed (#16928)      2025-11-07 19:27:58 +01:00
perplexity
quantize
rpc
run
server              kv-cache : pad the cache size to 256 for performance (#17046)      2025-11-07 20:03:25 +02:00
tokenize
tts
CMakeLists.txt