llama_cpp_for_radxa_dragon_.../tools
Latest commit: e509411cf1 — server: enable jinja by default, update docs (#17524)
Author: Xuan-Son Nguyen, 2025-11-27 01:02:50 +01:00
Commit message body:
  * server: enable jinja by default, update docs
  * fix tests
Name               Last commit message                                                Date
-----------------  -----------------------------------------------------------------  --------------------------
batched-bench      batched-bench : add "separate text gen" mode (#17103)              2025-11-10 12:59:29 +02:00
cvector-generator
export-lora
gguf-split
imatrix
llama-bench        bench : cache the llama_context state at computed depth (#16944)   2025-11-07 21:23:11 +02:00
main               common : more accurate sampling timing (#17382)                     2025-11-20 13:40:10 +02:00
mtmd               clip: (minicpmv) fix resampler kq_scale (#17516)                    2025-11-26 21:44:07 +01:00
perplexity
quantize
rpc                Install rpc-server when GGML_RPC is ON. (#17149)                    2025-11-11 10:53:59 +00:00
run
server             server: enable jinja by default, update docs (#17524)               2025-11-27 01:02:50 +01:00
tokenize
tts
CMakeLists.txt