llama_cpp_for_radxa_dragon_.../tools
Last updated: 2025-11-14 01:19:08 +01:00
Name                Last commit                                                        Date
batched-bench       batched-bench : add "separate text gen" mode (#17103)              2025-11-10 12:59:29 +02:00
cvector-generator
export-lora
gguf-split
imatrix             Manually link -lbsd to resolve flock symbol on AIX (#16610)        2025-10-23 19:37:31 +08:00
llama-bench         bench : cache the llama_context state at computed depth (#16944)   2025-11-07 21:23:11 +02:00
main                memory: Hybrid context shift (#17009)                              2025-11-10 17:14:23 +02:00
mtmd                cmake : add version to all shared object files (#17091)            2025-11-11 13:19:50 +02:00
perplexity
quantize
rpc                 Install rpc-server when GGML_RPC is ON. (#17149)                   2025-11-11 10:53:59 +00:00
run                 Manually link -lbsd to resolve flock symbol on AIX (#16610)        2025-10-23 19:37:31 +08:00
server              Better UX for handling multiple attachments in WebUI (#17246)      2025-11-14 01:19:08 +01:00
tokenize
tts
CMakeLists.txt