`llama_cpp_for_radxa_dragon_.../tools` — last updated 2025-12-23 21:49:05 +01:00
| Name | Last commit | Date |
|------|-------------|------|
| batched-bench | tool/ex/tests: consistently free ctx, then model (#18168) | 2025-12-22 11:00:37 +01:00 |
| cli | gen-docs: automatically update markdown file (#18294) | 2025-12-22 19:30:19 +01:00 |
| completion | gen-docs: automatically update markdown file (#18294) | 2025-12-22 19:30:19 +01:00 |
| cvector-generator | common : refactor common_sampler + grammar logic changes (#17937) | 2025-12-14 10:11:13 +02:00 |
| export-lora | | |
| fit-params | llama-fit-params: QoL impr. for prints/errors (#18089) | 2025-12-17 00:03:19 +01:00 |
| gguf-split | | |
| imatrix | common : refactor common_sampler + grammar logic changes (#17937) | 2025-12-14 10:11:13 +02:00 |
| llama-bench | tool/ex/tests: consistently free ctx, then model (#18168) | 2025-12-22 11:00:37 +01:00 |
| mtmd | model : add ASR support for LFM2-Audio-1.5B (conformer) (#18106) | 2025-12-19 00:18:01 +01:00 |
| perplexity | common : refactor common_sampler + grammar logic changes (#17937) | 2025-12-14 10:11:13 +02:00 |
| quantize | | |
| rpc | | |
| run | | |
| server | server: return_progress to also report 0% processing state (#18305) | 2025-12-23 21:49:05 +01:00 |
| tokenize | | |
| tts | common : refactor common_sampler + grammar logic changes (#17937) | 2025-12-14 10:11:13 +02:00 |
| CMakeLists.txt | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |