llama_cpp_for_radxa_dragon_.../tools
| Name | Last commit message | Date |
|------|---------------------|------|
| batched-bench | batched-bench : fix llama_synchronize usage during prompt processing (#15835) | 2025-09-08 10:27:07 +03:00 |
| cvector-generator | | |
| export-lora | mtmd : fix 32-bit narrowing issue in export-lora and mtmd clip (#14503) | 2025-07-25 13:08:04 +02:00 |
| gguf-split | | |
| imatrix | imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076) | 2025-08-04 23:26:52 +02:00 |
| llama-bench | ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (#15797) | 2025-09-11 22:47:38 +02:00 |
| main | cli : change log to warning to explain reason for stopping (#15604) | 2025-08-28 10:48:20 +03:00 |
| mtmd | requirements : update transformers/torch for Embedding Gemma (#15828) | 2025-09-09 06:06:52 +02:00 |
| perplexity | perplexity: give more information about constraints on failure (#15303) | 2025-08-14 09:16:32 +03:00 |
| quantize | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| rpc | | |
| run | | |
| server | server : adjust prompt similarity thold + add logs (#15913) | 2025-09-12 17:02:55 +03:00 |
| tokenize | | |
| tts | sampling : optimize samplers by reusing bucket sort (#15665) | 2025-08-31 20:41:02 +03:00 |
| CMakeLists.txt | | |