llama_cpp_for_radxa_dragon_.../scripts
Latest commit 81ab64f3c8 (Aman Gupta, 2026-01-24 14:25:20 +08:00):
ggml-cuda: enable cuda-graphs for n-cpu-moe (#18934)
* ggml-cuda: add split-wise cuda graph
* add n-cpu-moe compare_llama_bench.py
* fix hip/musa builds
apple/
jinja/
snapdragon/ hexagon: support for OP_CPY, host buffers now optional, hvx-utils refactoring and optimizations (#18822) 2026-01-14 21:46:12 -08:00
bench-models.sh
build-info.sh
check-requirements.sh
compare-commits.sh
compare-llama-bench.py ggml-cuda: enable cuda-graphs for n-cpu-moe (#18934) 2026-01-24 14:25:20 +08:00
compare-logprobs.py
create_ops_docs.py
debug-test.sh refactor : remove libcurl, use OpenSSL when available (#18828) 2026-01-14 18:02:47 +01:00
fetch_server_test_models.py
gen-authors.sh
gen-unicode-data.py
get-flags.mk
get-hellaswag.sh
get-pg.sh
get-wikitext-2.sh
get-wikitext-103.sh
get-winogrande.sh
get_chat_template.py
hf.sh
install-oneapi.bat
pr2wt.sh
serve-static.js refactor : remove libcurl, use OpenSSL when available (#18828) 2026-01-14 18:02:47 +01:00
server-bench.py
sync-ggml-am.sh
sync-ggml.last
sync-ggml.sh
sync_vendor.py common : implement new jinja template engine (#18462) 2026-01-16 11:22:06 +01:00
tool_bench.py refactor : remove libcurl, use OpenSSL when available (#18828) 2026-01-14 18:02:47 +01:00
tool_bench.sh
verify-checksum-models.py
xxd.cmake