llama_cpp_for_radxa_dragon_.../scripts

Latest commit: 81ab64f3c8 (Aman Gupta, 2026-01-24 14:25:20 +08:00)
ggml-cuda: enable cuda-graphs for n-cpu-moe (#18934)
* ggml-cuda: add split-wise cuda graph
* add n-cpu-moe compare_llama_bench.py
* fix hip/musa builds
| Name | Last commit | Last commit date |
| --- | --- | --- |
| apple/ | | |
| jinja/ | scripts : add Jinja tester PySide6 simple app (#15756) | 2025-09-05 01:05:12 +02:00 |
| snapdragon/ | hexagon: support for OP_CPY, host buffers now optional, hvx-utils refactoring and optimizations (#18822) | 2026-01-14 21:46:12 -08:00 |
| bench-models.sh | scripts : add script to bench models (#16894) | 2025-11-02 00:15:31 +02:00 |
| build-info.sh | | |
| check-requirements.sh | | |
| compare-commits.sh | scripts: add sqlite3 check for compare-commits.sh (#15633) | 2025-08-28 19:23:22 +08:00 |
| compare-llama-bench.py | ggml-cuda: enable cuda-graphs for n-cpu-moe (#18934) | 2026-01-24 14:25:20 +08:00 |
| compare-logprobs.py | scripts: add script to compare logprobs of llama.cpp against other frameworks (#17947) | 2025-12-13 22:33:29 +01:00 |
| create_ops_docs.py | | |
| debug-test.sh | refactor : remove libcurl, use OpenSSL when available (#18828) | 2026-01-14 18:02:47 +01:00 |
| fetch_server_test_models.py | | |
| gen-authors.sh | | |
| gen-unicode-data.py | | |
| get-flags.mk | | |
| get-hellaswag.sh | | |
| get-pg.sh | | |
| get-wikitext-2.sh | | |
| get-wikitext-103.sh | | |
| get-winogrande.sh | | |
| get_chat_template.py | | |
| hf.sh | | |
| install-oneapi.bat | | |
| pr2wt.sh | scripts : follow api redirects in pr2wt.sh (#18739) | 2026-01-10 16:04:05 +01:00 |
| serve-static.js | refactor : remove libcurl, use OpenSSL when available (#18828) | 2026-01-14 18:02:47 +01:00 |
| server-bench.py | llama: use FA + max. GPU layers by default (#15434) | 2025-08-30 16:32:10 +02:00 |
| sync-ggml-am.sh | | |
| sync-ggml.last | sync : ggml | 2025-12-31 18:54:43 +02:00 |
| sync-ggml.sh | | |
| sync_vendor.py | common : implement new jinja template engine (#18462) | 2026-01-16 11:22:06 +01:00 |
| tool_bench.py | refactor : remove libcurl, use OpenSSL when available (#18828) | 2026-01-14 18:02:47 +01:00 |
| tool_bench.sh | | |
| verify-checksum-models.py | | |
| xxd.cmake | | |
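
The commit message for compare-commits.sh mentions a sqlite3 dependency check (#15633); the comparison scripts store llama-bench results in a SQLite database. The snippet below is only an illustrative sketch of that kind of guard, not the actual code from compare-commits.sh:

```shell
#!/bin/sh
# Illustrative sketch only: fail early with a clear message when the
# sqlite3 CLI is missing, before running any expensive benchmark steps.
if command -v sqlite3 >/dev/null 2>&1; then
    echo "sqlite3 found: $(command -v sqlite3)"
else
    echo "error: sqlite3 is required to store benchmark results" >&2
    # a real script would 'exit 1' here
fi
```

`command -v` is the POSIX-portable way to test for a binary on PATH, which is why dependency checks in shell scripts typically prefer it over `which`.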