llama_cpp_for_radxa_dragon_.../scripts

Latest commit: 8e649571cd "vendor : update cpp-httplib to 0.30.1 (#18771)" by Adrien Gallouët <angt@huggingface.co>, 2026-01-12 15:58:52 +01:00
| Name | Last commit | Date |
|---|---|---|
| apple/ | | |
| jinja/ | scripts : add Jinja tester PySide6 simple app (#15756) | 2025-09-05 01:05:12 +02:00 |
| snapdragon/ | Hexagon add support for f16/f32 flash attention, scale, set-rows and improve f16/32 matmul (#18611) | 2026-01-06 17:38:29 -08:00 |
| bench-models.sh | scripts : add script to bench models (#16894) | 2025-11-02 00:15:31 +02:00 |
| build-info.sh | | |
| check-requirements.sh | | |
| compare-commits.sh | scripts: add sqlite3 check for compare-commits.sh (#15633) | 2025-08-28 19:23:22 +08:00 |
| compare-llama-bench.py | scripts: strip "AMD Instinct" from GPU name (#15668) | 2025-08-29 22:04:08 +02:00 |
| compare-logprobs.py | scripts: add script to compare logprobs of llama.cpp against other frameworks (#17947) | 2025-12-13 22:33:29 +01:00 |
| create_ops_docs.py | | |
| debug-test.sh | | |
| fetch_server_test_models.py | | |
| gen-authors.sh | | |
| gen-unicode-data.py | | |
| get-flags.mk | | |
| get-hellaswag.sh | | |
| get-pg.sh | | |
| get-wikitext-2.sh | | |
| get-wikitext-103.sh | | |
| get-winogrande.sh | | |
| get_chat_template.py | | |
| hf.sh | | |
| install-oneapi.bat | | |
| pr2wt.sh | scripts : follow api redirects in pr2wt.sh (#18739) | 2026-01-10 16:04:05 +01:00 |
| serve-static.js | ggml webgpu: add support for emscripten builds (#17184) | 2025-12-03 10:25:34 +01:00 |
| server-bench.py | llama: use FA + max. GPU layers by default (#15434) | 2025-08-30 16:32:10 +02:00 |
| sync-ggml-am.sh | | |
| sync-ggml.last | sync : ggml | 2025-12-31 18:54:43 +02:00 |
| sync-ggml.sh | | |
| sync_vendor.py | vendor : update cpp-httplib to 0.30.1 (#18771) | 2026-01-12 15:58:52 +01:00 |
| tool_bench.py | server : speed up tests (#15836) | 2025-09-06 14:45:24 +02:00 |
| tool_bench.sh | | |
| verify-checksum-models.py | | |
| xxd.cmake | | |