llama_cpp_for_radxa_dragon_.../scripts
Djip007 2cd43f4900
ggml : better performance with llamafile tinyblas on x86_64 (#10714)
* better performance with llamafile tinyblas on x86_64.

- add bf16 support
- change the dispatch strategy (thanks:
https://github.com/ikawrakow/ik_llama.cpp/pull/71)
- reduce memory bandwidth

simpler tinyblas dispatch that is more cache friendly (illustrative sketches of the bf16 conversion, the runtime dispatch, and the M-blocking idea follow after this commit message)

* tinyblas dynamic dispatching

* sgemm: add M blocks.

* - git 2.47 uses short commit ids of length 9.
- --show-progress is not part of GNU Wget2

* remove unstable test
2024-12-24 18:54:49 +01:00
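For context on the "add bf16 support" bullet: bf16 (bfloat16) keeps the sign bit, all 8 exponent bits, and the top 7 mantissa bits of an IEEE-754 float, so converting from fp32 amounts to keeping the high 16 bits of the bit pattern. A minimal sketch, assuming round-to-nearest-even and skipping NaN handling; the helper names are hypothetical, not ggml's own:

```cpp
#include <cstdint>
#include <cstring>

// fp32 -> bf16: round to nearest even, then keep the high 16 bits.
// NaN inputs would need a special case (the rounding add can carry a
// NaN into Inf); omitted to keep the sketch short.
static inline uint16_t fp32_to_bf16(float f) {
    uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    u += 0x7FFFu + ((u >> 16) & 1u);   // round to nearest, ties to even
    return (uint16_t)(u >> 16);
}

// bf16 -> fp32: pad the dropped mantissa bits with zeros; this direction
// is exact, no rounding needed.
static inline float bf16_to_fp32(uint16_t h) {
    uint32_t u = (uint32_t)h << 16;
    float f;
    std::memcpy(&f, &u, sizeof f);
    return f;
}
```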
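The "tinyblas dynamic dispatching" bullet refers to choosing a kernel at runtime from the CPU's feature flags instead of at compile time. A minimal sketch of that pattern on x86_64, assuming GCC/Clang's `__builtin_cpu_supports`; the kernel names and their empty bodies are placeholders, not tinyblas's actual entry points:

```cpp
#include <cstddef>

using sgemm_fn = void (*)(std::size_t M, std::size_t N, std::size_t K,
                          const float *A, const float *B, float *C);

// Hypothetical stand-ins for SIMD-specialized kernels; real code would
// give each a body built for the matching instruction set.
static void sgemm_avx512(std::size_t, std::size_t, std::size_t,
                         const float *, const float *, float *) { /* AVX-512 body */ }
static void sgemm_avx2(std::size_t, std::size_t, std::size_t,
                       const float *, const float *, float *) { /* AVX2 body */ }
static void sgemm_scalar(std::size_t, std::size_t, std::size_t,
                         const float *, const float *, float *) { /* portable body */ }

// Probe the CPU once and pick the widest variant it supports.
// __builtin_cpu_supports is a GCC/Clang builtin on x86.
static sgemm_fn select_sgemm() {
    if (__builtin_cpu_supports("avx512f")) return sgemm_avx512;
    if (__builtin_cpu_supports("avx2"))    return sgemm_avx2;
    return sgemm_scalar;
}

// Resolved once at startup; every later call is a plain indirect call.
static const sgemm_fn sgemm = select_sgemm();
```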
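The "sgemm: add M blocks" bullet is about tiling the output rows. A minimal sketch, not the actual tinyblas kernel: processing BM rows of A per block lets each loaded element of B be reused BM times, cutting memory traffic on B by roughly that factor, which is where "reduce memory bandwidth" and "more cache friendly" come from. BM here is a made-up tile size:

```cpp
#include <cstddef>

constexpr std::size_t BM = 8; // hypothetical tile size: rows of A/C per block

// Row-major sgemm C[MxN] = A[MxK] * B[KxN] with M-blocking only.
void sgemm_m_blocked(std::size_t M, std::size_t N, std::size_t K,
                     const float *A, const float *B, float *C) {
    for (std::size_t m0 = 0; m0 < M; m0 += BM) {
        const std::size_t m1 = (m0 + BM < M) ? m0 + BM : M; // ragged last block
        for (std::size_t n = 0; n < N; ++n) {
            float acc[BM] = {0.0f};
            for (std::size_t k = 0; k < K; ++k) {
                const float b = B[k*N + n]; // loaded once, reused for all BM rows
                for (std::size_t m = m0; m < m1; ++m) {
                    acc[m - m0] += A[m*K + k] * b;
                }
            }
            for (std::size_t m = m0; m < m1; ++m) {
                C[m*N + n] = acc[m - m0];
            }
        }
    }
}
```

Real kernels combine this with N/K tiling and SIMD micro-kernels; the sketch isolates just the row-blocking dimension the commit adds.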
build-info.sh
check-requirements.sh
ci-run.sh
compare-commits.sh scripts : change build path to "build-bench" for compare-commits.sh (#10836) 2024-12-15 18:44:47 +02:00
compare-llama-bench.py ggml : better performance with llamafile tinyblas on x86_64 (#10714) 2024-12-24 18:54:49 +01:00
debug-test.sh
gen-authors.sh
gen-unicode-data.py
get-flags.mk
get-hellaswag.sh
get-pg.sh
get-wikitext-2.sh
get-wikitext-103.sh
get-winogrande.sh
hf.sh ggml : better performance with llamafile tinyblas on x86_64 (#10714) 2024-12-24 18:54:49 +01:00
install-oneapi.bat
qnt-all.sh
run-all-perf.sh
run-all-ppl.sh
sync-ggml-am.sh scripts : remove amx sync 2024-12-03 20:04:49 +02:00
sync-ggml.last sync : ggml 2024-12-17 18:36:02 +02:00
sync-ggml.sh scripts : remove amx sync 2024-12-03 20:04:49 +02:00
verify-checksum-models.py
xxd.cmake