llama_cpp_for_radxa_dragon_.../scripts
build-info.cmake           cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970)   2023-11-27 21:25:42 +02:00
build-info.sh
convert-gg.sh
gen-build-info-cpp.cmake   cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970)   2023-11-27 21:25:42 +02:00
get-flags.mk               build : detect host compiler and cuda compiler separately (#4414)                      2023-12-13 12:10:10 -05:00
get-wikitext-2.sh
LlamaConfig.cmake.in
qnt-all.sh
run-all-perf.sh
run-all-ppl.sh
server-llm.sh
sync-ggml.sh               sync : ggml (new ops, tests, backend, etc.) (#4359)                                    2023-12-07 22:26:54 +02:00
verify-checksum-models.py