llama_cpp_for_radxa_dragon_.../tests
Akarshan Biswas 6e84b0ab8e
SYCL : SOFTMAX F16 mask support and other fixes (#11261)
Implemented F16 src1 (mask) support in ggml_sycl_op_soft_max(), for which a pragma deprecation warning was added in #5021.
To do this, the op had to be decoupled from ggml_sycl_op_flatten(), which always assumed src1 to be of fp32 type (many op functions depend on that assumption).
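The idea can be illustrated with a minimal CPU sketch (not the actual SYCL kernel): a softmax over one row where the optional mask may be stored in F16 and is upcast to F32 before being added to the logits. The helper `half_to_float` and the function `soft_max_f16_mask` are hypothetical names for illustration only.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical helper: decode an IEEE-754 binary16 value into a float.
static float half_to_float(uint16_t h) {
    uint32_t sign = (h >> 15) & 1, exp = (h >> 10) & 0x1F, mant = h & 0x3FF;
    float f;
    if (exp == 0)       f = std::ldexp((float) mant, -24);             // subnormal
    else if (exp == 31) f = mant ? NAN : INFINITY;                     // inf / nan
    else                f = std::ldexp((float) (mant | 0x400), (int) exp - 25);
    return sign ? -f : f;
}

// Softmax of `logits` with an optional additive mask stored in F16.
// The mask is upcast to F32 before use, mirroring what the commit
// enables for src1 instead of requiring an fp32 mask tensor.
static std::vector<float> soft_max_f16_mask(const std::vector<float> & logits,
                                            const uint16_t * mask /* may be null */) {
    std::vector<float> x(logits);
    for (size_t i = 0; i < x.size(); ++i) {
        if (mask) x[i] += half_to_float(mask[i]);
    }
    float mx = -INFINITY;
    for (float v : x) mx = std::max(mx, v);   // subtract max for stability
    float sum = 0.f;
    for (float & v : x) { v = std::exp(v - mx); sum += v; }
    for (float & v : x) v /= sum;
    return x;
}
```

A mask entry of F16 negative infinity (0xFC00) zeroes out the corresponding position, which is how attention masking typically uses this op.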

* SYCL: SOFTMAX F16 mask support and other fixes

* test-backend-ops: Add F16 mask test cases
2025-01-28 09:56:58 +00:00
.gitignore
CMakeLists.txt cmake : add sanitizer flags for llama.cpp (#11279) 2025-01-18 16:18:15 +02:00
get-model.cpp
get-model.h
run-json-schema-to-grammar.mjs
test-arg-parser.cpp
test-autorelease.cpp
test-backend-ops.cpp SYCL : SOFTMAX F16 mask support and other fixes (#11261) 2025-01-28 09:56:58 +00:00
test-barrier.cpp
test-c.c
test-chat-template.cpp Add Jinja template support (#11016) 2025-01-21 13:18:51 +00:00
test-double-float.cpp
test-gguf.cpp cmake : add sanitizer flags for llama.cpp (#11279) 2025-01-18 16:18:15 +02:00
test-grammar-integration.cpp
test-grammar-parser.cpp
test-json-schema-to-grammar.cpp
test-llama-grammar.cpp
test-log.cpp
test-lora-conversion-inference.sh ci : use -no-cnv in gguf-split tests (#11254) 2025-01-15 18:28:35 +02:00
test-model-load-cancel.cpp
test-opt.cpp
test-quantize-fns.cpp
test-quantize-perf.cpp
test-rope.cpp
test-sampling.cpp cmake : add sanitizer flags for llama.cpp (#11279) 2025-01-18 16:18:15 +02:00
test-tokenizer-0.cpp
test-tokenizer-0.py
test-tokenizer-0.sh
test-tokenizer-1-bpe.cpp
test-tokenizer-1-spm.cpp
test-tokenizer-random.py