llama_cpp_for_radxa_dragon_.../tests
Latest commit: dfcd53f7ec by Georgi Gerganov — metal : fuse NORM + MUL + ADD, support non-multiples of 4 (#16220) — 2025-09-25 11:30:16 +03:00
.gitignore gitignore : Ignore vim swap files in tests (#15901) 2025-09-10 14:28:47 +03:00
CMakeLists.txt ggml : split graph allocations according to backend max buffer size (#15815) 2025-09-24 16:17:49 +02:00
get-model.cpp
get-model.h
run-json-schema-to-grammar.mjs
test-alloc.cpp ggml : split graph allocations according to backend max buffer size (#15815) 2025-09-24 16:17:49 +02:00
test-arg-parser.cpp
test-autorelease.cpp
test-backend-ops.cpp metal : fuse NORM + MUL + ADD, support non-multiples of 4 (#16220) 2025-09-25 11:30:16 +03:00
test-barrier.cpp
test-c.c
test-chat-parser.cpp chat : Deepseek V3.1 reasoning and tool calling support (OpenAI Style) (#15533) 2025-09-08 16:59:48 +02:00
test-chat-template.cpp model : add support for Seed-OSS (#15490) 2025-08-23 15:21:52 +02:00
test-chat.cpp chat: Fix streaming parser for granite models (#15682) 2025-09-19 09:57:30 -06:00
test-double-float.cpp
test-gbnf-validator.cpp
test-gguf.cpp
test-grammar-integration.cpp
test-grammar-llguidance.cpp
test-grammar-parser.cpp
test-json-partial.cpp
test-json-schema-to-grammar.cpp json : support enum values within allOf (#15830) 2025-09-08 16:14:32 -05:00
test-llama-grammar.cpp
test-log.cpp
test-lora-conversion-inference.sh
test-model-load-cancel.cpp
test-mtmd-c-api.c
test-opt.cpp tests : fix test-opt with GGML_BACKEND_DL (#15599) 2025-08-26 22:14:38 +02:00
test-quantize-fns.cpp
test-quantize-perf.cpp ci: run the x64 and arm ci on the github machines instead (#16183) 2025-09-25 08:06:06 +03:00
test-quantize-stats.cpp
test-regex-partial.cpp
test-rope.cpp
test-sampling.cpp sampling : optimize samplers by reusing bucket sort (#15665) 2025-08-31 20:41:02 +03:00
test-thread-safety.cpp
test-tokenizer-0.cpp
test-tokenizer-0.py
test-tokenizer-0.sh
test-tokenizer-1-bpe.cpp
test-tokenizer-1-spm.cpp
test-tokenizer-random.py requirements : update transformers/torch for Embedding Gemma (#15828) 2025-09-09 06:06:52 +02:00
test-tokenizers-repo.sh