llama_cpp_for_radxa_dragon_.../tests
Latest commit: f9cd68398b by Georgi Gerganov (2025-05-27 12:07:52 +03:00)
sampling : make sure samplers return at least 1 token (#13822)
  * sampling : min-p should always return at least one token
  * sampling : same for typical sampling
  * tests : sampling tests use min_keep == 0
  ggml-ci
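The top commit tightens the min-p sampler so it can never filter out every candidate, even when min_keep == 0. A minimal Python sketch of that guarantee (not the actual llama.cpp C++ implementation; the function name and shape here are illustrative):

```python
# Hedged sketch, not the llama.cpp code: min-p filtering that always
# keeps at least one candidate, mirroring the behavior fixed in #13822.
def min_p_filter(probs, p, min_keep=0):
    """Keep indices whose probability is >= p * max(probs),
    but never return an empty selection for a non-empty input,
    even when min_keep == 0."""
    if not probs:
        return []
    threshold = p * max(probs)
    kept = [i for i, q in enumerate(probs) if q >= threshold]
    # Guarantee at least one token: fall back to the highest-probability
    # candidates if the threshold (or min_keep) demands more than survived.
    if len(kept) < max(1, min_keep):
        order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
        kept = sorted(order[:max(1, min_keep)])
    return kept
```

The same "at least one survivor" rule is what the commit applies to typical sampling as well, which is why the tests were updated to exercise min_keep == 0.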
Name                                 Last commit                                                                 Date
.gitignore
CMakeLists.txt                       tests : improve UGM tokenizer test coverage (#13773)                        2025-05-25 16:22:29 +02:00
get-model.cpp
get-model.h
run-json-schema-to-grammar.mjs       llama : move end-user examples to tools directory (#13249)                  2025-05-02 20:27:13 +02:00
test-arg-parser.cpp                  tests : avoid github urls due to throttling (#13654)                        2025-05-20 12:03:17 +02:00
test-autorelease.cpp
test-backend-ops.cpp                 llama/ggml: add LLM training support (#10544)                               2025-05-12 14:44:49 +02:00
test-barrier.cpp
test-c.c
test-chat-parser.cpp                 server: streaming of tool calls and thoughts when --jinja is on (#12379)    2025-05-25 01:48:08 +01:00
test-chat-template.cpp               llama-chat : reset glmedge chat template (#13253)                           2025-05-02 11:06:09 +02:00
test-chat.cpp                        server: fix streaming crashes (#13786)                                      2025-05-26 16:03:57 +01:00
test-double-float.cpp
test-gbnf-validator.cpp              cmake : do not include ./src as public for libllama (#13062)                2025-04-24 16:00:10 +03:00
test-gguf.cpp
test-grammar-integration.cpp         cmake : do not include ./src as public for libllama (#13062)                2025-04-24 16:00:10 +03:00
test-grammar-llguidance.cpp          cmake : do not include ./src as public for libllama (#13062)                2025-04-24 16:00:10 +03:00
test-grammar-parser.cpp              cmake : do not include ./src as public for libllama (#13062)                2025-04-24 16:00:10 +03:00
test-json-partial.cpp                server: streaming of tool calls and thoughts when --jinja is on (#12379)    2025-05-25 01:48:08 +01:00
test-json-schema-to-grammar.cpp      grammar : handle maxItems == 0 in JSON schema (#13117)                      2025-04-26 10:10:20 +02:00
test-llama-grammar.cpp               cmake : do not include ./src as public for libllama (#13062)                2025-04-24 16:00:10 +03:00
test-log.cpp
test-lora-conversion-inference.sh
test-model-load-cancel.cpp
test-mtmd-c-api.c                    mtmd : add C public API (#13184)                                            2025-05-04 23:43:42 +02:00
test-opt.cpp                         llama/ggml: add LLM training support (#10544)                               2025-05-12 14:44:49 +02:00
test-quantize-fns.cpp                tests : fix test-quantize-fns to init the CPU backend (#12306)              2025-03-10 14:07:15 +02:00
test-quantize-perf.cpp
test-quantize-stats.cpp              docker : do not build tests (#13204)                                        2025-04-30 10:44:07 +02:00
test-regex-partial.cpp               common: add partial regex support (#12808)                                  2025-05-14 19:50:57 +01:00
test-rope.cpp
test-sampling.cpp                    sampling : make sure samplers return at least 1 token (#13822)              2025-05-27 12:07:52 +03:00
test-tokenizer-0.cpp
test-tokenizer-0.py
test-tokenizer-0.sh
test-tokenizer-1-bpe.cpp             cmake : do not include ./src as public for libllama (#13062)                2025-04-24 16:00:10 +03:00
test-tokenizer-1-spm.cpp             cmake : do not include ./src as public for libllama (#13062)                2025-04-24 16:00:10 +03:00
test-tokenizer-random.py