llama_cpp_for_radxa_dragon_.../tests
Diego Devesa 6adc3c3ebc
llama : add thread safety test (#14035)
* llama : add thread safety test

* llamafile : remove global state

* llama : better LLAMA_SPLIT_MODE_NONE logic

when main_gpu < 0, GPU devices are not used

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-06-16 08:11:43 -07:00
.gitignore
CMakeLists.txt
get-model.cpp
get-model.h
run-json-schema-to-grammar.mjs
test-arg-parser.cpp
test-autorelease.cpp
test-backend-ops.cpp
test-barrier.cpp
test-c.c
test-chat-parser.cpp
test-chat-template.cpp
test-chat.cpp
test-double-float.cpp
test-gbnf-validator.cpp
test-gguf.cpp
test-grammar-integration.cpp
test-grammar-llguidance.cpp
test-grammar-parser.cpp
test-json-partial.cpp
test-json-schema-to-grammar.cpp
test-llama-grammar.cpp
test-log.cpp
test-lora-conversion-inference.sh
test-model-load-cancel.cpp
test-mtmd-c-api.c
test-opt.cpp
test-quantize-fns.cpp
test-quantize-perf.cpp
test-quantize-stats.cpp
test-regex-partial.cpp
test-rope.cpp
test-sampling.cpp
test-thread-safety.cpp
test-tokenizer-0.cpp
test-tokenizer-0.py
test-tokenizer-0.sh
test-tokenizer-1-bpe.cpp
test-tokenizer-1-spm.cpp
test-tokenizer-random.py
test-tokenizers-repo.sh