| Name | Last commit message | Last commit date |
| --- | --- | --- |
| .gitignore | | |
| CMakeLists.txt | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| get-model.cpp | | |
| get-model.h | | |
| run-json-schema-to-grammar.mjs | | |
| test-arg-parser.cpp | | |
| test-autorelease.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| test-backend-ops.cpp | metal : improve FA + improve MoE (#12612) | 2025-03-28 20:21:59 +02:00 |
| test-barrier.cpp | | |
| test-c.c | | |
| test-chat-template.cpp | llama-chat : Add Yandex instruct model template support (#12621) | 2025-03-30 20:12:03 +02:00 |
| test-chat.cpp | server: extract `<think>` tags from qwq outputs (#12297) | 2025-03-10 10:59:03 +00:00 |
| test-double-float.cpp | | |
| test-gguf.cpp | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00 |
| test-grammar-integration.cpp | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| test-grammar-llguidance.cpp | upgrade to llguidance 0.7.10 (#12576) | 2025-03-26 11:06:09 -07:00 |
| test-grammar-parser.cpp | | |
| test-json-schema-to-grammar.cpp | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| test-llama-grammar.cpp | | |
| test-log.cpp | | |
| test-lora-conversion-inference.sh | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00 |
| test-model-load-cancel.cpp | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| test-opt.cpp | | |
| test-quantize-fns.cpp | tests : fix test-quantize-fns to init the CPU backend (#12306) | 2025-03-10 14:07:15 +02:00 |
| test-quantize-perf.cpp | | |
| test-rope.cpp | | |
| test-sampling.cpp | sampling: add Top-nσ sampler (#11223) | 2025-02-13 08:45:57 +02:00 |
| test-tokenizer-0.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| test-tokenizer-0.py | | |
| test-tokenizer-0.sh | | |
| test-tokenizer-1-bpe.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| test-tokenizer-1-spm.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| test-tokenizer-random.py | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |