llama_cpp_for_radxa_dragon_.../tests

Latest commit: 1b9ae5189c by Xuan Son Nguyen
common : refactor arg parser (#9308)
* (wip) argparser v3
* migrated
* add test
* handle env
* fix linux build
* add export-docs example
* fix build (2)
* skip build test-arg-parser on windows
* update server docs
* bring back missing --alias
* bring back --n-predict
* clarify test-arg-parser
* small correction
* add comments
* fix args with 2 values
* refine example-specific args
* no more lambda capture

Co-authored-by: slaren@users.noreply.github.com

* params.sparams
* optimize more
* export-docs --> gen-docs
2024-09-07 20:43:51 +02:00
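
The changelog above mentions env-var handling, args that consume multiple values, and removing lambda captures from the option handlers. The following is a minimal sketch of that parser shape; all names here (`my_params`, `opt_def`, `parse_args`) are hypothetical illustrations, not llama.cpp's actual `common` arg API:

```cpp
#include <cassert>
#include <cstdlib>
#include <string>
#include <vector>

// Hypothetical stand-in for the parsed options struct.
struct my_params {
    std::string model;
    int         n_predict = -1;
};

// One CLI option: its flags, an optional env-var fallback, how many values
// it consumes, and a plain function-pointer handler (no lambda captures),
// so the option table can be a simple static structure.
struct opt_def {
    std::vector<std::string> flags;
    const char * env;   // nullptr if no env fallback
    int          n_values;  // 1 here; the PR also supports args with 2 values
    void (*handler)(my_params &, const std::string &);
};

static void set_model(my_params & p, const std::string & v)     { p.model = v; }
static void set_n_predict(my_params & p, const std::string & v) { p.n_predict = std::stoi(v); }

static const std::vector<opt_def> options = {
    { { "-m", "--model" },     "MY_MODEL_ENV", 1, set_model     },
    { { "-n", "--n-predict" }, nullptr,        1, set_n_predict },
};

// Apply env fallbacks first so explicit CLI flags override them.
static bool parse_args(int argc, const char ** argv, my_params & p) {
    for (const auto & opt : options) {
        if (opt.env) {
            if (const char * v = std::getenv(opt.env)) {
                opt.handler(p, v);
            }
        }
    }
    for (int i = 1; i < argc; i++) {
        bool matched = false;
        for (const auto & opt : options) {
            for (const auto & f : opt.flags) {
                if (f == argv[i]) {
                    if (i + opt.n_values >= argc) return false; // missing value
                    opt.handler(p, argv[++i]);
                    matched = true;
                    break;
                }
            }
            if (matched) break;
        }
        if (!matched) return false; // unknown flag
    }
    return true;
}
```

Keeping the handlers as capture-free function pointers (rather than capturing lambdas) is what lets the option table live as plain static data that a `gen-docs` tool can walk to emit documentation.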
.gitignore
CMakeLists.txt | common : refactor arg parser (#9308) | 2024-09-07 20:43:51 +02:00
get-model.cpp
get-model.h
run-json-schema-to-grammar.mjs
test-arg-parser.cpp | common : refactor arg parser (#9308) | 2024-09-07 20:43:51 +02:00
test-autorelease.cpp
test-backend-ops.cpp | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) | 2024-09-05 21:48:47 -04:00
test-c.c
test-chat-template.cpp | tests : fix printfs (#8068) | 2024-07-25 18:58:04 +03:00
test-double-float.cpp | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00
test-grad0.cpp | sync : ggml | 2024-08-27 22:41:27 +03:00
test-grammar-integration.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00
test-grammar-parser.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00
test-json-schema-to-grammar.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00
test-llama-grammar.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00
test-lora-conversion-inference.sh | lora : fix llama conversion script with ROPE_FREQS (#9117) | 2024-08-23 12:58:53 +02:00
test-model-load-cancel.cpp
test-opt.cpp
test-quantize-fns.cpp | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) | 2024-09-05 21:48:47 -04:00
test-quantize-perf.cpp | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00
test-rope.cpp | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00
test-sampling.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00
test-tokenizer-0.cpp | llama : fix pre-tokenization of non-special added tokens (#8228) | 2024-07-13 23:35:10 -04:00
test-tokenizer-0.py
test-tokenizer-0.sh
test-tokenizer-1-bpe.cpp | Detokenizer fixes (#8039) | 2024-07-05 19:01:35 +02:00
test-tokenizer-1-spm.cpp | Detokenizer fixes (#8039) | 2024-07-05 19:01:35 +02:00
test-tokenizer-random.py | llama : fix pre-tokenization of non-special added tokens (#8228) | 2024-07-13 23:35:10 -04:00