llama_cpp_for_radxa_dragon_.../examples
batched
batched-bench
batched.swift
convert-llama2c-to-ggml
cvector-generator
deprecation-warning
embedding
eval-callback
export-lora
gen-docs
gguf
gguf-hash
gguf-split
gritlm
imatrix
infill
jeopardy
llama-bench
llama.android
llama.swiftui
llava
lookahead
lookup
main
parallel
passkey
perplexity
quantize
retrieval
rpc
run
save-load-state
server
simple
simple-chat
simple-cmake-pkg
speculative
speculative-simple
sycl
tokenize
tts
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
llama.vim
llm.vim
Miku.sh
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh
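
The C/C++ examples in this directory are built as part of the top-level llama.cpp CMake project (see CMakeLists.txt above). A minimal sketch of building them, assuming a checkout of this repository and a standard CMake toolchain; exact binary names and output paths may differ between versions:

    # configure and build the project in release mode (examples are built by default)
    cmake -B build
    cmake --build build --config Release
    # example binaries (e.g. llama-server, llama-bench, llama-quantize) typically land under build/bin
    ls build/bin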