llama_cpp_for_radxa_dragon_.../examples
batched
batched-bench
batched.swift
convert-llama2c-to-ggml
cvector-generator
deprecation-warning
embedding
eval-callback
export-lora
gbnf-validator
gen-docs
gguf
gguf-hash
gguf-split
gritlm
imatrix
infill
jeopardy
llama-bench
llama.android
llama.swiftui
llava
lookahead
lookup
main
parallel
passkey
perplexity
quantize
quantize-stats
retrieval
rpc
run
save-load-state
server
simple
simple-chat
simple-cmake-pkg
speculative
speculative-simple
sycl
tokenize
tts
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
llama.vim
llm.vim
Miku.sh
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh