llama_cpp_for_radxa_dragon_.../examples
batched
batched-bench
batched.swift
convert-llama2c-to-ggml
cvector-generator
deprecation-warning
embedding
eval-callback
export-lora
gbnf-validator
gen-docs
gguf
gguf-hash
gguf-split
gritlm
imatrix
infill
jeopardy
llama-bench
llama.android
llama.swiftui
llava
lookahead
lookup
main
main-cmake-pkg
parallel
passkey
perplexity
quantize
quantize-stats
retrieval
rpc
run
save-load-state
server
simple
simple-chat
speculative
speculative-simple
sycl
tokenize
tts
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
llama.vim
llm.vim
Miku.sh
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh