llama_cpp_for_radxa_dragon_.../examples
Latest commit: 6ab8eacddf by Daniel Bevenius, 2025-11-24 15:38:45 +02:00

examples : add -kvu to batched usage example [no ci] (#17469)

This commit adds the --kv-unified flag to the usage example in the
README.md file for the batched example.

The motivation for this is that without this flag the example will fail
with the following error:
```console
Hello my name is
split_equal: sequential split is not supported when there are coupled
sequences in the input batch (you may need to use the -kvu flag)
decode: failed to find a memory slot for batch of size 4
main: llama_decode() failed
```
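As a sketch of what the fixed usage example looks like, the batched example can be run with the unified KV cache enabled via `-kvu` (short for `--kv-unified`). The model path, prompt, and sequence count below are illustrative placeholders, not taken from the commit:

```shell
# Run the batched example with 4 parallel sequences.
# -kvu enables the unified KV cache, avoiding the
# "sequential split is not supported" decode failure.
./llama-batched -m ./models/model.gguf -p "Hello my name is" -np 4 -kvu
```

Without `-kvu`, the coupled sequences produced by `-np 4` trigger the `split_equal` error shown above.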
| Name | Last commit | Date |
| --- | --- | --- |
| batched | examples : add -kvu to batched usage example [no ci] (#17469) | 2025-11-24 15:38:45 +02:00 |
| batched.swift | | |
| convert-llama2c-to-ggml | | |
| deprecation-warning | | |
| diffusion | models : Added support for RND1 Diffusion Language Model (#17433) | 2025-11-24 14:16:56 +08:00 |
| embedding | embedding: add raw option for --embd-output-format (#16541) | 2025-10-28 12:51:41 +02:00 |
| eval-callback | common : more accurate sampling timing (#17382) | 2025-11-20 13:40:10 +02:00 |
| gen-docs | | |
| gguf | examples(gguf): GGUF example outputs (#17025) | 2025-11-05 19:58:16 +02:00 |
| gguf-hash | | |
| llama.android | | |
| llama.swiftui | | |
| lookahead | | |
| lookup | | |
| model-conversion | model-conversion : pass config to from_pretrained (#16963) | 2025-11-03 18:01:59 +01:00 |
| parallel | | |
| passkey | | |
| retrieval | | |
| save-load-state | | |
| simple | | |
| simple-chat | | |
| simple-cmake-pkg | | |
| speculative | | |
| speculative-simple | | |
| sycl | | |
| training | | |
| CMakeLists.txt | | |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | | |
| llama.vim | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |