llama_cpp_for_radxa_dragon_.../examples
Latest commit: aa3ee0eb0b by Daniel Bevenius
model-conversion : add embedding prompt file support (#15871)

This commit adds support for passing a prompt file to the model
conversion targets/scripts. It also updates logits.cpp to print
embedding information in the same format as when running the original
embedding model.

The motivation for this is that it allows files of different sizes to
be passed when running the converted models and validating the logits.

This is particularly important when testing the sliding-window
functionality of models, where the sequence length must exceed a
certain number of tokens to trigger the sliding-window logic.

2025-09-25 12:02:36 +02:00
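The commit message above notes that sliding-window logic only triggers once the sequence length exceeds a certain token count, which is why prompt files of varying sizes are useful. As a minimal sketch of generating such a file, the snippet below writes a prompt long enough to exceed an assumed window size; the window size, output path, and one-word-per-token approximation are all illustrative assumptions, not values taken from the llama.cpp scripts.

```python
# Assumed window size for illustration only; use whatever your model defines.
WINDOW_SIZE = 4096

def write_long_prompt(path: str, n_words: int) -> None:
    """Write n_words space-separated placeholder words to path.

    For most BPE tokenizers one word maps to at least one token, so
    n_words words should meet or exceed an n_words-token sequence.
    """
    words = [f"word{i}" for i in range(n_words)]
    with open(path, "w") as f:
        f.write(" ".join(words))

if __name__ == "__main__":
    # Exceed the assumed window so the sliding-window path is exercised.
    write_long_prompt("long-prompt.txt", WINDOW_SIZE * 2)
```

The resulting file can then be passed to the conversion targets/scripts via whatever prompt-file option they expose.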
Directories:

batched
batched.swift
convert-llama2c-to-ggml
    Last commit: gguf: gguf_writer refactor (#15691), 2025-09-05 11:34:28 +02:00
deprecation-warning
diffusion
    Last commit: Add LLaDA-7b-MoE diffusion model (#16003), 2025-09-16 10:38:28 +08:00
embedding
    Last commit: llama : add support for qwen3 reranker (#15824), 2025-09-25 11:53:09 +03:00
eval-callback
    Last commit: model-conversion : add extra debugging support for model conversion (#15877), 2025-09-09 06:05:55 +02:00
gen-docs
gguf
gguf-hash
llama.android
llama.swiftui
lookahead
lookup
model-conversion
    Last commit: model-conversion : add embedding prompt file support (#15871), 2025-09-25 12:02:36 +02:00
parallel
passkey
retrieval
save-load-state
simple
    Last commit: examples : support encoder-decoder models in the simple example (#16002), 2025-09-17 10:29:00 +03:00
simple-chat
simple-cmake-pkg
speculative
    Last commit: sampling : optimize samplers by reusing bucket sort (#15665), 2025-08-31 20:41:02 +03:00
speculative-simple
sycl
training

Files:

CMakeLists.txt
    Last commit: codeowners : update + cleanup (#16174), 2025-09-22 18:20:21 +03:00
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
    Last commit: json : support enum values within allOf (#15830), 2025-09-08 16:14:32 -05:00
llama.vim
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh