llama_cpp_for_radxa_dragon_.../examples
Latest commit: 1c1409e131 by Sam Malayek
embedding: add raw option for --embd-output-format (#16541)
* Add --embd-output-format raw for plain numeric embedding output

This new option outputs embeddings as raw, space-separated floats, without JSON wrapping or 'embedding N:' prefixes. Useful for downstream vector pipelines and scripting.

* Move raw output handling into format handling section

* Move raw output handling into else-if block with other format handlers

* Use LOG instead of printf for raw embedding output

* docs: document 'raw' embedding output format in arg.cpp and README
2025-10-28 12:51:41 +02:00
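The commit above describes the `raw` output mode as plain space-separated floats intended for scripting. A minimal sketch of consuming that output downstream, assuming one space-separated vector per line (the exact layout is an assumption based on the commit description, not verified against the implementation):

```python
# Sketch: parse the output of the embedding example run with
# `--embd-output-format raw` into Python float vectors.
# Assumption: one embedding per line, values separated by spaces.
def parse_raw_embeddings(text: str) -> list[list[float]]:
    """Turn raw space-separated float lines into vectors."""
    return [
        [float(tok) for tok in line.split()]
        for line in text.strip().splitlines()
        if line.strip()
    ]

# Example input shaped like the described raw output (values are made up).
sample = "0.12 -0.34 0.56\n0.78 0.90 -0.11\n"
vectors = parse_raw_embeddings(sample)
```

In a pipeline, `sample` would instead come from capturing the embedding tool's stdout (e.g. via `subprocess.run(..., capture_output=True)`).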
batched
batched.swift
convert-llama2c-to-ggml
deprecation-warning
diffusion
embedding
    embedding: add raw option for --embd-output-format (#16541), 2025-10-28 12:51:41 +02:00
eval-callback
    devops: add s390x & ppc64le CI (#15925), 2025-09-27 02:03:33 +08:00
gen-docs
gguf
gguf-hash
llama.android
llama.swiftui
lookahead
lookup
model-conversion
    model-conversion : add trust_remote_code for orig model run [no ci] (#16751), 2025-10-24 12:02:02 +02:00
parallel
passkey
retrieval
save-load-state
simple
simple-chat
simple-cmake-pkg
speculative
speculative-simple
sycl
training
CMakeLists.txt
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
    grammar : support array references in json schema (#16792), 2025-10-28 09:37:52 +01:00
llama.vim
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh