llama_cpp_for_radxa_dragon_.../examples

Commit fd1085ffb7 by Daniel Bevenius:
model-conversion : use CONVERTED_MODEL value for converted model [no ci] (#17984)
* model-conversion : use CONVERTED_MODEL value for converted model [no ci]

This commit updates the model verification scripts to use the
CONVERTED_MODEL environment variable instead of using the MODEL_PATH
(the original model path) as the basis for the converted model file
name.

The motivation for this is that, if the converted model file name
differs from the original model directory/name, the verification scripts
will look for the wrong .bin files, i.e. not the ones that were
generated when running the models.
For example, the following steps were not possible:
```console
(venv) $ huggingface-cli download google/gemma-3-270m-it --local-dir ggml-org/gemma-3-270m
(venv) $ python3 convert_hf_to_gguf.py ggml-org/gemma-3-270m --outfile test-bf16.gguf --outtype bf16
(venv) $ cd examples/model-conversion/
(venv) $ export MODEL_PATH=../../ggml-org/gemma-3-270m
(venv) $ export CONVERTED_MODEL=../../test-bf16.gguf
(venv) $ make causal-verify-logits
...
Data saved to data/llamacpp-test-bf16.bin
Data saved to data/llamacpp-test-bf16.txt
Error: llama.cpp logits file not found: data/llamacpp-gemma-3-270m.bin
Please run scripts/run-converted-model.sh first to generate this file.
make: *** [Makefile:62: causal-verify-logits] Error 1
```

With the changes in this commit, the above steps will now work as
expected.
2025-12-13 08:34:26 +01:00
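The renaming logic the commit describes can be sketched as deriving the logits file name from the converted GGUF file instead of from the original model directory. This is a hypothetical excerpt, not the actual script code; the variable and file names are assumptions based on the error output above:

```shell
#!/bin/sh
# Hypothetical sketch of the fix: derive the per-model data file name
# from CONVERTED_MODEL rather than MODEL_PATH.
MODEL_PATH=../../ggml-org/gemma-3-270m
CONVERTED_MODEL=../../test-bf16.gguf

# Old behaviour (assumed): name taken from the original model directory.
OLD_NAME=$(basename "$MODEL_PATH")               # gemma-3-270m

# New behaviour: name taken from the converted model file, minus .gguf.
NEW_NAME=$(basename "$CONVERTED_MODEL" .gguf)    # test-bf16

echo "data/llamacpp-${OLD_NAME}.bin"   # prints data/llamacpp-gemma-3-270m.bin
echo "data/llamacpp-${NEW_NAME}.bin"   # prints data/llamacpp-test-bf16.bin
```

With the new derivation, the file the verification step looks for matches the file the run step actually wrote (`data/llamacpp-test-bf16.bin`), so the mismatch shown in the error above cannot occur.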
| Name | Last commit message | Last commit date |
|---|---|---|
| batched | examples : add -kvu to batched usage example [no ci] (#17469) | 2025-11-24 15:38:45 +02:00 |
| batched.swift | | |
| convert-llama2c-to-ggml | | |
| deprecation-warning | | |
| diffusion | | |
| embedding | ggml : add GGML_SCHED_NO_REALLOC option to disable reallocations in ggml_backend_sched (#17276) | 2025-11-28 17:33:23 +02:00 |
| eval-callback | | |
| gen-docs | common : support negated args (#17919) | 2025-12-12 23:58:53 +01:00 |
| gguf | | |
| gguf-hash | | |
| idle | metal : add residency sets keep-alive heartbeat (#17766) | 2025-12-05 19:38:54 +02:00 |
| llama.android | | |
| llama.swiftui | | |
| lookahead | | |
| lookup | | |
| model-conversion | model-conversion : use CONVERTED_MODEL value for converted model [no ci] (#17984) | 2025-12-13 08:34:26 +01:00 |
| parallel | | |
| passkey | | |
| retrieval | | |
| save-load-state | metal : fix build (#17799) | 2025-12-06 09:33:59 +02:00 |
| simple | | |
| simple-chat | | |
| simple-cmake-pkg | examples : add missing code block end marker [no ci] (#17756) | 2025-12-04 14:17:30 +01:00 |
| speculative | | |
| speculative-simple | | |
| sycl | sycl : support to malloc memory on device more than 4GB, update the doc and script (#17566) | 2025-11-29 14:59:44 +02:00 |
| training | | |
| CMakeLists.txt | metal : add residency sets keep-alive heartbeat (#17766) | 2025-12-05 19:38:54 +02:00 |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | common : fix json schema with '\' in literals (#17307) | 2025-11-29 17:06:32 +01:00 |
| llama.vim | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |