llama_cpp_for_radxa_dragon_.../examples
Daniel Bevenius 9012c50fc8
model-conversion : fix mmproj output file name [no ci] (#22274)
* model-conversion : fix mmproj output file name [no ci]

This commit updates the convert-model.sh script to properly handle
mmproj output files.

The motivation for this is that currently the mmproj file is given the
same name as the original model, which causes the original model to be
overwritten and no mmproj-<model_name>.gguf to be created.

* model-conversion : use MODEL_NAME [no ci]
2026-04-23 15:07:38 +02:00
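The naming fix described in the commit message can be sketched as follows. This is a hedged illustration of the idea (give the mmproj conversion its own `mmproj-` prefixed output name so it cannot clobber the converted model), not the actual contents of `convert-model.sh`; the variables `MODEL_PATH`, `MODEL_NAME`, and `OUTPUT_DIR` are assumptions for the example:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the mmproj output-naming fix.
# Not the real convert-model.sh; variable names are assumed.
set -euo pipefail

MODEL_PATH="${1:-models/my-model}"        # assumed input argument
MODEL_NAME="$(basename "${MODEL_PATH}")"  # e.g. "my-model"
OUTPUT_DIR="${OUTPUT_DIR:-.}"             # assumed output directory

# Before the fix: both conversions wrote to the same file name,
# so the mmproj run overwrote the converted model.
MODEL_OUT="${OUTPUT_DIR}/${MODEL_NAME}.gguf"

# After the fix: the mmproj conversion gets a distinct, prefixed name.
MMPROJ_OUT="${OUTPUT_DIR}/mmproj-${MODEL_NAME}.gguf"

echo "model  -> ${MODEL_OUT}"
echo "mmproj -> ${MMPROJ_OUT}"
```

With the distinct `mmproj-${MODEL_NAME}.gguf` name, both output files can coexist in the same directory.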
batched
batched.swift
convert-llama2c-to-ggml
debug
deprecation-warning
diffusion
embedding
eval-callback
gen-docs
gguf
gguf-hash
idle
llama.android
llama.swiftui
lookahead
lookup
model-conversion (last commit: model-conversion : fix mmproj output file name [no ci] (#22274), 2026-04-23 15:07:38 +02:00)
parallel
passkey
retrieval
save-load-state
simple
simple-chat
simple-cmake-pkg
speculative
speculative-simple (last commit: speculative-simple : add checkpoint support (#22227), 2026-04-22 15:44:45 +03:00)
sycl
training
CMakeLists.txt
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
llama.vim
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh