llama_cpp_for_radxa_dragon_.../examples
Daniel Bevenius ed8aa63320
model-conversion : pass config to from_pretrained (#16963)
This commit modifies the script `run-org-model.py` to ensure that the
model configuration is explicitly passed to the `from_pretrained` method
when loading the model. It also removes a duplicate configuration
load that was included by mistake.

The motivation for this change is that it enables the config object to
be modified and then passed to the model loading function, which can be
useful when testing new models.
2025-11-03 18:01:59 +01:00
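The pattern the commit describes, loading the config separately, modifying it, and passing it explicitly to `from_pretrained`, can be sketched as follows. This is a minimal illustration assuming the Hugging Face transformers API; the tiny `LlamaConfig`, the `rope_theta` tweak, and the temporary directory are stand-ins, not the actual code from `run-org-model.py`:

```python
import tempfile

from transformers import AutoConfig, LlamaConfig, LlamaForCausalLM

# Illustrative tiny model saved to disk so from_pretrained has something
# to load (run-org-model.py would load a real pretrained checkpoint).
tiny_cfg = LlamaConfig(
    vocab_size=128,
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=2,
)
with tempfile.TemporaryDirectory() as model_path:
    LlamaForCausalLM(tiny_cfg).save_pretrained(model_path)

    # Load the config on its own so it can be modified first...
    config = AutoConfig.from_pretrained(model_path)
    config.rope_theta = 1_000_000.0  # hypothetical tweak while testing a model

    # ...then pass the modified config explicitly when loading the model.
    model = LlamaForCausalLM.from_pretrained(model_path, config=config)

print(model.config.rope_theta)
```

Because the config is passed in rather than re-read from disk inside `from_pretrained`, any edits made to it take effect on the loaded model.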
batched
batched.swift
convert-llama2c-to-ggml
deprecation-warning
diffusion
embedding | embedding: add raw option for --embd-output-format (#16541) | 2025-10-28 12:51:41 +02:00
eval-callback | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00
gen-docs
gguf
gguf-hash
llama.android
llama.swiftui
lookahead
lookup
model-conversion | model-conversion : pass config to from_pretrained (#16963) | 2025-11-03 18:01:59 +01:00
parallel
passkey
retrieval
save-load-state
simple
simple-chat
simple-cmake-pkg
speculative
speculative-simple
sycl
training
CMakeLists.txt | codeowners : update + cleanup (#16174) | 2025-09-22 18:20:21 +03:00
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py | grammar : support array references in json schema (#16792) | 2025-10-28 09:37:52 +01:00
llama.vim
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh