llama_cpp_for_radxa_dragon_.../examples
Daniel Bevenius 5a91109a5d
model-conversion : add trust_remote_code for orig model run [no ci] (#16751)
This commit adds the trust_remote_code=True argument when loading models
with AutoConfig, AutoTokenizer, and AutoModelForCausalLM in the
run-original-model script.

The motivation for this is that some models require custom code to be
loaded properly, and setting trust_remote_code=True avoids a prompt
asking for user confirmation:
```console
(venv) $ make causal-run-original-model
The repository /path/to/model contains custom code which must be
executed to correctly load the model. You can inspect the repository
content at /path/to/model.

Do you wish to run the custom code? [y/N] N
```
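For illustration, the loaders involved end up being called roughly as in the
sketch below. This is not the actual script from the commit; the model path,
prompt, and generation call are placeholder assumptions showing how
trust_remote_code=True is passed to the Hugging Face Auto* loaders.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM

# Hypothetical path to a locally cloned/downloaded model; the real script
# derives this from its own configuration.
model_path = "/path/to/model"

# trust_remote_code=True lets any custom modeling/tokenizer code shipped
# with the model repository load without the interactive [y/N] prompt.
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    config=config,
    trust_remote_code=True,
)

# Quick smoke test: run a single prompt through the original model.
inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```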

Having this as the default seems like a safe choice: since we have to clone
or download the models we convert, we already expect to run any custom code
they include.
2025-10-24 12:02:02 +02:00
| Name | Last commit | Date |
|------|-------------|------|
| batched | | |
| batched.swift | | |
| convert-llama2c-to-ggml | gguf: gguf_writer refactor (#15691) | 2025-09-05 11:34:28 +02:00 |
| deprecation-warning | | |
| diffusion | Add LLaDA-7b-MoE diffusion model (#16003) | 2025-09-16 10:38:28 +08:00 |
| embedding | llama : add support for qwen3 reranker (#15824) | 2025-09-25 11:53:09 +03:00 |
| eval-callback | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |
| gen-docs | | |
| gguf | | |
| gguf-hash | | |
| llama.android | | |
| llama.swiftui | | |
| lookahead | | |
| lookup | | |
| model-conversion | model-conversion : add trust_remote_code for orig model run [no ci] (#16751) | 2025-10-24 12:02:02 +02:00 |
| parallel | | |
| passkey | | |
| retrieval | | |
| save-load-state | | |
| simple | examples : support encoder-decoder models in the simple example (#16002) | 2025-09-17 10:29:00 +03:00 |
| simple-chat | | |
| simple-cmake-pkg | | |
| speculative | sampling : optimize samplers by reusing bucket sort (#15665) | 2025-08-31 20:41:02 +03:00 |
| speculative-simple | | |
| sycl | | |
| training | | |
| CMakeLists.txt | codeowners : update + cleanup (#16174) | 2025-09-22 18:20:21 +03:00 |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | json : support enum values within allOf (#15830) | 2025-09-08 16:14:32 -05:00 |
| llama.vim | llama : remove KV cache defragmentation logic (#15473) | 2025-08-22 12:22:13 +03:00 |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |