llama_cpp_for_radxa_dragon_.../examples

Latest commit: 2aa777d86d by Sigbjørn Skjæret, 2025-05-21 16:57:38 +02:00

examples : switch retrieval to llama_encode (#13685)
* switch retrieval to llama_encode
* enable --no-warmup for retrieval
batched
batched.swift
convert-llama2c-to-ggml
deprecation-warning
embedding | context : allow cache-less context for embeddings (#13108) | 2025-05-08 14:28:33 +03:00
eval-callback
gen-docs
gguf
gguf-hash
gritlm
jeopardy
llama.android | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00
llama.swiftui
lookahead | llama : remove llama_kv_cache_view API + remove deprecated (#13653) | 2025-05-20 16:13:16 +03:00
lookup | llama : remove llama_kv_cache_view API + remove deprecated (#13653) | 2025-05-20 16:13:16 +03:00
parallel | llama : remove llama_kv_cache_view API + remove deprecated (#13653) | 2025-05-20 16:13:16 +03:00
passkey
retrieval | examples : switch retrieval to llama_encode (#13685) | 2025-05-21 16:57:38 +02:00
save-load-state
simple | fix: check model pointer validity before use (#13631) | 2025-05-19 13:25:41 +03:00
simple-chat | kv-cache : simplify the interface (#13660) | 2025-05-21 15:11:13 +03:00
simple-cmake-pkg
speculative
speculative-simple
sycl | sycl : backend documentation review (#13544) | 2025-05-19 14:38:20 +01:00
training | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py | grammar : handle maxItems == 0 in JSON schema (#13117) | 2025-04-26 10:10:20 +02:00
llama.vim
llm.vim
Miku.sh
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py | llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) | 2025-04-08 19:54:51 +03:00
ts-type-to-grammar.sh