| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batched | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| batched.swift | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| convert-llama2c-to-ggml | | |
| deprecation-warning | | |
| embedding | context : allow cache-less context for embeddings (#13108) | 2025-05-08 14:28:33 +03:00 |
| eval-callback | | |
| gen-docs | | |
| gguf | | |
| gguf-hash | | |
| gritlm | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| jeopardy | | |
| llama.android | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00 |
| llama.swiftui | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| lookahead | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| lookup | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| parallel | llama : refactor kv cache guard (#12695) | 2025-04-02 14:32:59 +03:00 |
| passkey | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| retrieval | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| save-load-state | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| simple | | |
| simple-chat | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| simple-cmake-pkg | | |
| speculative | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| speculative-simple | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| sycl | disable curl lib check, this action is missed by commit bd3f59f812 (#12761) (#12937) | 2025-04-14 18:19:07 +08:00 |
| training | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | grammar : handle maxItems == 0 in JSON schema (#13117) | 2025-04-26 10:10:20 +02:00 |
| llama.vim | | |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) | 2025-04-08 19:54:51 +03:00 |
| ts-type-to-grammar.sh | | |