| Name | Last commit message | Last commit date |
|---|---|---|
| batched | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| batched-bench | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| batched.swift | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| convert-llama2c-to-ggml | | |
| cvector-generator | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| deprecation-warning | | |
| embedding | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| eval-callback | | |
| export-lora | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| gbnf-validator | Tool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek) w/ lazy grammars (#9639) | 2025-01-30 19:13:58 +00:00 |
| gen-docs | | |
| gguf | | |
| gguf-hash | | |
| gguf-split | gguf-split : --merge now respects --dry-run option (#12681) | 2025-04-04 16:09:12 +02:00 |
| gritlm | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| imatrix | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| infill | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| jeopardy | | |
| llama-bench | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama.android | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00 |
| llama.swiftui | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llava | Add performance print for gemma3 in example (#12929) | 2025-04-14 19:18:20 +02:00 |
| lookahead | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| lookup | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| main | docs : bring llama-cli conversation/template docs up-to-date (#12426) | 2025-03-17 21:14:32 +01:00 |
| parallel | llama : refactor kv cache guard (#12695) | 2025-04-02 14:32:59 +03:00 |
| passkey | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| perplexity | hellaswag: display estimated score confidence interval (#12797) | 2025-04-07 18:47:08 +03:00 |
| quantize | quantize: Handle user-defined quantization levels for additional tensors (#12511) | 2025-04-13 21:29:28 +03:00 |
| quantize-stats | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| retrieval | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| rpc | rpc : add RPC_CMD_HELLO (#12955) | 2025-04-18 10:13:42 +03:00 |
| run | contrib: support modelscope community (#12664) | 2025-04-11 14:01:56 +02:00 |
| save-load-state | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| server | server : add VSCode's Github Copilot Chat support (#12896) | 2025-04-11 23:37:41 +03:00 |
| simple | | |
| simple-chat | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| simple-cmake-pkg | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| speculative | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| speculative-simple | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| sycl | dsiable curl lib check, this action is missed by commit bd3f59f812 (#12761) (#12937) | 2025-04-14 18:19:07 +08:00 |
| tokenize | | |
| tts | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | | |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| llama.vim | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) | 2025-04-08 19:54:51 +03:00 |
| ts-type-to-grammar.sh | | |