| Name | Last commit | Last updated |
| --- | --- | --- |
| batched | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| batched-bench | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| batched.swift | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| convert-llama2c-to-ggml | | |
| cvector-generator | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| deprecation-warning | | |
| embedding | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| eval-callback | | |
| export-lora | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| gen-docs | | |
| gguf | | |
| gguf-hash | | |
| gguf-split | gguf-split : --merge now respects --dry-run option (#12681) | 2025-04-04 16:09:12 +02:00 |
| gritlm | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| imatrix | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| infill | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| jeopardy | | |
| llama-bench | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama.android | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00 |
| llama.swiftui | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llava | arg : add --no-mmproj-offload (#13093) | 2025-04-24 14:04:14 +02:00 |
| lookahead | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| lookup | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| main | main : Fix Ctrl+D/newline handling (#12951) | 2025-04-18 22:02:55 +02:00 |
| parallel | llama : refactor kv cache guard (#12695) | 2025-04-02 14:32:59 +03:00 |
| passkey | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| perplexity | hellaswag: display estimated score confidence interval (#12797) | 2025-04-07 18:47:08 +03:00 |
| quantize | quantize: Handle user-defined quantization levels for additional tensors (#12511) | 2025-04-13 21:29:28 +03:00 |
| retrieval | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| rpc | rpc : add command line option for number of threads for the CPU backend (#13060) | 2025-04-23 10:32:49 +03:00 |
| run | contrib: support modelscope community (#12664) | 2025-04-11 14:01:56 +02:00 |
| save-load-state | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| server | server : use std::move whenever possible (#12936) | 2025-04-18 19:58:12 +02:00 |
| simple | | |
| simple-chat | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| simple-cmake-pkg | | |
| speculative | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| speculative-simple | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| sycl | disable curl lib check, this action is missed by commit bd3f59f812 (#12761) (#12937) | 2025-04-14 18:19:07 +08:00 |
| tokenize | | |
| tts | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | | |
| llama.vim | | |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) | 2025-04-08 19:54:51 +03:00 |
| ts-type-to-grammar.sh | | |