| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batched | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| batched-bench | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| batched.swift | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| convert-llama2c-to-ggml | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| cvector-generator | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| deprecation-warning | | |
| embedding | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| eval-callback | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| export-lora | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| gbnf-validator | | |
| gen-docs | | |
| gguf | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-hash | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-split | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gritlm | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| imatrix | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| infill | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| jeopardy | | |
| llama-bench | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama.android | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama.swiftui | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llava | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| lookahead | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| lookup | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| main | cli : auto activate conversation mode if chat template is available (#11214) | 2025-01-13 20:18:12 +01:00 |
| main-cmake-pkg | | |
| parallel | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| passkey | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| perplexity | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| quantize | | |
| quantize-stats | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| retrieval | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| rpc | | |
| run | Reset color before we exit (#11205) | 2025-01-12 18:23:10 +00:00 |
| save-load-state | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| server | server : Improve code snippets direction between RTL text (#11221) | 2025-01-14 11:39:33 +01:00 |
| simple | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| simple-chat | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| speculative | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| speculative-simple | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| sycl | | |
| tokenize | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| tts | examples : add embd_to_audio to tts-outetts.py [no ci] (#11235) | 2025-01-15 05:44:38 +01:00 |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | | |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | | |
| llama.vim | | |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |