| Name | Last commit | Last updated |
| --- | --- | --- |
| batched | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| batched-bench | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| batched.swift | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| convert-llama2c-to-ggml | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| cvector-generator | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00 |
| embedding | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| eval-callback | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| export-lora | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| gbnf-validator | llama : minor grammar refactor (#10897) | 2024-12-19 17:42:13 +02:00 |
| gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-hash | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-split | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00 |
| gritlm | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| imatrix | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| infill | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| jeopardy | | |
| llama-bench | rpc : early register backend devices (#11262) | 2025-01-17 10:57:09 +02:00 |
| llama.android | llama.android: add field formatChat to control whether to parse special tokens when send message (#11270) | 2025-01-17 14:57:56 +02:00 |
| llama.swiftui | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llava | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| lookahead | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| lookup | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| main | cli : auto activate conversation mode if chat template is available (#11214) | 2025-01-13 20:18:12 +01:00 |
| main-cmake-pkg | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| parallel | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| passkey | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| perplexity | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| quantize | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00 |
| quantize-stats | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| retrieval | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| rpc | rpc-server : add support for the SYCL backend (#10934) | 2024-12-23 10:39:30 +02:00 |
| run | Adding linenoise.cpp to llama-run (#11252) | 2025-01-18 14:42:31 +00:00 |
| save-load-state | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| server | tests : increase timeout when sanitizers are enabled (#11300) | 2025-01-19 20:22:30 +02:00 |
| simple | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| simple-chat | simple-chat : fix BOS being added to each message (#11278) | 2025-01-19 18:12:09 +02:00 |
| speculative | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| speculative-simple | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| sycl | | |
| tokenize | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| tts | tts : add guide tokens support (#11186) | 2025-01-18 12:20:57 +02:00 |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | tts : add OuteTTS support (#10784) | 2024-12-18 19:27:21 +02:00 |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | | |
| llama.vim | | |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |