| Name | Last commit message | Last commit date |
| --- | --- | --- |
| baby-llama | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00 |
| batched | | |
| batched-bench | batched-bench : add --output-format jsonl option (#9293) | 2024-09-06 17:59:58 +02:00 |
| batched.swift | | |
| benchmark | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00 |
| convert-llama2c-to-ggml | | |
| cvector-generator | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00 |
| deprecation-warning | examples : remove finetune and train-text-from-scratch (#8669) | 2024-07-25 10:39:04 +02:00 |
| embedding | Add support for encoder-only T5 models (#8900) | 2024-08-10 11:43:26 +02:00 |
| eval-callback | common : remove duplicate function llama_should_add_bos_token (#8778) | 2024-08-15 10:23:23 +03:00 |
| export-lora | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00 |
| gbnf-validator | llama : move vocab, grammar and sampling into separate files (#8508) | 2024-07-23 13:10:17 +03:00 |
| gguf | | |
| gguf-hash | | |
| gguf-split | | |
| gritlm | | |
| imatrix | common : remove duplicate function llama_should_add_bos_token (#8778) | 2024-08-15 10:23:23 +03:00 |
| infill | common : remove duplicate function llama_should_add_bos_token (#8778) | 2024-08-15 10:23:23 +03:00 |
| jeopardy | | |
| llama-bench | llama-bench : log benchmark progress (#9287) | 2024-09-06 23:03:01 +02:00 |
| llama.android | examples: fix android example cannot be generated continuously (#8621) | 2024-07-22 09:54:42 +03:00 |
| llama.swiftui | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00 |
| llava | llava : the function "clip" should be int (#9237) | 2024-08-30 07:21:57 +02:00 |
| lookahead | common : Changed tuple to struct (TODO fix) (#8823) | 2024-08-05 18:14:10 +02:00 |
| lookup | common : Changed tuple to struct (TODO fix) (#8823) | 2024-08-05 18:14:10 +02:00 |
| main | llama-cli : remove duplicated log message (#9275) | 2024-09-02 15:36:43 +03:00 |
| main-cmake-pkg | | |
| parallel | common : Changed tuple to struct (TODO fix) (#8823) | 2024-08-05 18:14:10 +02:00 |
| passkey | | |
| perplexity | common : remove duplicate function llama_should_add_bos_token (#8778) | 2024-08-15 10:23:23 +03:00 |
| quantize | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) | 2024-09-05 21:48:47 -04:00 |
| quantize-stats | | |
| retrieval | retrieval : fix memory leak in retrieval query handling (#8955) | 2024-08-15 10:40:12 +03:00 |
| rpc | Merge commit from fork | 2024-08-09 23:03:21 +03:00 |
| save-load-state | common : Changed tuple to struct (TODO fix) (#8823) | 2024-08-05 18:14:10 +02:00 |
| server | server : fix missing lock (#9334) | 2024-09-06 14:06:04 +02:00 |
| simple | simple : update name of executable to llama-simple (#8885) | 2024-08-06 16:44:35 +02:00 |
| speculative | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00 |
| sycl | [SYCL] Updated SYCL device filtering (#8901) | 2024-08-07 11:25:36 +01:00 |
| tokenize | common : remove duplicate function llama_should_add_bos_token (#8778) | 2024-08-15 10:23:23 +03:00 |
| base-translate.sh | | |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | examples : remove finetune and train-text-from-scratch (#8669) | 2024-07-25 10:39:04 +02:00 |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | | |
| llama.vim | | |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |