llama_cpp_for_radxa_dragon_.../examples
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| baby-llama | | |
| batched | metal : pad n_ctx by 32 (#6177) | 2024-03-22 09:36:03 +02:00 |
| batched-bench | bench : make n_batch and n_ubatch configurable in Batched bench (#6500) | 2024-04-05 21:34:53 +03:00 |
| batched.swift | | |
| beam-search | | |
| benchmark | | |
| convert-llama2c-to-ggml | llama2c : open file as binary (#6332) | 2024-03-27 09:16:02 +02:00 |
| embedding | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| eval-callback | eval-callback: use ggml_op_desc to pretty print unary operator name (#6631) | 2024-04-12 10:26:47 +02:00 |
| export-lora | | |
| finetune | | |
| gbnf-validator | grammars: 1.5x faster inference w/ complex grammars (vector reserves / reuses) (#6609) | 2024-04-11 19:47:34 +01:00 |
| gguf | gguf : add option to not check tensor data (#6582) | 2024-04-10 21:16:48 +03:00 |
| gguf-split | split: allow --split-max-size option (#6343) | 2024-03-29 22:34:44 +01:00 |
| gritlm | | |
| imatrix | imatrix : remove invalid assert (#6632) | 2024-04-12 11:49:58 +03:00 |
| infill | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| jeopardy | | |
| llama-bench | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| llama.android | | |
| llama.swiftui | | |
| llava | chore: Fix markdown warnings (#6625) | 2024-04-12 10:52:36 +02:00 |
| lookahead | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| lookup | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| main | chore: Fix markdown warnings (#6625) | 2024-04-12 10:52:36 +02:00 |
| main-cmake-pkg | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| parallel | llama : greatly reduce output buffer memory usage (#6122) | 2024-03-26 16:46:41 +02:00 |
| passkey | | |
| perplexity | chore: Fix markdown warnings (#6625) | 2024-04-12 10:52:36 +02:00 |
| quantize | chore: Fix markdown warnings (#6625) | 2024-04-12 10:52:36 +02:00 |
| quantize-stats | | |
| retrieval | examples : add "retrieval" (#6193) | 2024-03-25 09:38:22 +02:00 |
| save-load-state | llama : save and restore kv cache for single seq id (#6341) | 2024-04-08 15:43:30 +03:00 |
| server | minor layout improvements (#6572) | 2024-04-10 19:18:25 +02:00 |
| simple | | |
| speculative | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| sycl | [SYCL] fix SYCL backend build on windows is break by LOG() error (#6290) | 2024-03-25 15:52:41 +08:00 |
| tokenize | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| train-text-from-scratch | | |
| alpaca.sh | | |
| base-translate.sh | | |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | eval-callback: Example how to use eval callback for debugging (#6576) | 2024-04-11 14:51:07 +02:00 |
| gpt4all.sh | | |
| json-schema-pydantic-example.py | | |
| json-schema-to-grammar.py | json-schema-to-grammar : fix order of props + non-str const/enum (#6232) | 2024-03-22 15:07:44 +02:00 |
| llama.vim | | |
| llama2-13b.sh | | |
| llama2.sh | | |
| llm.vim | | |
| make-ggml.py | | |
| Miku.sh | | |
| pydantic-models-to-grammar-examples.py | | |
| pydantic_models_to_grammar.py | | |
| reason-act.sh | | |
| regex-to-grammar.py | | |
| server-embd.py | | |
| server-llama2-13B.sh | | |
| ts-type-to-grammar.sh | | |