| Name | Last commit | Date |
| --- | --- | --- |
| baby-llama | | |
| batched | metal : pad n_ctx by 32 (#6177) | 2024-03-22 09:36:03 +02:00 |
| batched-bench | bench : make n_batch and n_ubatch configurable in Batched bench (#6500) | 2024-04-05 21:34:53 +03:00 |
| batched.swift | | |
| beam-search | | |
| benchmark | | |
| convert-llama2c-to-ggml | llama2c : open file as binary (#6332) | 2024-03-27 09:16:02 +02:00 |
| embedding | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| eval-callback | model: support arch DbrxForCausalLM (#6515) | 2024-04-13 11:33:52 +02:00 |
| export-lora | | |
| finetune | | |
| gbnf-validator | grammars: 1.5x faster inference w/ complex grammars (vector reserves / reuses) (#6609) | 2024-04-11 19:47:34 +01:00 |
| gguf | gguf : add option to not check tensor data (#6582) | 2024-04-10 21:16:48 +03:00 |
| gguf-split | Fix --split-max-size (#6655) | 2024-04-14 13:12:59 +02:00 |
| gritlm | gritlm : add initial README.md (#6086) | 2024-03-16 17:46:29 +02:00 |
| imatrix | imatrix : remove invalid assert (#6632) | 2024-04-12 11:49:58 +03:00 |
| infill | infill : add download instructions for model (#6626) | 2024-04-12 15:11:46 +03:00 |
| jeopardy | | |
| llama-bench | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| llama.android | | |
| llama.swiftui | llama : add pipeline parallelism support (#6017) | 2024-03-13 18:54:21 +01:00 |
| llava | chore: Fix markdown warnings (#6625) | 2024-04-12 10:52:36 +02:00 |
| lookahead | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| lookup | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| main | chore: Fix markdown warnings (#6625) | 2024-04-12 10:52:36 +02:00 |
| main-cmake-pkg | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| parallel | llama : greatly reduce output buffer memory usage (#6122) | 2024-03-26 16:46:41 +02:00 |
| passkey | | |
| perplexity | chore: Fix markdown warnings (#6625) | 2024-04-12 10:52:36 +02:00 |
| quantize | chore: Fix markdown warnings (#6625) | 2024-04-12 10:52:36 +02:00 |
| quantize-stats | | |
| retrieval | examples : add "retrieval" (#6193) | 2024-03-25 09:38:22 +02:00 |
| save-load-state | llama : save and restore kv cache for single seq id (#6341) | 2024-04-08 15:43:30 +03:00 |
| server | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |
| simple | | |
| speculative | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| sycl | fix memcpy() crash, add missed cmd in guide, fix softmax (#6622) | 2024-04-14 10:42:29 +08:00 |
| tokenize | BERT tokenizer fixes (#6498) | 2024-04-09 13:44:08 -04:00 |
| train-text-from-scratch | gguf : fix resource leaks (#6061) | 2024-03-14 20:29:32 +02:00 |
| alpaca.sh | | |
| base-translate.sh | | |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | eval-callback: Example how to use eval callback for debugging (#6576) | 2024-04-11 14:51:07 +02:00 |
| gpt4all.sh | | |
| json-schema-pydantic-example.py | json-schema-to-grammar improvements (+ added to server) (#5978) | 2024-03-21 11:50:43 +00:00 |
| json_schema_to_grammar.py | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |
| llama.vim | | |
| llama2-13b.sh | | |
| llama2.sh | | |
| llm.vim | | |
| make-ggml.py | | |
| Miku.sh | | |
| pydantic-models-to-grammar-examples.py | | |
| pydantic_models_to_grammar.py | | |
| reason-act.sh | | |
| regex-to-grammar.py | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |
| server-embd.py | | |
| server-llama2-13B.sh | | |
| ts-type-to-grammar.sh | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |