| Name | Last commit | Date |
| --- | --- | --- |
| baby-llama | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00 |
| batched | llama : support Mamba Selective State Space Models (#5328) | 2024-03-08 17:31:00 -05:00 |
| batched-bench | llama : support Mamba Selective State Space Models (#5328) | 2024-03-08 17:31:00 -05:00 |
| batched.swift | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| beam-search | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| benchmark | ggml : remove old quantization functions (#5942) | 2024-03-09 15:53:59 +02:00 |
| convert-llama2c-to-ggml | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00 |
| embedding | server : normalize embeddings (#5956) | 2024-03-09 14:27:58 +02:00 |
| export-lora | ci : add an option to fail on compile warning (#3952) | 2024-02-17 23:03:14 +02:00 |
| finetune | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00 |
| gguf | | |
| imatrix | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| infill | convert : automatically fall back to HfVocab if tokenizer.model doesn't exist (#5821) | 2024-03-02 12:27:26 -05:00 |
| jeopardy | | |
| llama-bench | llama-bench : add embeddings option (#5924) | 2024-03-07 16:32:38 +02:00 |
| llama.android | ggml-quants : provide ggml_vqtbl1q_u8 for 64bit compatibility (#5711) | 2024-02-25 20:43:00 +02:00 |
| llama.swiftui | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| llava | ggml : remove old quantization functions (#5942) | 2024-03-09 15:53:59 +02:00 |
| lookahead | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| lookup | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| main | main : support special tokens as reverse/anti prompt (#5847) | 2024-03-04 09:57:20 +02:00 |
| main-cmake-pkg | | |
| parallel | llama : support Mamba Selective State Space Models (#5328) | 2024-03-08 17:31:00 -05:00 |
| passkey | llama : fix defrag bugs + add parameter (#5735) | 2024-02-27 14:35:51 +02:00 |
| perplexity | perplexity : support using multiple sequences to allow larger batch sizes (#5946) | 2024-03-09 19:55:54 +01:00 |
| quantize | IQ4_XS: a 4.25 bpw quantization (#5747) | 2024-02-27 16:34:24 +02:00 |
| quantize-stats | refactor : switch to emplace_back to avoid extra object (#5291) | 2024-02-03 13:23:37 +02:00 |
| save-load-state | | |
| server | server : fix metrics init (#5964) | 2024-03-09 17:34:15 +02:00 |
| simple | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| speculative | fix speculative decoding build on windows (#5874) | 2024-03-04 22:23:06 -05:00 |
| sycl | Support multiple GPUs (split mode) on SYCL backend (#5806) | 2024-03-02 19:49:30 +08:00 |
| tokenize | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| train-text-from-scratch | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00 |
| alpaca.sh | | |
| base-translate.sh | | |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | gguf : add python reader example (#5216) | 2024-02-13 19:56:38 +02:00 |
| gpt4all.sh | | |
| json-schema-to-grammar.py | examples : support minItems/maxItems in JSON grammar converter (#5039) | 2024-02-19 16:14:07 +02:00 |
| llama.vim | llama.vim : added api key support (#5090) | 2024-01-23 08:51:27 +02:00 |
| llama2-13b.sh | | |
| llama2.sh | | |
| llm.vim | | |
| make-ggml.py | | |
| Miku.sh | | |
| pydantic-models-to-grammar-examples.py | examples : make pydantic scripts pass mypy and support py3.8 (#5099) | 2024-01-25 14:51:24 -05:00 |
| pydantic_models_to_grammar.py | examples : make pydantic scripts pass mypy and support py3.8 (#5099) | 2024-01-25 14:51:24 -05:00 |
| reason-act.sh | | |
| server-embd.py | server : refactor (#5882) | 2024-03-07 11:41:53 +02:00 |
| server-llama2-13B.sh | | |