llama_cpp_for_radxa_dragon_.../examples
Latest commit 2bf8d0f7c4 by slaren: backend : offload large batches to GPU (#6083)
* backend : offload large batches to GPU

* fix hip

* code cleanup

* fix CUDA split buffers

* Update ggml-backend-impl.h

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* cuda : fix memset without set_device

* imatrix : remove sched affix from weight names

* sched : add a new split if the current one has too many inputs
  reduce max inputs per split
  more cleanup

* update backends

ggml-ci

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Committed 2024-03-18 11:03:04 +01:00
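
The headline change above routes work to the GPU only when a batch is large enough to amortize the host-to-device transfer; small batches stay on the CPU. Below is a minimal sketch of that kind of gate. `Backend`, `choose_backend`, `GPU_OFFLOAD_MIN_BATCH`, and the threshold value are all hypothetical illustrations, not the actual ggml-backend API:

```cpp
// Minimal sketch of a batch-size offload gate.
// Everything here is a hypothetical illustration, not llama.cpp code.
#include <cstdio>

enum class Backend { CPU, GPU };

// Hypothetical threshold: below this many tokens, copying tensors to
// VRAM is assumed to cost more than the GPU compute saves.
constexpr int GPU_OFFLOAD_MIN_BATCH = 32;

static Backend choose_backend(int n_tokens) {
    return n_tokens >= GPU_OFFLOAD_MIN_BATCH ? Backend::GPU : Backend::CPU;
}

int main() {
    for (int n_tokens : {1, 8, 32, 512}) {
        std::printf("batch of %4d tokens -> %s\n", n_tokens,
                    choose_backend(n_tokens) == Backend::GPU ? "GPU" : "CPU");
    }
}
```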
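The commit also adjusts the scheduler to open a new split once the current one accumulates too many inputs. A minimal sketch of that policy follows, assuming a hypothetical `MAX_SPLIT_INPUTS` cap and a flat list of per-node input counts; the real scheduler partitions a ggml compute graph and tracks far more state:

```cpp
// Minimal sketch: close a split once it would exceed the input cap.
// MAX_SPLIT_INPUTS and the int-based representation are hypothetical.
#include <cstdio>
#include <vector>

constexpr int MAX_SPLIT_INPUTS = 4; // hypothetical per-split cap

static std::vector<std::vector<int>> make_splits(const std::vector<int> & node_inputs) {
    std::vector<std::vector<int>> splits(1);
    int inputs_in_split = 0;
    for (int n : node_inputs) {
        // Adding this node would overflow the cap: start a new split,
        // but never leave the current split empty.
        if (inputs_in_split + n > MAX_SPLIT_INPUTS && !splits.back().empty()) {
            splits.emplace_back();
            inputs_in_split = 0;
        }
        splits.back().push_back(n);
        inputs_in_split += n;
    }
    return splits;
}

int main() {
    const auto splits = make_splits({3, 1, 2, 4, 1, 1});
    std::printf("graph partitioned into %zu splits\n", splits.size());
}
```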
Directories:

baby-llama                code : normalize enum names (#5697)  2024-02-25 12:09:09 +02:00
batched                   llama : more consistent names of count variables (#5994)  2024-03-11 17:49:47 +02:00
batched-bench             llama : add pipeline parallelism support (#6017)  2024-03-13 18:54:21 +01:00
batched.swift             ggml : add numa options (#5377)  2024-02-16 11:31:07 +02:00
beam-search               ggml : add numa options (#5377)  2024-02-16 11:31:07 +02:00
benchmark                 ggml : remove old quantization functions (#5942)  2024-03-09 15:53:59 +02:00
convert-llama2c-to-ggml   ggml, common, examples, tests : fixed type arguments in printf (#5528)  2024-02-18 18:20:12 +02:00
embedding                 embedding : add EOS token if not present (#899)  2024-03-14 15:14:14 +02:00
export-lora               ci : add an option to fail on compile warning (#3952)  2024-02-17 23:03:14 +02:00
finetune                  code : normalize enum names (#5697)  2024-02-25 12:09:09 +02:00
gguf                      gguf : fix resource leaks (#6061)  2024-03-14 20:29:32 +02:00
gritlm                    gritlm : add initial README.md (#6086)  2024-03-16 17:46:29 +02:00
imatrix                   backend : offload large batches to GPU (#6083)  2024-03-18 11:03:04 +01:00
infill                    convert : automatically fall back to HfVocab if tokenizer.model doesn't exist (#5821)  2024-03-02 12:27:26 -05:00
jeopardy
llama-bench               backend : offload large batches to GPU (#6083)  2024-03-18 11:03:04 +01:00
llama.android             android : fix utf8 decoding error (#5935)  2024-03-10 22:03:17 +02:00
llama.swiftui             llama : add pipeline parallelism support (#6017)  2024-03-13 18:54:21 +01:00
llava                     llava : change API to pure C style for Rust FFI bindgen (#6079)  2024-03-15 16:31:05 +02:00
lookahead                 ggml : add numa options (#5377)  2024-02-16 11:31:07 +02:00
lookup                    ggml : add numa options (#5377)  2024-02-16 11:31:07 +02:00
main                      common: llama_load_model_from_url using --model-url (#6098)  2024-03-17 19:12:37 +01:00
main-cmake-pkg
parallel                  llama : support Mamba Selective State Space Models (#5328)  2024-03-08 17:31:00 -05:00
passkey                   llama : fix defrag bugs + add parameter (#5735)  2024-02-27 14:35:51 +02:00
perplexity                llama : add pipeline parallelism support (#6017)  2024-03-13 18:54:21 +01:00
quantize                  IQ4_XS: a 4.25 bpw quantization (#5747)  2024-02-27 16:34:24 +02:00
quantize-stats
save-load-state
server                    common: llama_load_model_from_url using --model-url (#6098)  2024-03-17 19:12:37 +01:00
simple                    ggml : add numa options (#5377)  2024-02-16 11:31:07 +02:00
speculative               fix speculative decoding build on windows (#5874)  2024-03-04 22:23:06 -05:00
sycl                      fix set main gpu error (#6073)  2024-03-15 18:53:53 +08:00
tokenize                  ggml : add numa options (#5377)  2024-02-16 11:31:07 +02:00
train-text-from-scratch   gguf : fix resource leaks (#6061)  2024-03-14 20:29:32 +02:00

Files:

alpaca.sh
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt              llama : add support for GritLM (#5959)  2024-03-10 17:56:30 +02:00
gpt4all.sh
json-schema-to-grammar.py   examples : support minItems/maxItems in JSON grammar converter (#5039)  2024-02-19 16:14:07 +02:00
llama.vim
llama2-13b.sh
llama2.sh
llm.vim
make-ggml.py
Miku.sh
pydantic-models-to-grammar-examples.py
pydantic_models_to_grammar.py
reason-act.sh
server-embd.py              server : refactor (#5882)  2024-03-07 11:41:53 +02:00
server-llama2-13B.sh