llama_cpp_for_radxa_dragon_.../examples
Neo Zhang Jianyu 715641391d
Support multiple GPUs (split mode) on SYCL backend (#5806)
* support multiple cards: split-mode - layer|row

* remove warning

* rebase with master, support two new OPs, close feature for -sm=row, fix for unit test

* update news

* fix merge error

* update according to review comments
2024-03-02 19:49:30 +08:00
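The split-mode feature introduced by this commit is selected at run time via the `-sm` / `--split-mode` flag. A minimal sketch of an invocation, assuming a typical llama.cpp build under `build/bin` and a locally available GGUF model (the model path and layer count are placeholders, not part of the commit):

```shell
# Hedged sketch: running the main example with the multi-GPU split mode from #5806.
# -sm / --split-mode accepts "none", "layer" (distribute whole layers across GPUs),
# or "row" (split individual tensors by rows); per the commit notes, -sm=row is
# gated off on the SYCL backend at this point.
./build/bin/main -m ./models/model.gguf -p "Hello" -ngl 33 -sm layer
```

`-ngl` controls how many layers are offloaded to the GPUs; with `-sm layer`, those offloaded layers are partitioned across the available devices.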
baby-llama code : normalize enum names (#5697) 2024-02-25 12:09:09 +02:00
batched ggml, common, examples, tests : fixed type arguments in printf (#5528) 2024-02-18 18:20:12 +02:00
batched-bench llama : cleanup unused mmq flags (#5772) 2024-03-01 13:39:06 +02:00
batched.swift ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
beam-search ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
benchmark
convert-llama2c-to-ggml ggml, common, examples, tests : fixed type arguments in printf (#5528) 2024-02-18 18:20:12 +02:00
embedding ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
export-lora ci : add an option to fail on compile warning (#3952) 2024-02-17 23:03:14 +02:00
finetune code : normalize enum names (#5697) 2024-02-25 12:09:09 +02:00
gguf
imatrix ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
infill llama : refactor k-shift implementation + KV defragmentation (#5691) 2024-02-25 22:12:24 +02:00
jeopardy
llama-bench Support multiple GPUs (split mode) on SYCL backend (#5806) 2024-03-02 19:49:30 +08:00
llama.android ggml-quants : provide ggml_vqtbl1q_u8 for 64bit compatibility (#5711) 2024-02-25 20:43:00 +02:00
llama.swiftui ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
llava code : normalize enum names (#5697) 2024-02-25 12:09:09 +02:00
lookahead ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
lookup ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
main llama : refactor k-shift implementation + KV defragmentation (#5691) 2024-02-25 22:12:24 +02:00
main-cmake-pkg
parallel ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
passkey llama : fix defrag bugs + add parameter (#5735) 2024-02-27 14:35:51 +02:00
perplexity ci : fix wikitext url + compile warnings (#5569) 2024-02-18 22:39:30 +02:00
quantize IQ4_XS: a 4.25 bpw quantization (#5747) 2024-02-27 16:34:24 +02:00
quantize-stats refactor : switch to emplace_back to avoid extra object (#5291) 2024-02-03 13:23:37 +02:00
save-load-state
server server : remove api_like_OAI.py proxy script (#5808) 2024-03-01 20:00:58 +02:00
simple ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
speculative ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
sycl Support multiple GPUs (split mode) on SYCL backend (#5806) 2024-03-02 19:49:30 +08:00
tokenize ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
train-text-from-scratch code : normalize enum names (#5697) 2024-02-25 12:09:09 +02:00
alpaca.sh
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt gguf : add python reader example (#5216) 2024-02-13 19:56:38 +02:00
gpt4all.sh
json-schema-to-grammar.py examples : support minItems/maxItems in JSON grammar converter (#5039) 2024-02-19 16:14:07 +02:00
llama.vim llama.vim : added api key support (#5090) 2024-01-23 08:51:27 +02:00
llama2-13b.sh
llama2.sh
llm.vim
make-ggml.py
Miku.sh
pydantic-models-to-grammar-examples.py examples : make pydantic scripts pass mypy and support py3.8 (#5099) 2024-01-25 14:51:24 -05:00
pydantic_models_to_grammar.py examples : make pydantic scripts pass mypy and support py3.8 (#5099) 2024-01-25 14:51:24 -05:00
reason-act.sh
server-llama2-13B.sh