llama_cpp_for_radxa_dragon_.../src
Johannes Gäßler d6f3030047
ggml: backend-agnostic tensor parallelism (experimental) (#19378)
* ggml: backend-agnostic tensor parallelism

* support for GPT-OSS, Qwen 3 MoE

* partial Vulkan fix

* add support for 4/8 GPUs

* unconditional peer access

* re-use buffers + ggml contexts

* fix output pattern

* NCCL support

* GGML: HIP: add RCCL support

* Remove shfl and AllReduce from backend interface
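In the tensor parallelism this PR implements, each device holds a slice of a weight and computes a partial result of the layer; an AllReduce (elementwise sum across devices) then reconstructs the full activation, which is what the NCCL/RCCL bullets refer to. A minimal sketch of that reduction, with plain lists standing in for device buffers (all names hypothetical):

```python
# Toy AllReduce (sum) over per-device partial results.
# In the real backend this is performed by NCCL/RCCL on the GPUs;
# here each inner list stands in for one device's buffer.

def allreduce_sum(shards: list[list[float]]) -> list[list[float]]:
    """Every device ends up with the elementwise sum of all shards."""
    total = [sum(vals) for vals in zip(*shards)]
    return [list(total) for _ in shards]

# Two devices each hold a partial matmul result for the same output row:
dev0 = [1.0, 2.0, 3.0]
dev1 = [0.5, 0.5, 0.5]
out = allreduce_sum([dev0, dev1])
assert out[0] == out[1] == [1.5, 2.5, 3.5]
```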

* move allocation workaround out of ggml-alloc.c

* 2d tensor set/get support

* Fix the segfault when building without NCCL

* Apply suggestion from JohannesGaessler

* support for tensor dims % n_devs != 0
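Supporting dimensions that are not a multiple of the device count means some devices must receive one extra granularity unit. A hedged sketch of one way such a split can be computed (the helper name and remainder policy are illustrative, not the PR's actual code):

```python
# Hypothetical sketch: split a tensor dimension across n_devs devices
# in whole granularity units, giving the first devices one extra unit
# each until the remainder is used up.

def split_dim(dim: int, n_devs: int, granularity: int) -> list[int]:
    assert dim % granularity == 0, "dim must be a whole number of units"
    units = dim // granularity
    base, extra = divmod(units, n_devs)
    return [(base + (1 if i < extra else 0)) * granularity
            for i in range(n_devs)]

assert split_dim(768, 3, 256) == [256, 256, 256]  # even split
assert split_dim(768, 2, 256) == [512, 256]       # uneven split supported
assert sum(split_dim(4096, 3, 32)) == 4096        # shares always cover dim
```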

* fix view_offs scaling

* arbitrary num. of GPUs/tensor split

* fix compilation

* better granularity estimate

* Support device-specific host buffer types if all underlying backends expose the same type. This allows using pinned memory instead of pageable memory for CUDA.

Fix compilation errors.

* partial Qwen 3 Next support

* Fix qwen3 30b (#8)

* Fix crash with Qwen-30B-A3B Q4_0

Qwen-30B-A3B Q4_0 has an intermediate dimension of 768. Using a granularity of 256 forces an uneven split between GPUs, which is not supported by the current implementation.

* Decide block size based on tensor quantization type
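The two fixes above can be illustrated with a little arithmetic. A quantized tensor can only be split at quantization-block boundaries (Q4_0 blocks hold 32 elements in ggml), so the split granularity must match the tensor's quantization type; a coarser granularity of 256 cannot divide Qwen-30B-A3B's intermediate dimension of 768 evenly between two GPUs. A sketch, with the helper name hypothetical:

```python
# Why the granularity must follow the quantization type:
# a dimension splits evenly only if its granularity units divide
# evenly among the devices.

QK4_0 = 32  # elements per Q4_0 quantization block in ggml

def can_split_evenly(dim: int, granularity: int, n_devs: int) -> bool:
    """True if dim splits into equal per-device shares that are each
    a whole number of granularity units."""
    if dim % granularity != 0:
        return False
    return (dim // granularity) % n_devs == 0

# Qwen-30B-A3B Q4_0, intermediate dimension 768:
assert not can_split_evenly(768, 256, 2)  # 3 units of 256 -> uneven on 2 GPUs
assert can_split_evenly(768, QK4_0, 2)    # 24 units of 32 -> 12 per GPU
```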

* Fix crashes due to KV cache serialization (#9)

KV cache serialization requires non-zero offsets on the tensor. Add support in the meta backend to set/get a tensor with a non-zero offset.
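What "set/get with a non-zero offset" means here: ggml's backend tensor set/get API takes a byte offset and size, and the KV cache serializer reads and writes sub-ranges of a tensor's buffer rather than always starting at byte 0. A toy illustration of the required semantics (the class and method names are hypothetical, not ggml's API):

```python
# Toy buffer illustrating offset-aware tensor set/get: the backend
# must honor an arbitrary byte offset instead of assuming offset == 0.

class ToyBuffer:
    def __init__(self, nbytes: int):
        self.data = bytearray(nbytes)

    def tensor_set(self, data: bytes, offset: int) -> None:
        assert offset + len(data) <= len(self.data), "write out of range"
        self.data[offset:offset + len(data)] = data

    def tensor_get(self, offset: int, size: int) -> bytes:
        assert offset + size <= len(self.data), "read out of range"
        return bytes(self.data[offset:offset + size])

buf = ToyBuffer(16)
buf.tensor_set(b"\x01\x02\x03\x04", offset=8)       # non-zero offset
assert buf.tensor_get(8, 4) == b"\x01\x02\x03\x04"  # round-trips
assert buf.tensor_get(0, 4) == b"\x00\x00\x00\x00"  # rest untouched
```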

* metal : fix build (#7)

* static memory allocations, fix usage count

* fix tensor granularity

* more even memory distribution

* use BF16 for allreduce

* rebase fixup

* better error message for unsupported architectures

* Fix device mismatch during scatter of allReduce. (#11)

There was a mismatch between the dst buffer device and the backend device, causing the use of synchronous copies.

* Re-enable the previous allreduce implementation; it is better in both performance and stability (#12)

* delay AllReduce for MoE for less I/O

* build : clean-up compile warnings

* backend : move most of the meta backend API to ggml-backend-impl.h

* cont : hide unused public API in the implementation

* llama : use llama_device + remove ggml_backend_dev_is_meta()

* ggml-backend : remove unused alloc include

* minor : remove regex include

* ggml : introduce ggml-ext.h for staging new APIs

* rebase fixup

* fix tests

* llama : more robust logic for determining Meta devices (#16)

* llama : more robust logic for determining Meta devices

* cont : fix devs size check

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* cont : fix log type

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* disable roundtrip for meta backend

* fix arch selection

* Qwen 3.5 support

* fix Gemma 4 MoE

* fix OpenVINO, SYCL

* fix test-llama-archs for CPU-only builds

* Fix Qwen 3.5 MoE

* disable meta backend tests for WebGPU

* tests : filter CPU-based devices from the Meta backend tests (#17)

* meta : formatting, naming, indentation (#18)

* formatting : llama-model.cpp

* formatting : ggml-ext.h

* formatting : ggml-backend-meta.cpp

* meta : add TODO

* add documentation

* better error messages

* fix GPT-OSS

---------

Co-authored-by: Carl Philipp Klemm <carl@uvos.xyz>
Co-authored-by: Gaurav Garg <gaugarg@nvidia.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-04-09 16:42:19 +02:00
models ggml: backend-agnostic tensor parallelism (experimental) (#19378) 2026-04-09 16:42:19 +02:00
CMakeLists.txt model, mtmd: fix gguf conversion for audio/vision mmproj (#21309) 2026-04-02 17:10:32 +02:00
llama-adapter.cpp fix: correct misspellings in code comments (#21217) 2026-03-31 13:50:51 +02:00
llama-adapter.h llama : re-enable manual LoRA adapter free (#19983) 2026-03-18 12:03:26 +02:00
llama-arch.cpp ggml: backend-agnostic tensor parallelism (experimental) (#19378) 2026-04-09 16:42:19 +02:00
llama-arch.h ggml: backend-agnostic tensor parallelism (experimental) (#19378) 2026-04-09 16:42:19 +02:00
llama-batch.cpp kv-cache : fix M-RoPE checkpoints (#20132) 2026-03-06 08:46:51 +02:00
llama-batch.h fix: correct misspellings in code comments (#21217) 2026-03-31 13:50:51 +02:00
llama-chat.cpp model : add HunyuanOCR support (#21395) 2026-04-05 23:32:14 +02:00
llama-chat.h model : add HunyuanOCR support (#21395) 2026-04-05 23:32:14 +02:00
llama-context.cpp ggml: backend-agnostic tensor parallelism (experimental) (#19378) 2026-04-09 16:42:19 +02:00
llama-context.h graph : fix KQ mask, lora, cvec reuse checks (#19644) 2026-02-16 09:21:11 +02:00
llama-cparams.cpp
llama-cparams.h llama : enable chunked fused GDN path (#20340) 2026-03-11 22:46:40 +02:00
llama-ext.h tests : add unit test coverage for llama_tensor_get_type (#20112) 2026-04-02 22:53:58 +02:00
llama-grammar.cpp common/grammar: fix grammar parsing issues to prevent stack overflow and hangs (#18604) 2026-03-21 18:43:35 +01:00
llama-grammar.h
llama-graph.cpp ggml: backend-agnostic tensor parallelism (experimental) (#19378) 2026-04-09 16:42:19 +02:00
llama-graph.h kv-cache : support attention rotation for heterogeneous iSWA (#21513) 2026-04-07 20:31:28 +03:00
llama-hparams.cpp llama: dynamic head_dim and n_rot for SWA (#20301) 2026-03-09 22:22:39 +01:00
llama-hparams.h model, mtmd: fix gguf conversion for audio/vision mmproj (#21309) 2026-04-02 17:10:32 +02:00
llama-impl.cpp llama : correct platform-independent loading of BOOL metadata (#21428) 2026-04-06 01:40:38 +02:00
llama-impl.h llama : enable chunked fused GDN path (#20340) 2026-03-11 22:46:40 +02:00
llama-io.cpp
llama-io.h
llama-kv-cache-iswa.cpp (revert) kv-cache : do not quantize SWA KV cache (#21332) 2026-04-03 09:07:01 +03:00
llama-kv-cache-iswa.h
llama-kv-cache.cpp kv-cache : support attention rotation for heterogeneous iSWA (#21513) 2026-04-07 20:31:28 +03:00
llama-kv-cache.h kv-cache : support attention rotation for heterogeneous iSWA (#21513) 2026-04-07 20:31:28 +03:00
llama-kv-cells.h
llama-memory-hybrid-iswa.cpp memory: respect unified KV cache in hybrid memory for eval tasks (#21224) 2026-04-01 12:50:17 +03:00
llama-memory-hybrid-iswa.h
llama-memory-hybrid.cpp memory: respect unified KV cache in hybrid memory for eval tasks (#21224) 2026-04-01 12:50:17 +03:00
llama-memory-hybrid.h
llama-memory-recurrent.cpp ggml: backend-agnostic tensor parallelism (experimental) (#19378) 2026-04-09 16:42:19 +02:00
llama-memory-recurrent.h
llama-memory.cpp
llama-memory.h
llama-mmap.cpp llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-mmap.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model-loader.cpp ggml: add Q1_0 1-bit quantization support (CPU) (#21273) 2026-04-06 20:55:21 +02:00
llama-model-loader.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model-saver.cpp llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model-saver.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model.cpp ggml: backend-agnostic tensor parallelism (experimental) (#19378) 2026-04-09 16:42:19 +02:00
llama-model.h ggml: backend-agnostic tensor parallelism (experimental) (#19378) 2026-04-09 16:42:19 +02:00
llama-quant.cpp ggml: add Q1_0 1-bit quantization support (CPU) (#21273) 2026-04-06 20:55:21 +02:00
llama-quant.h
llama-sampler.cpp
llama-sampler.h
llama-vocab.cpp vocab: add gemma4 tokenizer tests, fix edge case (#21534) 2026-04-09 11:41:14 +02:00
llama-vocab.h vocab: fix Gemma4 tokenizer (#21343) 2026-04-03 10:33:03 +02:00
llama.cpp ggml: backend-agnostic tensor parallelism (experimental) (#19378) 2026-04-09 16:42:19 +02:00
unicode-data.cpp
unicode-data.h
unicode.cpp unicode : add custom Qwen2 regex handler to fix segfault on long input (#21257) 2026-04-07 16:13:38 +03:00
unicode.h vocab: fix Gemma4 tokenizer (#21343) 2026-04-03 10:33:03 +02:00