llama_cpp_for_radxa_dragon_.../ggml
Rithik Sharma  434b2a1ff6  2026-04-27 15:50:59 -07:00
ggml-webgpu: add Q1_0 support (#22374)
* add fast matmul matvec q1_0 kernel
* ggml-webgpu: drop redundant zero-fills in Q1_0 shmem init
cmake           ggml: backend-agnostic tensor parallelism (experimental) (#19378)   2026-04-09 16:42:19 +02:00
include         CUDA: manage NCCL communicators in context (#21891)                 2026-04-15 15:58:40 +02:00
src             ggml-webgpu: add Q1_0 support (#22374)                               2026-04-27 15:50:59 -07:00
.gitignore
CMakeLists.txt  HIP: flip GGML_HIP_GRAPHS to default on (#22254)                     2026-04-23 02:34:31 +02:00