Directory listing: llama_cpp_for_radxa_dragon_.../ggml
Last updated: 2026-04-30 13:04:50 +02:00
Name            Last commit                                                          Date
cmake           ggml: backend-agnostic tensor parallelism (experimental) (#19378)   2026-04-09 16:42:19 +02:00
include         CUDA: manage NCCL communicators in context (#21891)                 2026-04-15 15:58:40 +02:00
src             CUDA: fix tile FA kernel on Pascal (#22541)                         2026-04-30 13:04:50 +02:00
.gitignore
CMakeLists.txt  ggml : bump version to 0.10.1 (ggml/1469)                            2026-04-29 16:43:47 +03:00