pingu_98/llama_cpp_for_radxa_dragon_wing_q6a
ggml/ (tree at commit ce18efeaf1)
Latest commit: 517b7170e1 by Max Krasnyansky (2025-10-30 09:06:13 -07:00)
cpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64 (#16833)
Very similar implementation to the flash-attention chunking, with similar benefits.
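As an aside on the commit message above: chunking in this context generally means splitting the matmul's output rows into fixed-size blocks that worker threads claim from a shared counter, so faster threads absorb more of the work. The following is a minimal, self-contained C++ sketch of that pattern only; it is not the actual ggml implementation, and names such as matmul_chunked are hypothetical.

```cpp
// Illustrative only: chunked work distribution for a threaded matmul.
// Output rows are grouped into chunks; threads fetch the next chunk index
// from an atomic counter until all chunks have been processed.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

static void matmul_chunked(int nrows, int chunk_size, int nthreads) {
    std::atomic<int> next_chunk{0};
    const int nchunks = (nrows + chunk_size - 1) / chunk_size;

    auto worker = [&](int tid) {
        for (;;) {
            const int chunk = next_chunk.fetch_add(1);
            if (chunk >= nchunks) {
                break;
            }
            const int row0 = chunk * chunk_size;
            const int row1 = std::min(row0 + chunk_size, nrows);
            // A real kernel would compute output rows [row0, row1) here.
            std::printf("thread %d: rows [%d, %d)\n", tid, row0, row1);
        }
    };

    std::vector<std::thread> threads;
    for (int t = 0; t < nthreads; ++t) {
        threads.emplace_back(worker, t);
    }
    for (auto & th : threads) {
        th.join();
    }
}

int main() {
    matmul_chunked(/*nrows=*/256, /*chunk_size=*/32, /*nthreads=*/4);
    return 0;
}
```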
Name            Date                        Last commit
cmake           2025-08-07 13:45:41 +02:00  ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)
include         2025-10-30 16:19:14 +01:00  model: add support for qwen3vl series (#16780)
src             2025-10-30 09:06:13 -07:00  cpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64 (#16833)
.gitignore      2024-07-13 18:12:39 +02:00  vulkan : cmake integration (#8119)
CMakeLists.txt  2025-10-22 13:47:09 -07:00  Add experimental ggml-hexagon backend for the Hexagon NPU (#16547)