pingu_98 / llama_cpp_for_radxa_dragon_wing_q6a
Directory listing: ggml (at commit 3cd3a39532)
Latest commit: 2b65ae3029 by lhez: opencl: simplify kernel embedding logic in cmakefile (#12503), 2025-03-24 09:20:47 -07:00
Co-authored-by: Max Krasnyansky <quic_maxk@quicinc.com>
cmake          | cmake : enable building llama.cpp using system libggml (#12321)                        | 2025-03-17 11:05:23 +02:00
include        | llama: Add support for RWKV v7 architecture (#12412)                                   | 2025-03-18 07:27:50 +08:00
src            | opencl: simplify kernel embedding logic in cmakefile (#12503)                          | 2025-03-24 09:20:47 -07:00
.gitignore     |                                                                                        |
CMakeLists.txt | SYCL: using graphs is configurable by environment variable and compile option (#12371) | 2025-03-18 11:16:31 +01:00