Directory listing: llama_cpp_for_radxa_dragon_.../common (last updated 2025-03-26 11:06:09 -07:00)
| Name | Last commit | Date |
|------|-------------|------|
| cmake/ | | |
| minja/ | sync: minja - support QwQ-32B (#12235) | 2025-03-07 09:33:37 +00:00 |
| arg.cpp | llama-tts : add '-o' option (#12398) | 2025-03-15 17:23:11 +01:00 |
| arg.h | | |
| base64.hpp | | |
| build-info.cpp.in | | |
| chat.cpp | server: extract `<think>` tags from qwq outputs (#12297) | 2025-03-10 10:59:03 +00:00 |
| chat.h | server: extract `<think>` tags from qwq outputs (#12297) | 2025-03-10 10:59:03 +00:00 |
| CMakeLists.txt | upgrade to llguidance 0.7.10 (#12576) | 2025-03-26 11:06:09 -07:00 |
| common.cpp | Load all MoE experts during warmup (#11571) | 2025-03-14 13:47:05 +01:00 |
| common.h | common : refactor '-o' option (#12278) | 2025-03-10 13:34:13 +02:00 |
| console.cpp | | |
| console.h | | |
| json-schema-to-grammar.cpp | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| json-schema-to-grammar.h | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| json.hpp | | |
| llguidance.cpp | upgrade to llguidance 0.7.10 (#12576) | 2025-03-26 11:06:09 -07:00 |
| log.cpp | Fix: Compile failure due to Microsoft STL breaking change (#11836) | 2025-02-12 21:36:11 +01:00 |
| log.h | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00 |
| ngram-cache.cpp | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00 |
| ngram-cache.h | | |
| sampling.cpp | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| sampling.h | | |
| speculative.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| speculative.h | speculative : update default params (#11954) | 2025-02-19 13:29:42 +02:00 |
| stb_image.h | | |