llama_cpp_for_radxa_dragon_.../common    (last updated 2025-09-19 13:02:51 +07:00)
File                          Last commit                                                                        Date
arg.cpp                       Add resumable downloads for llama-server model loading (#15963)                    2025-09-18 16:22:50 +01:00
arg.h
base64.hpp
build-info.cpp.in
chat-parser.cpp               chat : support Granite model reasoning and tool call (#14864)                      2025-08-06 20:27:30 +02:00
chat-parser.h
chat.cpp                      chat : fix build on arm64 (#16101)                                                 2025-09-19 13:02:51 +07:00
chat.h                        chat : Deepseek V3.1 reasoning and tool calling support (OpenAI Style) (#15533)    2025-09-08 16:59:48 +02:00
CMakeLists.txt                cmake : do not search for curl libraries by ourselves (#14613)                     2025-07-10 15:29:05 +03:00
common.cpp                    llama: use FA + max. GPU layers by default (#15434)                                2025-08-30 16:32:10 +02:00
common.h                      llama-bench: add --n-cpu-moe support (#15952)                                      2025-09-16 16:17:08 +02:00
console.cpp
console.h
json-partial.cpp
json-partial.h
json-schema-to-grammar.cpp    common : Fix corrupted memory error on json grammar initialization (#16038)        2025-09-17 11:08:02 +03:00
json-schema-to-grammar.h
llguidance.cpp
log.cpp                       Implement --log-colors with always/never/auto (#15792)                             2025-09-05 19:43:59 +01:00
log.h                         Implement --log-colors with always/never/auto (#15792)                             2025-09-05 19:43:59 +01:00
ngram-cache.cpp
ngram-cache.h
regex-partial.cpp
regex-partial.h
sampling.cpp                  sampling : optimize samplers by reusing bucket sort (#15665)                       2025-08-31 20:41:02 +03:00
sampling.h                    sampling : optimize samplers by reusing bucket sort (#15665)                       2025-08-31 20:41:02 +03:00
speculative.cpp               sampling : optimize samplers by reusing bucket sort (#15665)                       2025-08-31 20:41:02 +03:00
speculative.h                 server : implement universal assisted decoding (#12635)                            2025-07-31 14:25:23 +02:00