llama_cpp_for_radxa_dragon_.../common
Last commit: 2025-02-12 10:06:53 -04:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| cmake | | |
| arg.cpp | fix: typos in documentation files (#11791) | 2025-02-10 23:21:31 +01:00 |
| arg.h | arg : option to exclude arguments from specific examples (#11136) | 2025-01-08 12:55:36 +02:00 |
| base64.hpp | | |
| build-info.cpp.in | | |
| chat-template.hpp | sync: minja (a72057e519) (#11774) | 2025-02-10 09:34:09 +00:00 |
| chat.cpp | sync: minja (#11641) | 2025-02-05 01:00:12 +00:00 |
| chat.hpp | tool-call: support Command R7B (+ return tool_plan "thoughts" in API) (#11585) | 2025-02-02 09:25:38 +00:00 |
| CMakeLists.txt | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| common.cpp | sync: minja (#11641) | 2025-02-05 01:00:12 +00:00 |
| common.h | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00 |
| console.cpp | console : utf-8 fix for windows stdin (#9690) | 2024-09-30 11:23:42 +03:00 |
| console.h | | |
| json-schema-to-grammar.cpp | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| json-schema-to-grammar.h | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| json.hpp | | |
| llguidance.cpp | llama : add llama_sampler_init for safe usage of llama_sampler_free (#11727) | 2025-02-07 11:33:27 +02:00 |
| log.cpp | Name colors (#11573) | 2025-02-02 15:14:48 +00:00 |
| log.h | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00 |
| minja.hpp | sync: minja (a72057e519) (#11774) | 2025-02-10 09:34:09 +00:00 |
| ngram-cache.cpp | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00 |
| ngram-cache.h | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00 |
| sampling.cpp | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| sampling.h | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| speculative.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| speculative.h | fix: typos in documentation files (#11791) | 2025-02-10 23:21:31 +01:00 |
| stb_image.h | | |