llama_cpp_for_radxa_dragon_.../common
Latest commit: bd3f59f812 by Xuan-Son Nguyen — cmake : enable curl by default (#12761)
* cmake : enable curl by default
* no curl if no examples
* fix build
* fix build-linux-cross
* add windows-setup-curl
* fix
* shell
* fix path
* fix windows-latest-cmake*
* run: include_directories
* LLAMA_RUN_EXTRA_LIBS
* sycl: no llama_curl
* no test-arg-parser on windows
* clarification
* try riscv64 / arm64
* windows: include libcurl inside release binary
* add msg
* fix mac / ios / android build
* will this fix xcode?
* try clearing the cache
* add bunch of licenses
* revert clear cache
* fix xcode
* fix xcode (2)
* fix typo

2025-04-07 13:35:19 +02:00
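The commit above makes libcurl a default dependency of the CMake build, so configuring without the curl development package installed will now fail unless the dependency is switched off. A minimal sketch of the two configurations, assuming the cache option is named `LLAMA_CURL` as the PR title suggests:

```shell
# Default configure: libcurl is now enabled, so the curl
# development headers must be present on the system.
cmake -B build

# Opt out of the curl dependency (assumed option name: LLAMA_CURL).
cmake -B build -DLLAMA_CURL=OFF

# Build as usual.
cmake --build build --config Release
```

This is a build-configuration sketch, not a verbatim reproduction of the PR's CI recipes; the exact flags used by the workflows are in the commit itself.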
Name                         Last commit                                                                                                        Date
cmake                        minja sync: minja (#12739)                                                                                         2025-04-04 21:16:39 +01:00
arg.cpp                      common : fix includes in arg.cpp and gemma3-cli.cpp (#12766)                                                       2025-04-05 17:46:00 +02:00
arg.h
base64.hpp
build-info.cpp.in
chat.cpp                     server: extract <think> tags from qwq outputs (#12297)                                                             2025-03-10 10:59:03 +00:00
chat.h                       server: extract <think> tags from qwq outputs (#12297)                                                             2025-03-10 10:59:03 +00:00
CMakeLists.txt               cmake : enable curl by default (#12761)                                                                            2025-04-07 13:35:19 +02:00
common.cpp                   llama : add option to override model tensor buffers (#11397)                                                       2025-04-02 14:52:01 +02:00
common.h                     llama : add option to override model tensor buffers (#11397)                                                       2025-04-02 14:52:01 +02:00
console.cpp
console.h
json-schema-to-grammar.cpp   tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034)   2025-03-05 13:05:13 +00:00
json-schema-to-grammar.h     tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034)   2025-03-05 13:05:13 +00:00
json.hpp
llguidance.cpp               upgrade to llguidance 0.7.10 (#12576)                                                                              2025-03-26 11:06:09 -07:00
log.cpp                      Fix: Compile failure due to Microsoft STL breaking change (#11836)                                                 2025-02-12 21:36:11 +01:00
log.h                        cleanup: fix compile warnings associated with gnu_printf (#11811)                                                  2025-02-12 10:06:53 -04:00
ngram-cache.cpp              ggml : portability fixes for VS 2017 (#12150)                                                                      2025-03-04 18:53:26 +02:00
ngram-cache.h
sampling.cpp                 llama: fix error on bad grammar (#12628)                                                                           2025-03-28 18:08:52 +01:00
sampling.h
speculative.cpp              llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181)                                         2025-03-13 12:35:44 +02:00
speculative.h                speculative : update default params (#11954)                                                                       2025-02-19 13:29:42 +02:00
stb_image.h