llama_cpp_for_radxa_dragon_.../src
momonga 9c675c7140
model : Plamo3 support (#17304)

* plamo3
* fix plamo3
* clean code
* clean up the code
* fix diff
* clean up the code
* clean up the code
* clean up the code
* clean up the code
* clean up the code
* clean up the code
* add chat_template if it exists
* clean up the code
* fix cpu-backend
* chore: whitespace trim fix + typo fix
* Fix: address review feedback
* restore `FREQ_BASE_SWA` constant
* Fix: address review feedback (2)
* Fix: typecheck
* Fix: address review feedback (3)
* final cleanup

Co-authored-by: mmngays <146910567+mmngays@users.noreply.github.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

2025-12-28 17:28:31 +01:00
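
The files touched by this commit (llama-arch.cpp, llama-arch.h, llama-model.cpp, plus the models directory and CMakeLists.txt in the table below) follow the usual llama.cpp pattern for adding an architecture: register an enum value and its architecture name string, then hook a graph builder into llama-model.cpp. Below is a minimal, self-contained sketch of the enum/name-map part of that pattern; it is illustrative only, and the `LLM_ARCH_PLAMO3` identifier and `"plamo3"` string are assumed from the project's naming convention rather than copied from the PR.

```cpp
// Illustrative sketch (not the actual llama.cpp source): new architectures
// are registered via an enum entry plus a name-map entry, as in llama-arch.cpp.
#include <cstdio>
#include <map>
#include <string>

enum llm_arch {
    LLM_ARCH_LLAMA,
    LLM_ARCH_PLAMO,
    LLM_ARCH_PLAMO2,
    LLM_ARCH_PLAMO3,   // new entry for Plamo3 (identifier assumed, not from the PR)
    LLM_ARCH_UNKNOWN,
};

// Maps each enum value to the architecture string stored in the model file,
// so the loader can resolve the architecture at load time.
static const std::map<llm_arch, const char *> LLM_ARCH_NAMES = {
    { LLM_ARCH_LLAMA,  "llama"  },
    { LLM_ARCH_PLAMO,  "plamo"  },
    { LLM_ARCH_PLAMO2, "plamo2" },
    { LLM_ARCH_PLAMO3, "plamo3" },
};

static llm_arch llm_arch_from_string(const std::string & name) {
    for (const auto & kv : LLM_ARCH_NAMES) {
        if (kv.second == name) {
            return kv.first;
        }
    }
    return LLM_ARCH_UNKNOWN;
}

int main() {
    // The loader reads the architecture string from the model file and
    // resolves the enum; llama-model.cpp then dispatches on it to build
    // the matching compute graph.
    printf("resolved arch id: %d\n", (int) llm_arch_from_string("plamo3"));
}
```

At load time the resolved string comes from the GGUF file's `general.architecture` metadata key, which the conversion script writes when the HF checkpoint is exported.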
| Name | Last commit | Date |
| --- | --- | --- |
| models | model : Plamo3 support (#17304) | 2025-12-28 17:28:31 +01:00 |
| CMakeLists.txt | model : Plamo3 support (#17304) | 2025-12-28 17:28:31 +01:00 |
| llama-adapter.cpp | | |
| llama-adapter.h | | |
| llama-arch.cpp | model : Plamo3 support (#17304) | 2025-12-28 17:28:31 +01:00 |
| llama-arch.h | model : Plamo3 support (#17304) | 2025-12-28 17:28:31 +01:00 |
| llama-batch.cpp | batch : fix sequence id ownership (#17915) | 2025-12-11 14:29:47 +02:00 |
| llama-batch.h | batch : fix sequence id ownership (#17915) | 2025-12-11 14:29:47 +02:00 |
| llama-chat.cpp | model : add openPangu-Embedded (#16941) | 2025-11-05 10:28:58 +01:00 |
| llama-chat.h | model : add openPangu-Embedded (#16941) | 2025-11-05 10:28:58 +01:00 |
| llama-context.cpp | llama: fix magic number of 999 for GPU layers (#18266) | 2025-12-27 20:18:35 +01:00 |
| llama-context.h | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| llama-cparams.cpp | | |
| llama-cparams.h | server : support unified cache across slots (#16736) | 2025-11-02 18:14:04 +02:00 |
| llama-grammar.cpp | llama : add token matching support to llama-grammar (#17816) | 2025-12-09 00:32:57 -06:00 |
| llama-grammar.h | llama : add token matching support to llama-grammar (#17816) | 2025-12-09 00:32:57 -06:00 |
| llama-graph.cpp | graph : reuse SSM graphs (#16490) | 2025-12-16 09:36:21 +02:00 |
| llama-graph.h | graph : reuse SSM graphs (#16490) | 2025-12-16 09:36:21 +02:00 |
| llama-hparams.cpp | model: support GLM4V vision encoder (#18042) | 2025-12-16 11:25:26 +01:00 |
| llama-hparams.h | model: support MiMo-V2-Flash (#18328) | 2025-12-24 23:07:08 +01:00 |
| llama-impl.cpp | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| llama-impl.h | ggml, llama : use defaulted constructors/destructors (#17649) | 2025-12-03 07:12:18 +01:00 |
| llama-io.cpp | | |
| llama-io.h | | |
| llama-kv-cache-iswa.cpp | kv-cache : pad the cache size to 256 for performance (#17046) | 2025-11-07 20:03:25 +02:00 |
| llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-kv-cache.cpp | kv-cache: Fix state restore fragmented cache (#17982) | 2025-12-15 19:28:35 +02:00 |
| llama-kv-cache.h | kv-cache: Fix state restore fragmented cache (#17982) | 2025-12-15 19:28:35 +02:00 |
| llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-memory-hybrid.cpp | graph : reuse SSM graphs (#16490) | 2025-12-16 09:36:21 +02:00 |
| llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-memory-recurrent.cpp | memory: Hybrid context shift (#17009) | 2025-11-10 17:14:23 +02:00 |
| llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory.cpp | | |
| llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-mmap.cpp | llama : Async DirectIO model loading on Linux (#18012) | 2025-12-18 08:27:19 +02:00 |
| llama-mmap.h | llama : Async DirectIO model loading on Linux (#18012) | 2025-12-18 08:27:19 +02:00 |
| llama-model-loader.cpp | model : Granite Embedding support (#15641) | 2025-12-23 00:28:19 +01:00 |
| llama-model-loader.h | model : Granite Embedding support (#15641) | 2025-12-23 00:28:19 +01:00 |
| llama-model-saver.cpp | | |
| llama-model-saver.h | | |
| llama-model.cpp | model : Plamo3 support (#17304) | 2025-12-28 17:28:31 +01:00 |
| llama-model.h | llama: fix magic number of 999 for GPU layers (#18266) | 2025-12-27 20:18:35 +01:00 |
| llama-quant.cpp | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| llama-quant.h | | |
| llama-sampling.cpp | common : restore grammar-based rejection sampling (#18137) | 2025-12-17 19:46:00 +02:00 |
| llama-sampling.h | | |
| llama-vocab.cpp | model : Granite Embedding support (#15641) | 2025-12-23 00:28:19 +01:00 |
| llama-vocab.h | model : add AfmoeForCausalLM support (#16477) | 2025-11-14 13:54:10 +01:00 |
| llama.cpp | llama-fit-params: fix step size for last device (#18415) | 2025-12-28 10:52:09 +01:00 |
| unicode-data.cpp | | |
| unicode-data.h | | |
| unicode.cpp | fix: prevent segfault in tokenizer on highly repetitive input (#17786) | 2025-12-05 13:52:23 +02:00 |
| unicode.h | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |
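
Everything in this directory builds into the core library behind the public `llama.h` header, so a consumer never touches these files directly. As a rough orientation for how the pieces above surface through the C API, here is a minimal load-and-context sketch; it assumes the current function names (`llama_model_load_from_file`, `llama_init_from_model`), while older releases used `llama_load_model_from_file` and `llama_new_context_with_model`.

```cpp
// Minimal consumer sketch against the public llama.h API (current names assumed).
#include "llama.h"

#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model.gguf>\n", argv[0]);
        return 1;
    }

    llama_backend_init();

    // llama-model-loader.cpp reads the GGUF metadata; llama-arch.cpp resolves
    // the architecture string (e.g. "plamo3") before tensors are mapped.
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_model_load_from_file(argv[1], mparams);
    if (model == nullptr) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // llama-context.cpp owns the per-session state built on top of the model;
    // the llama-kv-cache*.cpp files back its attention memory.
    llama_context_params cparams = llama_context_default_params();
    llama_context * ctx = llama_init_from_model(model, cparams);

    fprintf(stderr, "model and context ready\n");

    llama_free(ctx);
    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```

From there, batching (llama-batch.cpp) and sampling (llama-sampling.cpp) come into play through `llama_decode` and the sampler API.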