llama_cpp_for_radxa_dragon_.../src
Tarek Dakhran 3a59971967
model : add label for LiquidAI LFM2-2.6B model (#16204)
* model : add label for LiquidAI LFM2-2.6B model

HF link: [LiquidAI/LFM2-2.6B](https://huggingface.co/LiquidAI/LFM2-2.6B).

Support for GGUF conversion and inference is added in #14620.

However, because its `n_embd` matches that of the 1.2B model, it is mislabeled as 1.2B.
Fix the label by using `n_ff` to identify the model instead.
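
The idea is to key the size label off the feed-forward width rather than the embedding width. A minimal, self-contained sketch of that pattern follows; the enum, helper name, and the specific `n_ff` case values are illustrative placeholders, not the actual constants in `src/llama-model.cpp`:

```cpp
// Sketch only (placeholder values, not the real llama-model.cpp code):
// the 1.2B and 2.6B LFM2 checkpoints share the same n_embd, so the size
// label is derived from n_ff, which does differ between the two.
#include <cstdint>
#include <cstdio>

enum class lfm2_size { S_350M, S_700M, S_1_2B, S_2_6B, UNKNOWN };

// Hypothetical helper mirroring a switch-on-hparams lookup; the case
// values below are placeholders, not the models' real n_ff values.
static lfm2_size lfm2_size_from_n_ff(uint32_t n_ff) {
    switch (n_ff) {
        case  4096: return lfm2_size::S_350M;
        case  6144: return lfm2_size::S_700M;
        case  8192: return lfm2_size::S_1_2B;
        case 12288: return lfm2_size::S_2_6B;
        default:    return lfm2_size::UNKNOWN;
    }
}

int main() {
    // With an n_embd-based check, the 1.2B and 2.6B variants collapse into
    // one label; keyed on n_ff they are told apart.
    std::printf("2.6B resolved correctly: %s\n",
                lfm2_size_from_n_ff(12288) == lfm2_size::S_2_6B ? "yes" : "no");
    return 0;
}
```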

Output of `llama-bench`:
```
| model                          |       size |     params | backend    | threads |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: |
| lfm2 1.2B F16                  |   2.18 GiB |     1.17 B | CPU        |      10 |           pp512 |        223.97 ± 5.32 |
| lfm2 2.6B F16                  |   4.79 GiB |     2.57 B | CPU        |      10 |           pp512 |         92.53 ± 4.14 |
| lfm2 350M F16                  | 676.25 MiB |   354.48 M | CPU        |      10 |           pp512 |       725.52 ± 11.70 |
| lfm2 700M F16                  |   1.38 GiB |   742.49 M | CPU        |      10 |           pp512 |       336.22 ± 12.93 |
```

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-09-24 13:42:26 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| CMakeLists.txt | kv-cache : drop the "unified" prefix (#15467) | 2025-08-21 17:00:33 +03:00 |
| llama-adapter.cpp | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| llama-adapter.h | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| llama-arch.cpp | Add LLaDA-7b-MoE diffusion model (#16003) | 2025-09-16 10:38:28 +08:00 |
| llama-arch.h | Add LLaDA-7b-MoE diffusion model (#16003) | 2025-09-16 10:38:28 +08:00 |
| llama-batch.cpp | perplexity : provide a helpful hint for has_cpl case in split_equal error. (#15304) | 2025-08-14 14:03:30 +03:00 |
| llama-batch.h | llama : reuse compute graphs (#14482) | 2025-07-17 19:08:33 +03:00 |
| llama-chat.cpp | model : add grok-2 support (#15539) | 2025-09-14 23:00:59 +02:00 |
| llama-chat.h | model : add grok-2 support (#15539) | 2025-09-14 23:00:59 +02:00 |
| llama-context.cpp | model : add grok-2 support (#15539) | 2025-09-14 23:00:59 +02:00 |
| llama-context.h | llama : separate compute buffer reserve from fattn check (#15696) | 2025-08-31 15:49:03 +02:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | llama : bump max seq limit from 64 to 256 (#15916) | 2025-09-18 12:47:56 +03:00 |
| llama-grammar.cpp | server: streaming of tool calls and thoughts when --jinja is on (#12379) | 2025-05-25 01:48:08 +01:00 |
| llama-grammar.h | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| llama-graph.cpp | model : add grok-2 support (#15539) | 2025-09-14 23:00:59 +02:00 |
| llama-graph.h | llama : add support for EmbeddingGemma 300m (#15798) | 2025-09-04 18:10:29 +02:00 |
| llama-hparams.cpp | kv-cache : fix SWA checks + disable cacheless iSWA (#15811) | 2025-09-05 10:39:22 +03:00 |
| llama-hparams.h | convert : add Llama4ForCausalLM (#16042) | 2025-09-17 19:18:21 +02:00 |
| llama-impl.cpp | | |
| llama-impl.h | llama: use FA + max. GPU layers by default (#15434) | 2025-08-30 16:32:10 +02:00 |
| llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-kv-cache-iswa.cpp | kv-cache : fix SWA checks + disable cacheless iSWA (#15811) | 2025-09-05 10:39:22 +03:00 |
| llama-kv-cache-iswa.h | kv-cache : support layer reuse (#15504) | 2025-08-24 13:07:07 +03:00 |
| llama-kv-cache.cpp | model : avoid ggml_cont_3d for fused QKV weights (#15662) | 2025-09-08 10:25:33 +03:00 |
| llama-kv-cache.h | model : avoid ggml_cont_3d for fused QKV weights (#15662) | 2025-09-08 10:25:33 +03:00 |
| llama-kv-cells.h | llama : remove KV cache defragmentation logic (#15473) | 2025-08-22 12:22:13 +03:00 |
| llama-memory-hybrid.cpp | kv-cache : fix SWA checks + disable cacheless iSWA (#15811) | 2025-09-05 10:39:22 +03:00 |
| llama-memory-hybrid.h | kv-cache : fix SWA checks + disable cacheless iSWA (#15811) | 2025-09-05 10:39:22 +03:00 |
| llama-memory-recurrent.cpp | kv-cache : support layer reuse (#15504) | 2025-08-24 13:07:07 +03:00 |
| llama-memory-recurrent.h | kv-cache : support layer reuse (#15504) | 2025-08-24 13:07:07 +03:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | kv-cache : support layer reuse (#15504) | 2025-08-24 13:07:07 +03:00 |
| llama-mmap.cpp | llama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama.cpp sources (#14013) | 2025-06-05 11:57:42 +02:00 |
| llama-mmap.h | llama-mmap: fix missing include (#11796) | 2025-02-10 20:58:18 +02:00 |
| llama-model-loader.cpp | nvidia nemotron nano v2 (nemotronh) (#15507) | 2025-08-28 18:39:31 -06:00 |
| llama-model-loader.h | model: support GLM 4.5 family of models (#14939) | 2025-08-04 20:29:25 +02:00 |
| llama-model-saver.cpp | llama : improve sep token handling (#14272) | 2025-06-20 14:04:09 +02:00 |
| llama-model-saver.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| llama-model.cpp | model : add label for LiquidAI LFM2-2.6B model (#16204) | 2025-09-24 13:42:26 +02:00 |
| llama-model.h | model : add label for LiquidAI LFM2-2.6B model (#16204) | 2025-09-24 13:42:26 +02:00 |
| llama-quant.cpp | llama-quant : fix the verification of attention layers for encoder-decoder models (#16023) | 2025-09-17 09:30:55 +02:00 |
| llama-quant.h | | |
| llama-sampling.cpp | sampling : optimize dist sampler (#15704) | 2025-09-03 18:16:26 +03:00 |
| llama-sampling.h | | |
| llama-vocab.cpp | Add LLaDA-7b-MoE diffusion model (#16003) | 2025-09-16 10:38:28 +08:00 |
| llama-vocab.h | model : add grok-2 support (#15539) | 2025-09-14 23:00:59 +02:00 |
| llama.cpp | ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (#15797) | 2025-09-11 22:47:38 +02:00 |
| unicode-data.cpp | | |
| unicode-data.h | | |
| unicode.cpp | model : add Kimi-K2 support (#14654) | 2025-07-15 21:54:22 +02:00 |
| unicode.h | model : add Kimi-K2 support (#14654) | 2025-07-15 21:54:22 +02:00 |