takuya kodama 06332e2867
llama-batch: fix build fails with -Werror=missing-braces (#16614)
## Why it failed

When compiling with strict compiler flags (-Wmissing-braces -Werror=missing-braces),
the build fails with the following error:

```
cmake \
  -S . \
  -B ../llama.cpp.build \
  --preset=x64-linux-gcc-debug \
  -DCMAKE_INSTALL_PREFIX=/tmp/local \
  -DCMAKE_CXX_FLAGS="-Wmissing-braces -Werror=missing-braces" && \
cmake --build ../llama.cpp.build/
...
In file included from /home/otegami/work/cpp/llama.cpp/src/llama-graph.h:4,
                 from /home/otegami/work/cpp/llama.cpp/src/llama-model.h:5,
                 from /home/otegami/work/cpp/llama.cpp/src/llama.cpp:8:
/home/otegami/work/cpp/llama.cpp/src/llama-batch.h:126:48: error: missing braces around initializer for 'std::__array_traits<int, 1>::_Type' {aka 'int [1]'} [-Werror=missing-braces]
  126 |     std::array<llama_seq_id, 1> seq_id_0 = { 0 }; // default sequence id
      |                                                ^
cc1plus: some warnings being treated as errors
```

The root cause is that std::array is an aggregate wrapping a C-style array: `{ 0 }` initializes it only through brace elision, which GCC's -Wmissing-braces diagnostic flags, so the fully braced form `{{ 0 }}` is needed to build cleanly.
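
To see why the warning fires, it helps to model std::array as the aggregate it is. The sketch below is a simplified stand-in, not the actual libstdc++ definition:

```
#include <cstddef>

// Simplified model of std::array: an aggregate whose only member
// is a C-style array (the real standard-library type is more involved).
template <typename T, std::size_t N>
struct array_model {
    T elems[N];
};

array_model<int, 1> a = { 0 };   // legal via brace elision, but -Wmissing-braces warns
array_model<int, 1> b = {{ 0 }}; // fully braced: outer for the struct, inner for elems
```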

## How to fix

This PR changes `{ 0 }` to `{{ 0 }}` for std::array initialization.
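
Applied to the field from the error above, the change looks like this (a minimal sketch; llama_seq_id stands in for the int32_t typedef from llama.h, and the surrounding struct in src/llama-batch.h is omitted):

```
#include <array>
#include <cstdint>

using llama_seq_id = int32_t; // typedef from llama.h

// Before: relies on brace elision, rejected under -Werror=missing-braces
// std::array<llama_seq_id, 1> seq_id_0 = { 0 };  // default sequence id

// After: the outer braces initialize std::array itself, the inner braces
// its internal C-style array
std::array<llama_seq_id, 1> seq_id_0 = {{ 0 }};   // default sequence id
```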

This is part of a series of commits to fix -Wmissing-braces warnings across the codebase.
- src/llama-batch.h <- This PR is here.
- src/llama-context.cpp
- tests/test-backend-ops.cpp
- tests/test-gguf.cpp
- tools/mtmd/clip.cpp

Benefits:
- Fully braced initialization reflects std::array's actual structure (a struct containing a C-style array)
- Allows the codebase to build with stricter compiler warnings enabled, catching potential initialization issues earlier
2025-10-20 11:27:09 +03:00
CMakeLists.txt kv-cache : drop the "unified" prefix (#15467) 2025-08-21 17:00:33 +03:00
llama-adapter.cpp aLoRA Support (#15327) 2025-09-05 17:32:39 -06:00
llama-adapter.h aLoRA Support (#15327) 2025-09-05 17:32:39 -06:00
llama-arch.cpp llama-quant: add support for mmproj (#16592) 2025-10-15 14:48:08 +02:00
llama-arch.h llama-quant: add support for mmproj (#16592) 2025-10-15 14:48:08 +02:00
llama-batch.cpp perplexity : provide a helpful hint for has_cpl case in split_equal error. (#15304) 2025-08-14 14:03:30 +03:00
llama-batch.h llama-batch: fix build fails with -Werror=missing-braces (#16614) 2025-10-20 11:27:09 +03:00
llama-chat.cpp chat : Granite Docling stopping (#16438) 2025-10-06 18:59:40 +02:00
llama-chat.h model : add grok-2 support (#15539) 2025-09-14 23:00:59 +02:00
llama-context.cpp llama-context: only warn on pooling_type when user specified (#16674) 2025-10-20 10:44:21 +03:00
llama-context.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-cparams.cpp
llama-cparams.h llama : bump max seq limit from 64 to 256 (#15916) 2025-09-18 12:47:56 +03:00
llama-grammar.cpp
llama-grammar.h
llama-graph.cpp metal : FA support F32 K and V and head size = 32 (#16531) 2025-10-13 23:07:57 +03:00
llama-graph.h graph : support cacheless embeddings with FA and iSWA (#16528) 2025-10-13 22:42:37 +03:00
llama-hparams.cpp hparams : add check for layer index in is_recurrent (#16511) 2025-10-12 07:19:06 +02:00
llama-hparams.h model: EmbeddingGemma Adding Support for SentenceTransformers Dense Modules (#16367) 2025-10-09 09:39:18 +03:00
llama-impl.cpp
llama-impl.h llama: use FA + max. GPU layers by default (#15434) 2025-08-30 16:32:10 +02:00
llama-io.cpp
llama-io.h
llama-kv-cache-iswa.cpp server : context checkpointing for hybrid and recurrent models (#16382) 2025-10-03 21:34:51 +03:00
llama-kv-cache-iswa.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-kv-cache.cpp server : host-memory prompt caching (#16391) 2025-10-09 18:54:51 +03:00
llama-kv-cache.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-kv-cells.h llama : remove KV cache defragmentation logic (#15473) 2025-08-22 12:22:13 +03:00
llama-memory-hybrid.cpp memory : use sequential equal splits for recurrent modules (#16442) 2025-10-07 08:24:17 +03:00
llama-memory-hybrid.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-memory-recurrent.cpp server : improve context checkpoint logic (#16440) 2025-10-08 10:57:29 +03:00
llama-memory-recurrent.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-memory.cpp memory : correctly handle failure in apply() (#14438) 2025-06-30 18:03:03 +03:00
llama-memory.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-mmap.cpp
llama-mmap.h
llama-model-loader.cpp model : Apertus model implementation (#15852) 2025-10-02 20:43:22 +03:00
llama-model-loader.h model: support GLM 4.5 family of models (#14939) 2025-08-04 20:29:25 +02:00
llama-model-saver.cpp llama : improve sep token handling (#14272) 2025-06-20 14:04:09 +02:00
llama-model-saver.h
llama-model.cpp model : add Granite Hybrid types (#16635) 2025-10-19 23:54:31 +02:00
llama-model.h model : add Granite Hybrid types (#16635) 2025-10-19 23:54:31 +02:00
llama-quant.cpp llama-quant: add support for mmproj (#16592) 2025-10-15 14:48:08 +02:00
llama-quant.h
llama-sampling.cpp vocab : mark EOT token for Granite models (#16499) 2025-10-10 17:17:31 +03:00
llama-sampling.h
llama-vocab.cpp vocab : mark EOT token for Granite models (#16499) 2025-10-10 17:17:31 +03:00
llama-vocab.h model : Granite docling + Idefics3 preprocessing (SmolVLM) (#16206) 2025-10-05 14:57:47 +02:00
llama.cpp llama-quant: add support for mmproj (#16592) 2025-10-15 14:48:08 +02:00
unicode-data.cpp
unicode-data.h
unicode.cpp model : add Kimi-K2 support (#14654) 2025-07-15 21:54:22 +02:00
unicode.h devops: add s390x & ppc64le CI (#15925) 2025-09-27 02:03:33 +08:00