llama_cpp_for_radxa_dragon_.../tools
Tarek Dakhran ccbc84a537
mtmd: mtmd_audio_streaming_istft (#18645)
This change is decoupled from https://github.com/ggml-org/llama.cpp/pull/18641.

[LFM2.5-Audio-1.5B](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B)
needs a streaming ISTFT to generate output audio.

* add a streaming ISTFT class (`mtmd_audio_streaming_istft`) with overlap-add for audio reconstruction
* replace the global audio cache with per-instance caches; the model requires
  two independent caches, one for preprocessing (audio input) and one for ISTFT
  (audio output)
* unify the FFT/IFFT code into a single templated implementation supporting both forward and inverse transforms
2026-01-06 21:00:29 +01:00
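The overlap-add reconstruction mentioned above can be sketched as follows. This is a minimal illustration, not the actual `mtmd_audio_streaming_istft` implementation: the class name `streaming_overlap_add` and its `push` interface are hypothetical, each input frame is assumed to already be the windowed IFFT of one spectral frame, and the synthesis-window normalization that a real ISTFT performs is omitted for brevity.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of streaming overlap-add, the core of a streaming
// ISTFT. Each call to push() takes one windowed time-domain frame and
// returns `hop` finished samples; the overlapping tail is kept as
// per-instance state, so each stream needs its own independent cache.
class streaming_overlap_add {
public:
    streaming_overlap_add(size_t frame_size, size_t hop)
        : frame_size(frame_size), hop(hop), acc(frame_size, 0.0f) {}

    // Overlap-add one frame, then emit the `hop` oldest samples,
    // which no future frame can overlap anymore.
    std::vector<float> push(const std::vector<float> & frame) {
        assert(frame.size() == frame_size);
        for (size_t i = 0; i < frame_size; ++i) {
            acc[i] += frame[i];
        }
        std::vector<float> out(acc.begin(), acc.begin() + hop);
        // slide the accumulator left by `hop`, zero-fill the tail
        acc.erase(acc.begin(), acc.begin() + hop);
        acc.resize(frame_size, 0.0f);
        return out;
    }

private:
    size_t frame_size;
    size_t hop;
    std::vector<float> acc; // overlapping tail of past frames
};
```

With a 50% hop, the two-frame overlap means steady-state output is the sum of two window contributions, which is why keeping this accumulator as per-instance rather than global state matters: two concurrent streams would otherwise corrupt each other's tails.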
batched-bench
cli
completion          common: fix return value check for setpriority (#18412)                  2025-12-29 11:07:49 +02:00
cvector-generator
export-lora
fit-params          llama_fit_params: return enum for fail vs. error (#18374)                2025-12-27 09:59:19 +01:00
gguf-split
imatrix
llama-bench         common: fix return value check for setpriority (#18412)                  2025-12-29 11:07:49 +02:00
mtmd                mtmd: mtmd_audio_streaming_istft (#18645)                                2026-01-06 21:00:29 +01:00
perplexity
quantize            quantize: prevent input/output file collision (#18451)                   2025-12-31 23:29:03 +08:00
rpc
run
server              server : add thinking content blocks to Anthropic Messages API (#18551)  2026-01-06 16:17:13 +01:00
tokenize
tts
CMakeLists.txt