llama_cpp_for_radxa_dragon_.../tools

Latest commit: cd5e3b5754 by Georgi Gerganov
server : support unified cache across slots (#16736)
* server : support unified context across slots

* cont : fix speculative decoding initialization

* context : fix n_ctx_per_seq computation

* server : purge slots one by one

* tests : add unified cache server tests

* llama : update per-seq context computation

* test-thread-safety : handle tiny training context of the input model

* server : fix server_tokens clear()

* server : use 4 slots + unified KV by default

* llama : add note about context size queries

* cont : update todos [no ci]

* context : do not cap the size of the context

* tests : adjust parameters to be CI friendlier

* context : add warning
Committed: 2025-11-02 18:14:04 +02:00
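
The headline change above replaces the per-slot split of the KV cache with a cache shared by all server slots. Below is a minimal sketch of what that means for per-sequence context capacity, using a hypothetical ctx_per_seq() helper (not part of the llama.cpp API) and example values of 8192 context tokens with the new default of 4 slots: with a split cache each slot is capped at n_ctx / n_seq_max tokens, while with a unified cache a single busy slot can grow up to the full n_ctx as long as the combined usage of all slots fits.

```cpp
// Hypothetical sketch, not the llama.cpp API: contrast the per-slot context
// budget of a split KV cache (each slot owns a fixed slice) with a unified
// KV cache (all slots draw from one shared pool).
#include <cstdio>

static unsigned ctx_per_seq(unsigned n_ctx, unsigned n_seq_max, bool kv_unified) {
    // split  : every sequence is capped at n_ctx / n_seq_max tokens
    // unified: one sequence may grow up to n_ctx tokens, provided the
    //          combined usage of all slots stays within n_ctx
    return kv_unified ? n_ctx : n_ctx / n_seq_max;
}

int main() {
    const unsigned n_ctx   = 8192; // total KV cache size (example value)
    const unsigned n_slots = 4;    // the new server default of 4 slots

    std::printf("split  : %u tokens per slot\n", ctx_per_seq(n_ctx, n_slots, false));               // 2048
    std::printf("unified: up to %u tokens per slot (shared)\n", ctx_per_seq(n_ctx, n_slots, true)); // 8192
    return 0;
}
```

The actual limits in the server depend on the context size and slot count it is launched with; the sketch only illustrates the accounting difference between the two layouts.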
Name                Last commit                                                             Last updated
batched-bench       scripts : add script to bench models (#16894)                           2025-11-02 00:15:31 +02:00
cvector-generator
export-lora
gguf-split
imatrix             Manually link -lbsd to resolve flock symbol on AIX (#16610)             2025-10-23 19:37:31 +08:00
llama-bench         llama-bench : clarify benchmarked parts of the computation (#16823)     2025-10-28 19:41:43 +02:00
main
mtmd                mtmd: refactor preprocessing + support max/min pixels (#16878)          2025-11-01 15:51:36 +01:00
perplexity
quantize
rpc
run                 Manually link -lbsd to resolve flock symbol on AIX (#16610)             2025-10-23 19:37:31 +08:00
server              server : support unified cache across slots (#16736)                    2025-11-02 18:14:04 +02:00
tokenize
tts
CMakeLists.txt