llama_cpp_for_radxa_dragon_.../examples
Benson Wong · 5d01670266
server : include speculative decoding stats when timings_per_token is enabled (#12603)
* Include speculative decoding stats when timings_per_token is true

New fields added to the `timings` object:

  - draft_n           : number of draft tokens generated
  - draft_accepted_n  : number of draft tokens accepted
  - draft_accept_ratio: ratio of accepted/generated

* Remove redundant draft_accept_ratio var

* add draft acceptance rate to server console output
2025-03-28 10:05:44 +02:00
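
The commit above adds `draft_n`, `draft_accepted_n`, and `draft_accept_ratio` to the server's `timings` object when `timings_per_token` is enabled. A minimal sketch of how a client might read those fields follows; the JSON values and the `speculative_stats` helper are illustrative, not part of the server API, and the ratio is recomputed client-side as a consistency check:

```python
import json

# Illustrative response fragment: the field names come from the commit
# message above, but the numeric values here are made up for this sketch.
sample = json.loads("""
{
  "timings": {
    "draft_n": 80,
    "draft_accepted_n": 52,
    "draft_accept_ratio": 0.65
  }
}
""")

def speculative_stats(timings: dict) -> tuple[int, int, float]:
    """Return (generated, accepted, ratio) for the draft model.

    The ratio is recomputed from the counts rather than read from the
    response, guarding against division by zero when no drafts ran.
    """
    n = timings.get("draft_n", 0)
    accepted = timings.get("draft_accepted_n", 0)
    ratio = accepted / n if n > 0 else 0.0
    return n, accepted, ratio

n, accepted, ratio = speculative_stats(sample["timings"])
print(f"draft tokens: {n}, accepted: {accepted}, accept ratio: {ratio:.2f}")
```

An acceptance ratio near 1.0 means the draft model is predicting the target model's tokens well; a low ratio suggests speculative decoding is wasting draft work for this prompt.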
batched
batched-bench
batched.swift
convert-llama2c-to-ggml
cvector-generator
deprecation-warning
embedding
eval-callback
export-lora
gbnf-validator
gen-docs
gguf
gguf-hash
gguf-split
gritlm
imatrix
infill
jeopardy
llama-bench
llama.android
llama.swiftui
llava · clip: Fix llama-llava-clip-quantize-cli quantization error under CUDA backend (#12566) · 2025-03-26 15:06:04 +01:00
lookahead
lookup
main · docs : bring llama-cli conversation/template docs up-to-date (#12426) · 2025-03-17 21:14:32 +01:00
parallel
passkey
perplexity
quantize
quantize-stats
retrieval
rpc · rpc : update README for cache usage (#12620) · 2025-03-28 09:44:13 +02:00
run · run: de-duplicate fmt and format functions and optimize (#11596) · 2025-03-25 18:46:11 +01:00
save-load-state
server · server : include speculative decoding stats when timings_per_token is enabled (#12603) · 2025-03-28 10:05:44 +02:00
simple
simple-chat
simple-cmake-pkg
speculative · speculative : fix seg fault in certain cases (#12454) · 2025-03-18 19:35:11 +02:00
speculative-simple
sycl
tokenize
tts · llama-tts : avoid crashes related to bad model file paths (#12482) · 2025-03-21 11:12:45 +02:00
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
llama.vim
llm.vim
Miku.sh
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh