llama_cpp_for_radxa_dragon_.../examples
Latest commit: 77f9c6bbe5 by Marius Gerdes, 2025-03-23 19:30:26 +01:00
server : Add verbose output to OAI compatible chat endpoint. (#12246)
Add verbose output to server_task_result_cmpl_final::to_json_oaicompat_chat_stream, making it conform with server_task_result_cmpl_final::to_json_oaicompat_chat and the other to_json methods.
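The following is a minimal sketch of the pattern that commit describes: the streaming OAI-compatible serializer attaching a verbose payload the same way the non-streaming one does. It is not the actual server code; the field name ("__verbose"), the helper to_json_non_oaicompat(), and the struct layout are assumptions for illustration, based only on the method names mentioned in the commit message. llama.cpp's server uses nlohmann::json for its responses.

// Hedged sketch, not the upstream implementation.
// Assumed names: verbose flag, to_json_non_oaicompat(), "__verbose" field.
#include <nlohmann/json.hpp>
using json = nlohmann::ordered_json;

struct server_task_result_cmpl_final {
    bool verbose = false;

    // Full internal result, as produced by the non-OAI-compatible serializer
    // (assumed helper for this sketch).
    json to_json_non_oaicompat() const {
        return json{{"tokens_predicted", 0 /* ... */}};
    }

    // Streaming OAI-compatible chat serializer: per the commit, it now also
    // attaches the verbose payload when verbose mode is enabled, matching
    // to_json_oaicompat_chat and the other to_json methods.
    json to_json_oaicompat_chat_stream() const {
        json chunk = json{
            {"object", "chat.completion.chunk"},
            // ... choices, model, timings, etc.
        };
        if (verbose) {
            chunk["__verbose"] = to_json_non_oaicompat();
        }
        return chunk;
    }
};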
batched
batched-bench
batched.swift
convert-llama2c-to-ggml
cvector-generator
deprecation-warning
embedding
eval-callback
export-lora
gbnf-validator
gen-docs
gguf
gguf-hash
gguf-split
gritlm
imatrix
infill
jeopardy
llama-bench
llama.android
llama.swiftui
llava
lookahead
lookup
main
parallel
passkey
perplexity
quantize
quantize-stats
retrieval
rpc
run
save-load-state
server
simple
simple-chat
simple-cmake-pkg
speculative
speculative-simple
sycl
tokenize
tts
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
llama.vim
llm.vim
Miku.sh
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh