llama_cpp_for_radxa_dragon_.../examples

Latest commit: efc72253f7 by Jorge A (2024-02-28 10:39:15 +02:00)
server : add "/chat/completions" alias for "/v1/..." (#5722)

* Add "/chat/completions" as alias for "/v1/chat/completions"
* merge to upstream master
* minor : fix trailing whitespace

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
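The aliasing described in this commit can be sketched in a few lines. The real server is C++; this Python dict-based router, including the handler name, is a hypothetical illustration of registering two paths to one handler, not the actual implementation:

```python
# Minimal sketch of endpoint aliasing: register the same handler
# under both the versioned and unversioned path.
def handle_chat_completions(request):
    # Placeholder handler; the real server builds an OpenAI-style response.
    return {"object": "chat.completion", "echo": request}

routes = {}
for path in ("/v1/chat/completions", "/chat/completions"):
    routes[path] = handle_chat_completions

# Both paths now dispatch to the exact same handler object.
assert routes["/chat/completions"] is routes["/v1/chat/completions"]
print(routes["/chat/completions"]({"messages": []})["object"])  # chat.completion
```

The design point is that an alias adds no second code path: one handler, two route-table entries.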
Directories (name, last commit, date):

baby-llama                 code : normalize enum names (#5697)                                      2024-02-25 12:09:09 +02:00
batched
batched-bench
batched.swift
beam-search
benchmark
convert-llama2c-to-ggml
embedding
export-lora
finetune                   code : normalize enum names (#5697)                                      2024-02-25 12:09:09 +02:00
gguf
imatrix
infill                     llama : refactor k-shift implementation + KV defragmentation (#5691)     2024-02-25 22:12:24 +02:00
jeopardy
llama-bench                code : normalize enum names (#5697)                                      2024-02-25 12:09:09 +02:00
llama.android              ggml-quants : provide ggml_vqtbl1q_u8 for 64bit compatibility (#5711)    2024-02-25 20:43:00 +02:00
llama.swiftui
llava                      code : normalize enum names (#5697)                                      2024-02-25 12:09:09 +02:00
lookahead
lookup
main                       llama : refactor k-shift implementation + KV defragmentation (#5691)     2024-02-25 22:12:24 +02:00
main-cmake-pkg
parallel
passkey                    llama : fix defrag bugs + add parameter (#5735)                          2024-02-27 14:35:51 +02:00
perplexity                 ci : fix wikitext url + compile warnings (#5569)                         2024-02-18 22:39:30 +02:00
quantize                   IQ4_XS: a 4.25 bpw quantization (#5747)                                  2024-02-27 16:34:24 +02:00
quantize-stats
save-load-state
server                     server : add "/chat/completions" alias for "/v1/..." (#5722)             2024-02-28 10:39:15 +02:00
simple
speculative
sycl
tokenize
train-text-from-scratch    code : normalize enum names (#5697)                                      2024-02-25 12:09:09 +02:00
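The "4.25 bpw" figure in the quantize entry above can be sanity-checked with simple arithmetic, assuming 256-weight super-blocks as used by llama.cpp's quant formats (the exact split into raw values and scale metadata is an assumption here, not stated in this listing):

```python
# Rough bits-per-weight arithmetic for a 4.25 bpw quantization format,
# assuming 256-weight super-blocks.
weights_per_block = 256
bpw = 4.25

bits_per_block = bpw * weights_per_block   # 1088 bits per block
bytes_per_block = bits_per_block / 8       # 136 bytes per block

# Assumed split: 4 bits/weight of raw quantized values (128 bytes),
# leaving 8 bytes per block for scales and other metadata.
raw_bytes = 4 * weights_per_block / 8
overhead_bytes = bytes_per_block - raw_bytes

print(bytes_per_block, raw_bytes, overhead_bytes)  # 136.0 128.0 8.0
```

So the ".25" above the nominal 4 bits comes out to 8 bytes of per-block overhead under these assumptions.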
Files (name, last commit, date):

alpaca.sh
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
gpt4all.sh
json-schema-to-grammar.py    examples : support minItems/maxItems in JSON grammar converter (#5039)    2024-02-19 16:14:07 +02:00
llama.vim
llama2-13b.sh
llama2.sh
llm.vim
make-ggml.py
Miku.sh
pydantic-models-to-grammar-examples.py
pydantic_models_to_grammar.py
reason-act.sh
server-llama2-13B.sh
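The json-schema-to-grammar.py entry mentions minItems/maxItems support. As a minimal illustration of what those JSON Schema keywords constrain (the schema and checker below are made-up examples, not code from the converter):

```python
# minItems/maxItems bound the allowed length of a JSON array.
schema = {
    "type": "array",
    "items": {"type": "integer"},
    "minItems": 2,
    "maxItems": 4,
}

def length_ok(value, schema):
    """Check only the minItems/maxItems constraints of an array schema."""
    n = len(value)
    return schema.get("minItems", 0) <= n <= schema.get("maxItems", float("inf"))

print(length_ok([1, 2], schema))            # True  (within 2..4)
print(length_ok([1], schema))               # False (too short)
print(length_ok([1, 2, 3, 4, 5], schema))   # False (too long)
```

A grammar converter would encode the same bounds structurally, by requiring at least two and at most four comma-separated items in the generated array rule.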