llama_cpp_for_radxa_dragon_.../examples
Latest commit: 646ef4a9cf — Yann Follet — 2024-06-24 08:30:24 +03:00

embedding : more cli arguments (#7458)

* add parameters for embeddings:
  --embd-normalize
  --embd-output-format
  --embd-separator
  with descriptions in the README.md
* Update README.md: fix typo
* remove trailing whitespace
* fix JSON generation: use " not '
* fix merge with master
* fix code formatting: group the embedding parameters and print usage for them

Co-authored-by: Brian <mofosyne@gmail.com>
baby-llama
batched
batched-bench
batched.swift
benchmark
convert-llama2c-to-ggml
cvector-generator — cvector: fix CI + correct help message (#8064) — 2024-06-22 18:11:30 +02:00
embedding — embedding : more cli arguments (#7458) — 2024-06-24 08:30:24 +03:00
eval-callback
export-lora
finetune
gbnf-validator
gguf
gguf-split
gritlm — llama : allow pooled embeddings on any model (#7477) — 2024-06-21 08:38:22 +03:00
imatrix
infill
jeopardy
llama-bench
llama.android
llama.swiftui — swiftui : enable stream updating (#7754) — 2024-06-21 08:30:58 +03:00
llava
lookahead
lookup
main
main-cmake-pkg
parallel
passkey
perplexity
quantize — Update llama-quantize ppl/file size output from LLaMA-v1 to Llama-3 values (#8058) — 2024-06-22 15:16:10 +02:00
quantize-stats
retrieval — llama : allow pooled embeddings on any model (#7477) — 2024-06-21 08:38:22 +03:00
rpc
save-load-state
server — server : fix JSON-Scheme typo (#7975) — 2024-06-23 11:03:08 -04:00
simple
speculative
sycl — [SYCL] Fix windows build and inference (#8003) — 2024-06-20 21:19:05 +08:00
tokenize
train-text-from-scratch
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
convert-legacy-llama.py
json-schema-pydantic-example.py
json_schema_to_grammar.py
llama.vim
llm.vim
Miku.sh
pydantic-models-to-grammar-examples.py
pydantic_models_to_grammar.py
reason-act.sh
regex-to-grammar.py
server-embd.py
server-llama2-13B.sh
ts-type-to-grammar.sh