llama_cpp_for_radxa_dragon_.../examples
Ed Addario 71e90e8813
quantize: Handle user-defined quantization levels for additional tensors (#12511)
* Add llama_model_quantize_params parameters

* Add new quantize parameters parsing and validation

* Update usage

* Add new parameters defaults

* Add new quantization parameters logic

* Minor refactoring as per the contributors' coding guidelines

* Update descriptions to match existing style

* Implement general --tensor-type instead of tensor-specific command option

* Fix implied type bug

* Restore missing #includes

* Add regex capability for tensor selection

* Refactor function name and update ALLOWED_TENSOR_TYPE

* Add missing #include

* Handle edge case when tensor name is cls.output

* Minor logging improvement
2025-04-13 21:29:28 +03:00
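
The commit above adds regex-based tensor selection to the quantize tool via a general --tensor-type option. As a rough illustration of how a name pattern picks out tensors, here is a minimal shell sketch; the tensor names and the PATTERN=TYPE reading of the option are illustrative assumptions, not taken from this listing:

```shell
# Sketch: how a regex pattern (e.g. the "ffn_down" part of a hypothetical
# "--tensor-type ffn_down=q8_0") would select tensors by name.
# Sample tensor names in the style llama.cpp models use (illustrative only):
tensors='blk.0.attn_q.weight
blk.0.ffn_down.weight
blk.1.ffn_down.weight
output.weight'

# Tensors a pattern like "ffn_down" matches; only these would get the override:
matched=$(printf '%s\n' "$tensors" | grep -E 'ffn_down')
printf '%s\n' "$matched"
```

Tensors not matched by any pattern would keep the type implied by the overall quantization level.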
batched | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00
batched-bench | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00
batched.swift
convert-llama2c-to-ggml
cvector-generator
deprecation-warning
embedding
eval-callback
export-lora | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00
gbnf-validator
gen-docs
gguf
gguf-hash
gguf-split | gguf-split : --merge now respects --dry-run option (#12681) | 2025-04-04 16:09:12 +02:00
gritlm | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00
imatrix
infill
jeopardy
llama-bench
llama.android | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00
llama.swiftui
llava | llava: Fix cpu-only clip image encoding segfault (#12907) | 2025-04-12 07:29:03 +02:00
lookahead
lookup
main | docs : bring llama-cli conversation/template docs up-to-date (#12426) | 2025-03-17 21:14:32 +01:00
parallel | llama : refactor kv cache guard (#12695) | 2025-04-02 14:32:59 +03:00
passkey | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00
perplexity | hellaswag: display estimated score confidence interval (#12797) | 2025-04-07 18:47:08 +03:00
quantize | quantize: Handle user-defined quantization levels for additional tensors (#12511) | 2025-04-13 21:29:28 +03:00
quantize-stats
retrieval
rpc | common : Define cache directory on AIX (#12915) | 2025-04-12 17:33:39 +02:00
run | contrib: support modelscope community (#12664) | 2025-04-11 14:01:56 +02:00
save-load-state
server | server : add VSCode's Github Copilot Chat support (#12896) | 2025-04-11 23:37:41 +03:00
simple
simple-chat
simple-cmake-pkg
speculative | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00
speculative-simple | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00
sycl | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00
tokenize
tts | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
llama.vim
llm.vim
Miku.sh
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py | llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) | 2025-04-08 19:54:51 +03:00
ts-type-to-grammar.sh