llama_cpp_for_radxa_dragon_.../examples
| Name | Last commit | Last updated |
|---|---|---|
| batched | sampling : refactor + optimize penalties sampler (#10803) | 2024-12-16 12:31:14 +02:00 |
| batched-bench | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| batched.swift | | |
| convert-llama2c-to-ggml | make : deprecate (#10514) | 2024-12-02 21:22:53 +02:00 |
| cvector-generator | examples, ggml : fix GCC compiler warnings (#10983) | 2024-12-26 14:59:11 +01:00 |
| deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00 |
| embedding | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| eval-callback | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| export-lora | examples, ggml : fix GCC compiler warnings (#10983) | 2024-12-26 14:59:11 +01:00 |
| gbnf-validator | llama : minor grammar refactor (#10897) | 2024-12-19 17:42:13 +02:00 |
| gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf-hash | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf-split | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| gritlm | server : output embeddings for all tokens when pooling = none (#10861) | 2024-12-18 13:01:41 +02:00 |
| imatrix | make : deprecate (#10514) | 2024-12-02 21:22:53 +02:00 |
| infill | readme : add option, update default value, fix formatting (#10271) | 2024-12-03 12:50:08 +02:00 |
| jeopardy | | |
| llama-bench | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| llama.android | ggml : fix arm build (#10890) | 2024-12-18 23:21:42 +01:00 |
| llama.swiftui | llama : use cmake for swift build (#10525) | 2024-12-08 13:14:54 +02:00 |
| llava | clip : disable GPU support (#10896) | 2024-12-19 18:47:15 +02:00 |
| lookahead | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| lookup | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| main | sampling : refactor + optimize penalties sampler (#10803) | 2024-12-16 12:31:14 +02:00 |
| main-cmake-pkg | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| parallel | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| passkey | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| perplexity | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| quantize | Update README.md (#10772) | 2024-12-11 16:16:32 +01:00 |
| quantize-stats | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| retrieval | server : output embeddings for all tokens when pooling = none (#10861) | 2024-12-18 13:01:41 +02:00 |
| rpc | rpc-server : add support for the SYCL backend (#10934) | 2024-12-23 10:39:30 +02:00 |
| run | llama-run : include temperature option (#10899) | 2024-12-23 01:21:40 +01:00 |
| save-load-state | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| server | server : added more docs for response_fields field (#10995) | 2024-12-28 16:09:19 +01:00 |
| simple | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| simple-chat | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| speculative | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| speculative-simple | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| sycl | | |
| tokenize | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| tts | tts : add OuteTTS support (#10784) | 2024-12-18 19:27:21 +02:00 |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | scripts : fix pattern and get n_tokens in one go (#10221) | 2024-11-09 09:06:54 +02:00 |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | tts : add OuteTTS support (#10784) | 2024-12-18 19:27:21 +02:00 |
| convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00 |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 2024-10-16 19:03:24 +03:00 |
| llama.vim | llama.vim : bump generation time limit to 3s [no ci] | 2024-10-23 17:16:56 +03:00 |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |