llama_cpp_for_radxa_dragon_.../examples
Name                       Last commit date            Last commit message
baby-llama                 2023-12-21 23:20:49 +02:00  ggml : change ggml_scale to take a float instead of tensor (#4573)
batched
batched-bench
batched.swift
beam-search
benchmark
convert-llama2c-to-ggml    2023-12-14 16:52:08 +01:00  ggml : remove n_dims from ggml_tensor (#4469)
embedding
export-lora                2023-12-21 23:20:49 +02:00  ggml : change ggml_scale to take a float instead of tensor (#4573)
finetune                   2024-01-04 21:45:37 +02:00  finetune : remove unused includes (#4756)
gguf                       2023-12-21 23:08:14 +02:00  gguf : simplify example dependencies
infill
jeopardy
llama-bench                2024-01-07 17:59:01 +01:00  llama-bench : add no-kv-offload parameter (#4812)
llama.swiftui              2024-01-07 10:20:50 +02:00  llama.swiftui : use llama.cpp as SPM package (#4804)
llava                      2023-12-30 23:24:42 +02:00  clip : refactor + bug fixes (#4696)
lookahead
lookup                     2023-12-22 18:05:56 +02:00  lookup : add prompt lookup decoding example (#4484)
main
main-cmake-pkg             2023-12-29 16:18:20 +02:00  main-cmake-pkg : fix build issue (#4665)
metal
parallel
perplexity
quantize
quantize-stats
save-load-state
server                     2024-01-07 08:45:26 +02:00  server : fix n_predict check (#4798)
simple
speculative
tokenize
train-text-from-scratch    2023-12-21 23:20:49 +02:00  ggml : change ggml_scale to take a float instead of tensor (#4573)
alpaca.sh
base-translate.sh          2024-01-06 11:40:24 +02:00  examples : improve base-translate.sh script (#4783)
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt             2023-12-22 18:05:56 +02:00  lookup : add prompt lookup decoding example (#4484)
gpt4all.sh
json-schema-to-grammar.py
llama.vim
llama2-13b.sh
llama2.sh
llm.vim
make-ggml.py
Miku.sh
reason-act.sh
server-llama2-13B.sh