| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batched | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| batched-bench | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| batched.swift | llama : llama_perf + option to disable timings during decode (#9355) | 2024-09-13 09:53:38 +03:00 |
| convert-llama2c-to-ggml | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| cvector-generator | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00 |
| embedding | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| eval-callback | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| export-lora | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gbnf-validator | llama : minor grammar refactor (#10897) | 2024-12-19 17:42:13 +02:00 |
| gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-hash | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-split | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gritlm | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| imatrix | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| infill | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| jeopardy | | |
| llama-bench | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| llama.android | android : fix llama_batch free (#11014) | 2024-12-30 14:35:13 +02:00 |
| llama.swiftui | llama : use cmake for swift build (#10525) | 2024-12-08 13:14:54 +02:00 |
| llava | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| lookahead | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| lookup | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| main | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00 |
| main-cmake-pkg | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| parallel | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| passkey | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| perplexity | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| quantize | Update README.md (#10772) | 2024-12-11 16:16:32 +01:00 |
| quantize-stats | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| retrieval | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| rpc | rpc-server : add support for the SYCL backend (#10934) | 2024-12-23 10:39:30 +02:00 |
| run | Enhance user input handling for llama-run (#11138) | 2025-01-08 18:47:05 +00:00 |
| save-load-state | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| server | arg : option to exclude arguments from specific examples (#11136) | 2025-01-08 12:55:36 +02:00 |
| simple | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| simple-chat | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| speculative | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| speculative-simple | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| sycl | [SYCL] set context default value to avoid memory issue, update guide (#9476) | 2024-09-18 08:30:31 +08:00 |
| tokenize | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| tts | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | scripts : fix pattern and get n_tokens in one go (#10221) | 2024-11-09 09:06:54 +02:00 |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | tts : add OuteTTS support (#10784) | 2024-12-18 19:27:21 +02:00 |
| convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00 |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 2024-10-16 19:03:24 +03:00 |
| llama.vim | llama.vim : bump generation time limit to 3s [no ci] | 2024-10-23 17:16:56 +03:00 |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |