Name                                    Last commit message                                                      Last commit date
baby-llama                              ggml : change ggml_scale to take a float instead of tensor (#4573)      2023-12-21 23:20:49 +02:00
batched                                 examples : add passkey test (#3856)                                      2024-01-08 11:14:04 +02:00
batched-bench                           llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)  2024-01-31 17:30:17 +02:00
batched.swift
beam-search
benchmark                               2-bit quantizations (#4897)                                              2024-01-14 09:45:56 +02:00
convert-llama2c-to-ggml                 ggml : remove n_dims from ggml_tensor (#4469)                            2023-12-14 16:52:08 +01:00
embedding                               llama : support batched embeddings (#5466)                               2024-02-13 14:06:58 +02:00
export-lora                             sync : ggml (#5452)                                                      2024-02-12 09:16:06 +02:00
finetune                                sync : ggml (#5452)                                                      2024-02-12 09:16:06 +02:00
gguf                                    gguf : simplify example dependencies                                     2023-12-21 23:08:14 +02:00
imatrix                                 Adding some imatrix tools (#5302)                                        2024-02-04 10:39:58 +02:00
infill                                  Remove unused data and add fixes (#5154)                                 2024-01-27 15:25:55 +01:00
jeopardy
llama-bench                             refactor : switch to emplace_back to avoid extra object (#5291)          2024-02-03 13:23:37 +02:00
llama.android                           android : use release cmake build type by default (#5123)                2024-01-25 19:05:51 +02:00
llama.swiftui                           llama.swiftui : update models layout (#4826)                             2024-01-12 14:48:00 +02:00
llava                                   llava : remove prog parameter from ArgumentParser (#5457)                2024-02-12 10:38:44 +02:00
lookahead                               english : use typos to fix comments and logs (#4354)                     2023-12-12 11:53:36 +02:00
lookup                                  lookup: add print for drafting performance (#5450)                       2024-02-11 12:44:51 +01:00
main                                    main : ctrl+C print timing in non-interactive mode (#3873)               2024-02-11 15:35:50 +02:00
main-cmake-pkg                          main-cmake-pkg : fix build issue (#4665)                                 2023-12-29 16:18:20 +02:00
parallel
passkey                                 examples : add passkey test (#3856)                                      2024-01-08 11:14:04 +02:00
perplexity                              refactor : switch to emplace_back to avoid extra object (#5291)          2024-02-03 13:23:37 +02:00
quantize                                refactor : switch to emplace_back to avoid extra object (#5291)          2024-02-03 13:23:37 +02:00
quantize-stats                          refactor : switch to emplace_back to avoid extra object (#5291)          2024-02-03 13:23:37 +02:00
save-load-state                         llama : minimize size used for state save/load (#4820)                   2024-01-13 18:29:43 +02:00
server                                  server : allow to specify tokens as strings in logit_bias (#5003)        2024-02-11 15:38:14 +02:00
simple
speculative                             speculative : threading options (#4959)                                  2024-01-16 13:04:32 +02:00
sycl                                    [SYCL] update guide of SYCL backend (#5254)                              2024-02-02 15:53:27 +08:00
tokenize
train-text-from-scratch                 sync : ggml (#5452)                                                      2024-02-12 09:16:06 +02:00
alpaca.sh
base-translate.sh                       examples : improve base-translate.sh script (#4783)                      2024-01-06 11:40:24 +02:00
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt                          ggml : add unified SYCL backend for Intel GPUs (#2690)                   2024-01-28 17:56:23 +02:00
gpt4all.sh
json-schema-to-grammar.py
llama.vim                               llama.vim : added api key support (#5090)                                2024-01-23 08:51:27 +02:00
llama2-13b.sh
llama2.sh
llm.vim
make-ggml.py
Miku.sh
pydantic-models-to-grammar-examples.py  examples : make pydantic scripts pass mypy and support py3.8 (#5099)     2024-01-25 14:51:24 -05:00
pydantic_models_to_grammar.py           examples : make pydantic scripts pass mypy and support py3.8 (#5099)     2024-01-25 14:51:24 -05:00
reason-act.sh
server-llama2-13B.sh