pingu_98 / llama_cpp_for_radxa_dragon_wing_q6a
Ref: 098dbaab44 / llama_cpp_for_radxa_dragon_... / examples
Latest commit: Georgi Gerganov, 58308a0ecc, server : fix metrics init (#5964), 2024-03-09 17:34:15 +02:00
baby-llama
batched
batched-bench
batched.swift
beam-search
benchmark
convert-llama2c-to-ggml
embedding
export-lora
finetune
gguf
imatrix
infill
jeopardy
llama-bench
llama.android
llama.swiftui
llava
lookahead
lookup
main
main-cmake-pkg
parallel
passkey
perplexity
quantize
quantize-stats
save-load-state
server
simple
speculative
sycl
tokenize
train-text-from-scratch
alpaca.sh
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
gpt4all.sh
json-schema-to-grammar.py
llama.vim
llama2-13b.sh
llama2.sh
llm.vim
make-ggml.py
Miku.sh
pydantic-models-to-grammar-examples.py
pydantic_models_to_grammar.py
reason-act.sh
server-embd.py
server-llama2-13B.sh