Name | Last commit | Date
---- | ----------- | ----
models | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00
CMakeLists.txt | |
llama-adapter.cpp | |
llama-adapter.h | |
llama-arch.cpp | add missing ROPE_FACTORS_LONG/SHORT for MiniCPM (#21150) | 2026-03-29 19:45:40 +02:00
llama-arch.h | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00
llama-batch.cpp | |
llama-batch.h | |
llama-chat.cpp | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00
llama-chat.h | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00
llama-context.cpp | ggml-backend: re-enable graph reuse with pipeline parallelism (#20927) | 2026-03-24 20:47:00 +08:00
llama-context.h | |
llama-cparams.cpp | |
llama-cparams.h | |
llama-ext.h | |
llama-grammar.cpp | |
llama-grammar.h | |
llama-graph.cpp | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00
llama-graph.h | |
llama-hparams.cpp | |
llama-hparams.h | |
llama-impl.cpp | |
llama-impl.h | |
llama-io.cpp | |
llama-io.h | |
llama-kv-cache-iswa.cpp | |
llama-kv-cache-iswa.h | |
llama-kv-cache.cpp | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00
llama-kv-cache.h | |
llama-kv-cells.h | |
llama-memory-hybrid-iswa.cpp | |
llama-memory-hybrid-iswa.h | |
llama-memory-hybrid.cpp | |
llama-memory-hybrid.h | |
llama-memory-recurrent.cpp | |
llama-memory-recurrent.h | |
llama-memory.cpp | |
llama-memory.h | |
llama-mmap.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00
llama-mmap.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00
llama-model-loader.cpp | llama-model-loader: print warning when using overrides with mmap (#20978) | 2026-03-30 17:40:17 +08:00
llama-model-loader.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00
llama-model-saver.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00
llama-model-saver.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00
llama-model.cpp | convert : support Qwen3.5/Qwen3.5 Moe NVFP4 and add input scales (#20505) | 2026-03-26 16:52:06 +01:00
llama-model.h | convert : support Qwen3.5/Qwen3.5 Moe NVFP4 and add input scales (#20505) | 2026-03-26 16:52:06 +01:00
llama-quant.cpp | mtmd: fix "v.patch_embd" quant and unsupported im2col ops on Metal for deepseek-ocr (#21027) | 2026-03-27 00:07:55 +01:00
llama-quant.h | |
llama-sampler.cpp | |
llama-sampler.h | |
llama-vocab.cpp | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00
llama-vocab.h | |
llama.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00
unicode-data.cpp | |
unicode-data.h | |
unicode.cpp | |
unicode.h | |