llama_cpp_for_radxa_dragon_.../tools
Daniel Bevenius 70cd37dbbe
requirements : update transformers/torch for Embedding Gemma (#15828)
* requirements : update transformers/torch for Embedding Gemma

This commit updates the requirements to support converting
Embedding Gemma 300m models.

The motivation for this change is that during development I had a local
copy of the transformers package, which is what I used for converting
the models. This was a mistake on my part: I should have also verified
the conversion against the official transformers release.

I had checked the requirements/requirements-convert_legacy_llama.txt
file, noted that the pinned version range was >=4.45.1,<5.0.0, and came
to the conclusion that no update would be needed. This assumed that
Embedding Gemma would be included in a transformers release by the time
commit fb15d649ed ("llama : add support
for EmbeddingGemma 300m (#15798)") was merged, so that anyone wanting to
convert models themselves would be able to do so. However, Embedding
Gemma is only available in a preview release, and this commit updates
the requirements to use that preview release.
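The pitfall described above can be checked programmatically: pip-style version specifiers such as >=4.45.1,<5.0.0 exclude pre-release (preview) versions by default, which is how a preview-only model can slip past a seemingly wide range. A minimal sketch using the `packaging` library (the specific version numbers below are illustrative, not the actual transformers pins):

```python
# Sketch: why a stable-range pin can miss a preview release.
# Uses the `packaging` library (third-party, but ships alongside pip/setuptools).
from packaging.specifiers import SpecifierSet

# A stable version range in the style of requirements-convert_legacy_llama.txt.
spec = SpecifierSet(">=4.45.1,<5.0.0")

print("4.46.0" in spec)       # stable release inside the range -> True
print("4.56.0.dev0" in spec)  # pre-release: excluded by default -> False

# Pre-releases only match when explicitly allowed.
print(spec.contains("4.56.0.dev0", prereleases=True))  # -> True
```

So a requirements file that must pull in a preview release has to name a pre-release version explicitly (or the user has to pass `pip install --pre`); a plain stable range will never resolve to it.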

* resolve additional python dependencies

* fix pyright errors in tokenizer test and remove unused import
2025-09-09 06:06:52 +02:00
Name                Last commit                                                                      Date
batched-bench       batched-bench : fix llama_synchronize usage during prompt processing (#15835)    2025-09-08 10:27:07 +03:00
cvector-generator
export-lora
gguf-split
imatrix
llama-bench         llama: use FA + max. GPU layers by default (#15434)                              2025-08-30 16:32:10 +02:00
main                cli : change log to warning to explain reason for stopping (#15604)              2025-08-28 10:48:20 +03:00
mtmd                requirements : update transformers/torch for Embedding Gemma (#15828)            2025-09-09 06:06:52 +02:00
perplexity
quantize
rpc
run
server              requirements : update transformers/torch for Embedding Gemma (#15828)            2025-09-09 06:06:52 +02:00
tokenize
tts                 sampling : optimize samplers by reusing bucket sort (#15665)                     2025-08-31 20:41:02 +03:00
CMakeLists.txt