
# Gemma 3 vision

> [!IMPORTANT]
>
> This is very experimental, only used for demo purposes.

## Quick start

You can use the pre-quantized models from [ggml-org](https://huggingface.co/ggml-org)'s Hugging Face account:

```bash
# build
cmake -B build
cmake --build build --target llama-gemma3-cli

# alternatively, install from brew (macOS)
brew install llama.cpp

# run it
llama-gemma3-cli -hf ggml-org/gemma-3-4b-it-GGUF
llama-gemma3-cli -hf ggml-org/gemma-3-12b-it-GGUF
llama-gemma3-cli -hf ggml-org/gemma-3-27b-it-GGUF

# note: the 1B model does not support vision
```
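With `-hf`, llama.cpp downloads the model and, for this example, the matching `mmproj` file in one step. Recent builds also accept a `:quant` suffix on the `-hf` argument to pin a specific quantization; the `Q4_K_M` tag below is an assumption, so check the repo's model page for the quants it actually publishes:

```bash
# assumed: the repo publishes a Q4_K_M quant; adjust the tag to one listed on the model page
llama-gemma3-cli -hf ggml-org/gemma-3-4b-it-GGUF:Q4_K_M
```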

## How to get mmproj.gguf?

```bash
cd gemma-3-4b-it
python ../llama.cpp/examples/llava/gemma3_convert_encoder_to_gguf.py .

# output file is mmproj.gguf
```
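The conversion above assumes the original Hugging Face checkpoint already sits in a local `gemma-3-4b-it` directory. A minimal sketch of fetching it with the `huggingface_hub` CLI, assuming you have accepted the license for the gated `google/gemma-3-4b-it` repo:

```bash
pip install huggingface_hub
# download the original safetensors checkpoint into ./gemma-3-4b-it
huggingface-cli download google/gemma-3-4b-it --local-dir gemma-3-4b-it
```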

## How to run it?

What you need:
- The text model GGUF; it can be converted using `convert_hf_to_gguf.py` (see the sketch after this list)
- The mmproj file from the step above
- An image file
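A minimal sketch of the text-model conversion with `convert_hf_to_gguf.py`, run from the llama.cpp repo root; the output filename and the `--outtype` choice here are illustrative:

```bash
# convert the safetensors checkpoint to a GGUF text model
python convert_hf_to_gguf.py ../gemma-3-4b-it --outfile gemma-3-4b-it-f16.gguf --outtype f16
```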
```bash
# build
cmake -B build
cmake --build build --target llama-gemma3-cli

# run it
./build/bin/llama-gemma3-cli -m {text_model}.gguf --mmproj mmproj.gguf --image your_image.jpg
```