mirror of
https://github.com/ggerganov/llama.cpp.git
synced 2025-04-19 21:16:06 +00:00

There are a couple of notable things in this architecture:

1. The input and output embedding parameters are shared.
2. The key length and value length are not derived from `n_embd`.

More information about the models can be found at https://ai.google.dev/gemma. GGUFs can be downloaded from https://huggingface.co/google.
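The two points above can be sketched as follows. This is a minimal illustration with made-up sizes (they are only roughly Gemma-7B-like and are not read from a real GGUF): the output projection reuses the input embedding matrix, and the per-head key/value length is an explicit hyperparameter rather than `n_embd / n_head`.

```python
import numpy as np

# Hypothetical, illustrative sizes only.
n_vocab, n_embd = 32, 3072
n_head, head_dim = 16, 256        # note: head_dim != n_embd // n_head (192)

rng = np.random.default_rng(0)

# 1. Shared input/output embedding: one matrix serves both roles.
tok_embd = rng.standard_normal((n_vocab, n_embd)).astype(np.float32)

ids = np.array([3, 7, 1])
x = tok_embd[ids]                 # input side: embed token ids -> (3, n_embd)
logits = x @ tok_embd.T           # output side: reuse the same weights -> (3, n_vocab)

# 2. K/V length is explicit: the key projection maps
#    n_embd -> n_head * head_dim, not n_embd -> n_embd.
w_k = rng.standard_normal((n_embd, n_head * head_dim)).astype(np.float32)
k = (x @ w_k).reshape(len(ids), n_head, head_dim)

assert n_head * head_dim != n_embd  # 4096 vs 3072 in this sketch
```

The practical consequence for an implementation is that the attention tensor shapes must be driven by a stored `head_dim`-style hyperparameter instead of being inferred from the embedding width.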