Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2025-04-20 05:26:07 +00:00)

* CPU/CUDA: Gemma 2 FlashAttention support
* apply logit_softcap to scale in kernel
* disable logit softcapping tests on Metal
* remove Metal check
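
A minimal CPU-side sketch of the idea behind "apply logit_softcap to scale in kernel": Gemma 2 caps its attention logits as `softcap * tanh(score * scale / softcap)`, and the division by the softcap can be folded into the scale factor the kernel already multiplies by. The function and variable names below are illustrative, not llama.cpp's actual symbols.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Apply attention logit softcapping to raw scores s = q·k.
// scale is typically 1/sqrt(head_dim); logit_softcap is e.g. 50.0 for Gemma 2.
static void softcap_scores(std::vector<float> & s, float scale, float logit_softcap) {
    // Fold the softcap divisor into the scale the kernel already applies,
    // so each score only needs one multiply before the tanh.
    const float scale_capped = scale / logit_softcap;
    for (float & x : s) {
        // Equivalent to: logit_softcap * tanh((x * scale) / logit_softcap)
        x = logit_softcap * tanhf(x * scale_capped);
    }
}

int main() {
    std::vector<float> scores = { -300.0f, -10.0f, 0.0f, 10.0f, 300.0f };
    softcap_scores(scores, 1.0f / sqrtf(256.0f), 50.0f); // head_dim = 256, softcap = 50
    for (float x : scores) {
        printf("%.4f\n", x); // all values end up capped into (-50, 50)
    }
    return 0;
}
```

The same folding carries over to a FlashAttention-style kernel: the pre-softmax scale becomes `scale / softcap`, the `tanh` is applied to the running block scores, and the result is multiplied back by `softcap` before the softmax accumulation.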