
When using group query attention, we have one workgroup per KV batch, which can mean very few workgroups overall (e.g. just 8 in some models). Enable split_k to spread the work across SMs. This helps a lot when the KV cache is large.
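
A minimal sketch of the idea, not the actual llama.cpp code: pick a split_k factor so that the total number of workgroups (KV batches times split_k) is at least roughly the number of SMs, then reduce the partial results. The names (choose_split_k, kv_batches, num_sms) and the cap are hypothetical.

```cpp
#include <cstdint>

// Hypothetical heuristic: split the KV dimension until there are enough
// workgroups to occupy every SM, or until an arbitrary cap is reached.
// Each of the split_k partial attention results must later be combined
// in a reduction pass (not shown here).
static uint32_t choose_split_k(uint32_t kv_batches, uint32_t num_sms) {
    uint32_t split_k = 1;
    while (kv_batches * split_k < num_sms && split_k < 16) {
        split_k *= 2;
    }
    return split_k;
}

// Example: with 8 KV batches on a GPU with 80 SMs, this returns 16,
// giving 128 workgroups instead of 8.
```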