
* CUDA: use async data loading for FlashAttention

  Co-authored-by: Diego Devesa <slarengh@gmail.com>
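  The commit title names the technique but shows no code. As a rough illustration only (not the upstream llama.cpp kernel), the sketch below shows what asynchronous data loading means here: cp.async-style copies issued via the CUDA `__pipeline_memcpy_async` primitives fill one shared-memory buffer with the next K tile while the current tile is being consumed, so global-memory latency overlaps with computation. The kernel name, tile sizes, and the plain Q·K^T score loop are assumptions made for the example.

  ```cpp
  // Minimal sketch, NOT the actual llama.cpp FlashAttention kernel:
  // double-buffered asynchronous loading of K tiles into shared memory,
  // overlapping the copy of tile t+1 with the Q.K^T work on tile t.
  // On GPUs without cp.async hardware (< sm_80) the primitives fall back
  // to synchronous copies, so the code stays correct, just not overlapped.
  #include <cuda_pipeline.h>

  #define TILE_KV  32     // K rows per tile (assumed)
  #define HEAD_DIM 128    // head dimension (assumed)

  __global__ void attn_scores_async(const float * __restrict__ Q,
                                     const float * __restrict__ K,
                                     float       * __restrict__ scores,
                                     const int n_kv) {
      // two buffers: one being filled by async copies, one being consumed
      __shared__ __align__(16) float K_tile[2][TILE_KV*HEAD_DIM];

      const int     tid = threadIdx.x;
      const int     nth = blockDim.x;
      const float * q   = Q + (size_t) blockIdx.x*HEAD_DIM; // one query row per block

      // issue (but do not wait for) the copy of one K tile; 16 bytes per call
      auto load_tile = [&](const int buf, const int tile) {
          const float * src = K + (size_t) tile*TILE_KV*HEAD_DIM;
          for (int i = 4*tid; i < TILE_KV*HEAD_DIM; i += 4*nth) {
              __pipeline_memcpy_async(K_tile[buf] + i, src + i, 16);
          }
          __pipeline_commit();
      };

      const int n_tiles = n_kv / TILE_KV; // assumes n_kv % TILE_KV == 0

      load_tile(0, 0); // prefetch the first tile

      for (int t = 0; t < n_tiles; ++t) {
          const int cur = t & 1;

          if (t + 1 < n_tiles) {
              load_tile(cur ^ 1, t + 1); // start loading the next tile right away
          } else {
              __pipeline_commit();       // empty group keeps the wait count constant
          }

          __pipeline_wait_prior(1);      // tile t has arrived; tile t+1 may still be in flight
          __syncthreads();

          // consume tile t: dot products of q against each K row in the tile
          for (int r = tid; r < TILE_KV; r += nth) {
              float s = 0.0f;
              for (int d = 0; d < HEAD_DIM; ++d) {
                  s += q[d]*K_tile[cur][r*HEAD_DIM + d];
              }
              scores[(size_t) blockIdx.x*n_kv + t*TILE_KV + r] = s;
          }
          __syncthreads();               // done reading this buffer before it is refilled
      }
  }
  ```

  The double buffer plus the single `__pipeline_wait_prior(1)` is what makes the loading asynchronous: the copy for the next tile is already in flight while the current tile is consumed, instead of every thread stalling on a synchronous global-to-shared load each iteration.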