mirrors / llama.cpp
mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-04-22 20:26:05 +00:00
llama.cpp / ggml
Latest commit: Jeff Bolz  7ecd780b1a  2025-04-09 07:12:57 +02:00
vulkan: Use fp16 for the flash attention P*V multiplication (#12783)

This is consistent with the ggml-cuda behavior and the mul_mat fallback.
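For context on the commit above, here is a minimal conceptual sketch of the P*V step it refers to: in flash attention, the probabilities P = softmax(Q*K^T / sqrt(d)) are multiplied by the value matrix V to form the output O = P*V, and this commit performs that multiplication in fp16. The sketch below is not the ggml Vulkan shader; the names pv_row_fp16 and half_t, and the choice of an fp32 accumulator, are illustrative assumptions.

```cpp
// Conceptual sketch only: one row of the attention output O = P * V,
// with the individual P*V products formed in half precision.
#include <cstdio>
#include <vector>

using half_t = _Float16; // GCC/Clang extension, standing in for a shader fp16 type

// p: attention probabilities for one query row (n_kv entries, already softmaxed)
// V: n_kv x d value matrix, row-major
// o: output row of length d
void pv_row_fp16(const std::vector<float>& p,
                 const std::vector<float>& V,
                 std::vector<float>& o, int n_kv, int d) {
    for (int j = 0; j < d; ++j) {
        float acc = 0.0f;                    // accumulate in fp32 (assumption)
        for (int k = 0; k < n_kv; ++k) {
            half_t ph = (half_t) p[k];       // probability rounded to fp16
            half_t vh = (half_t) V[k*d + j]; // value element rounded to fp16
            acc += (float)(ph * vh);         // fp16 multiply, fp32 accumulate
        }
        o[j] = acc;
    }
}

int main() {
    const int n_kv = 4, d = 2;
    std::vector<float> p = {0.1f, 0.2f, 0.3f, 0.4f};   // softmax(Q*K^T) row
    std::vector<float> V = {1, 2,  3, 4,  5, 6,  7, 8}; // 4x2 value matrix
    std::vector<float> o(d, 0.0f);
    pv_row_fp16(p, V, o, n_kv, d);
    printf("o = [%f, %f]\n", o[0], o[1]);               // approximately [5.0, 6.0]
    return 0;
}
```

The small fp16 rounding of each product is usually acceptable here because the probabilities are bounded in [0, 1], which is the same trade-off the commit notes is already made by the ggml-cuda backend and the mul_mat fallback.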
..
cmake           scripts : update sync + fix cmake merge                                2025-03-27 10:09:29 +02:00
include         metal : improve FA + improve MoE (#12612)                              2025-03-28 20:21:59 +02:00
src             vulkan: Use fp16 for the flash attention P*V multiplication (#12783)  2025-04-09 07:12:57 +02:00
.gitignore      vulkan : cmake integration (#8119)                                     2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : add logging for native build options/vars (whisper/2935)       2025-03-30 08:33:31 +03:00