mirror of https://github.com/ggerganov/llama.cpp.git (synced 2025-04-20 05:26:07 +00:00)

* add onednn
* add sycl_f16
* add dnnl stream
* add engine map
* use dnnl for intel only
* use fp16fp16fp16
* update doc
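For context, here is a minimal, hypothetical sketch (not the actual llama.cpp patch) of how these pieces could fit together using oneDNN's SYCL interop API: an engine map caching one `dnnl::engine` per SYCL queue, a `dnnl::stream` made from that queue, an Intel-only gate, and an f16/f16/f16 ("fp16fp16fp16": source, weights, and destination all in f16) matmul primitive. The helper names (`get_engine`, `dnnl_matmul_f16`, `device_is_intel`) and the queue-keyed map are assumptions for illustration.

```cpp
// Hypothetical sketch, not the actual llama.cpp code: oneDNN (DNNL) matmul via SYCL interop.
#include <map>
#include <string>
#include <sycl/sycl.hpp>
#include "oneapi/dnnl/dnnl.hpp"
#include "oneapi/dnnl/dnnl_sycl.hpp"

// "use dnnl for intel only": gate on the device vendor string; other vendors
// would keep using the existing SYCL kernels.
static bool device_is_intel(const sycl::device & dev) {
    return dev.get_info<sycl::info::device::vendor>().find("Intel") != std::string::npos;
}

// "add engine map": cache one dnnl::engine per SYCL queue and reuse it
// (illustrative cache keyed by queue address; not thread-safe).
static dnnl::engine get_engine(sycl::queue & q) {
    static std::map<sycl::queue *, dnnl::engine> engine_map;
    auto it = engine_map.find(&q);
    if (it != engine_map.end()) return it->second;
    dnnl::engine eng = dnnl::sycl_interop::make_engine(q.get_device(), q.get_context());
    engine_map.emplace(&q, eng);
    return eng;
}

// "use fp16fp16fp16": f16 src (MxK) * f16 weights (KxN) -> f16 dst (MxN),
// all row-major, on USM device pointers.
static void dnnl_matmul_f16(sycl::queue & q, int64_t M, int64_t N, int64_t K,
                            const void * src, const void * weights, void * dst) {
    if (!device_is_intel(q.get_device())) return;  // caller falls back to non-DNNL path

    dnnl::engine eng  = get_engine(q);
    // "add dnnl stream": wrap the SYCL queue so the primitive runs on it.
    dnnl::stream  strm = dnnl::sycl_interop::make_stream(eng, q);

    using dt  = dnnl::memory::data_type;
    using tag = dnnl::memory::format_tag;
    dnnl::memory::desc src_md({M, K}, dt::f16, tag::ab);
    dnnl::memory::desc wei_md({K, N}, dt::f16, tag::ab);
    dnnl::memory::desc dst_md({M, N}, dt::f16, tag::ab);

    dnnl::matmul::primitive_desc pd(eng, src_md, wei_md, dst_md);
    dnnl::matmul prim(pd);

    dnnl::memory src_mem(src_md, eng, const_cast<void *>(src));
    dnnl::memory wei_mem(wei_md, eng, const_cast<void *>(weights));
    dnnl::memory dst_mem(dst_md, eng, dst);

    prim.execute(strm, {{DNNL_ARG_SRC,     src_mem},
                        {DNNL_ARG_WEIGHTS, wei_mem},
                        {DNNL_ARG_DST,     dst_mem}});
    strm.wait();
}
```

Caching the engine per queue avoids recreating it on every matmul call, and gating on the device vendor keeps non-Intel SYCL targets on the existing kernels, matching the "engine map" and "intel only" items above.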