
* add oneDNN
* add sycl_f16
* add dnnl stream
* add engine map
* use dnnl for Intel devices only
* use fp16/fp16/fp16 (f16 src, weights, and dst for the matmul)
* update doc
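The following is a minimal sketch of what the bullets above refer to, not the code from this change: it assumes the oneDNN v3 C++ API with SYCL interop, and the helper names `get_dnnl_ctx` and `dnnl_matmul_f16` are hypothetical. It shows a per-device engine/stream cache ("engine map", "dnnl stream") and an f16 x f16 -> f16 matmul; the real change additionally gates this path to Intel devices only, which is not shown here.

```cpp
// Hypothetical sketch only. Assumes oneDNN v3 with SYCL interop and USM device
// pointers allocated on the same device as the queue `q`.
#include <cstdint>
#include <unordered_map>
#include <sycl/sycl.hpp>
#include "oneapi/dnnl/dnnl.hpp"
#include "oneapi/dnnl/dnnl_sycl.hpp"

struct dnnl_ctx {
    dnnl::engine eng;
    dnnl::stream strm;
};

// One engine + stream per device index, created lazily from the backend's
// SYCL queue (the "engine map" and "dnnl stream" bullets).
static dnnl_ctx & get_dnnl_ctx(int device_index, sycl::queue & q) {
    static std::unordered_map<int, dnnl_ctx> cache;
    auto it = cache.find(device_index);
    if (it == cache.end()) {
        dnnl::engine eng  = dnnl::sycl_interop::make_engine(q.get_device(), q.get_context());
        dnnl::stream strm = dnnl::sycl_interop::make_stream(eng, q);
        it = cache.emplace(device_index, dnnl_ctx{eng, strm}).first;
    }
    return it->second;
}

// f16/f16/f16 matmul: C[M,N] = A[M,K] * B[K,N], row-major, all operands f16
// (the "use fp16fp16fp16" bullet). a, b, c are USM device pointers.
static void dnnl_matmul_f16(int device_index, sycl::queue & q,
                            const void * a, const void * b, void * c,
                            int64_t M, int64_t N, int64_t K) {
    dnnl_ctx & ctx = get_dnnl_ctx(device_index, q);

    using dt  = dnnl::memory::data_type;
    using tag = dnnl::memory::format_tag;

    dnnl::memory::desc a_md({M, K}, dt::f16, tag::ab);
    dnnl::memory::desc b_md({K, N}, dt::f16, tag::ab);
    dnnl::memory::desc c_md({M, N}, dt::f16, tag::ab);

    // Wrap the existing device buffers without copying.
    dnnl::memory a_mem(a_md, ctx.eng, const_cast<void *>(a));
    dnnl::memory b_mem(b_md, ctx.eng, const_cast<void *>(b));
    dnnl::memory c_mem(c_md, ctx.eng, c);

    dnnl::matmul::primitive_desc pd(ctx.eng, a_md, b_md, c_md);
    dnnl::matmul(pd).execute(ctx.strm, {
        {DNNL_ARG_SRC,     a_mem},
        {DNNL_ARG_WEIGHTS, b_mem},
        {DNNL_ARG_DST,     c_mem},
    });
    ctx.strm.wait();
}
```

Creating the oneDNN stream from the backend's own SYCL queue keeps the matmul ordered with the rest of the backend's work on that queue, and caching one engine per device avoids re-creating engines on every call.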