mirror of
https://github.com/ggerganov/llama.cpp.git
synced 2025-04-15 19:16:09 +00:00

* rpc : send hash when tensor data is above some fixed threshold (ref #10095)
* rpc : put cache under $HOME/.cache/llama.cpp
* try to fix win32 build
* another try to fix win32 build
* remove llama as dependency
5 lines
164 B
CMake
set(TARGET rpc-server)

add_executable(${TARGET} rpc-server.cpp)

target_link_libraries(${TARGET} PRIVATE ggml)

target_compile_features(${TARGET} PRIVATE cxx_std_17)
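For context, this CMakeLists.txt is pulled in by the top-level llama.cpp build rather than configured on its own. A minimal build sketch, assuming a checkout of the repository and the `GGML_RPC` CMake option that enables the RPC backend (check the repo's RPC example documentation for the current flag name):

```shell
# Sketch only: assumes a llama.cpp source tree in the current directory
# and that GGML_RPC=ON enables the RPC backend and its rpc-server target.
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release --target rpc-server
```

Note that the target links only against `ggml` (the commit above removed `llama` as a dependency), so the server builds without the full llama library.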