Rudi Servo
b0091ecc1e
docker : added all CPU variants to GPU images ( #12749 )
2025-04-10 01:17:12 +02:00
Chenguang Li
6e1c4cebdb
CANN: Support CONV_TRANSPOSE_1D and ELU operators ( #12786 )
...
* [CANN] Support ELU and CONV_TRANSPOSE_1D
* [CANN] Address review comments
* [CANN] Name adjustment
* [CANN] Remove lambda used in template
* [CANN] Use std::function instead of template
* [CANN] Address remaining review comments
---------
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-09 14:04:14 +08:00
Xuan-Son Nguyen
bd3f59f812
cmake : enable curl by default ( #12761 )
...
* cmake : enable curl by default
* no curl if no examples
* fix build
* fix build-linux-cross
* add windows-setup-curl
* fix
* shell
* fix path
* fix windows-latest-cmake*
* run: include_directories
* LLAMA_RUN_EXTRA_LIBS
* sycl: no llama_curl
* no test-arg-parser on windows
* clarification
* try riscv64 / arm64
* windows: include libcurl inside release binary
* add msg
* fix mac / ios / android build
* will this fix xcode?
* try clearing the cache
* add a bunch of licenses
* revert clear cache
* fix xcode
* fix xcode (2)
* fix typo
2025-04-07 13:35:19 +02:00
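Since this change, curl support is compiled in by default rather than opt-in. A minimal sketch of both configurations, assuming the repo's standard CMake flow (LLAMA_CURL is the option this PR flips):

```sh
# Default configure: curl support is now ON, enabling features such as
# fetching models over HTTP.
cmake -B build
cmake --build build --config Release

# Opting out, e.g. on systems without libcurl development headers:
cmake -B build -DLLAMA_CURL=OFF
```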
Georgi Gerganov
68ff663a04
repo : update links to new url ( #11886 )
...
* repo : update links to new url
ggml-ci
* cont : more urls
ggml-ci
2025-02-15 16:40:57 +02:00
Georgi Gerganov
dbc2ec59b5
docker : drop to CUDA 12.4 ( #11869 )
...
* docker : drop to CUDA 12.4
* docker : update readme [no ci]
2025-02-14 14:48:40 +02:00
R0CKSTAR
bd6e55bfd3
musa: bump MUSA SDK version to rc3.1.1 ( #11822 )
...
* musa: Update MUSA SDK version to rc3.1.1
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* musa: Remove workaround in PR #10042
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
---------
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-02-13 13:28:18 +01:00
Xuan-Son Nguyen
d0c08040b6
ci : fix build CPU arm64 ( #11472 )
...
* ci : fix build CPU arm64
* failed, trying ubuntu 22
* vulkan: ubuntu 24
* vulkan : jammy --> noble
2025-01-29 00:02:56 +01:00
Nuno
d7d1eccacc
docker: allow installing pip packages system-wide ( #11437 )
...
Signed-off-by: rare-magma <rare-magma@posteo.eu>
2025-01-28 14:17:25 +00:00
Nuno
f643120bad
docker: add perplexity and bench commands to full image ( #11438 )
...
Signed-off-by: rare-magma <rare-magma@posteo.eu>
2025-01-28 10:42:32 +00:00
Xuan Son Nguyen
caf773f249
docker : fix ARM build and Vulkan build ( #11434 )
...
* ci : do not fail-fast for docker
* build arm64/amd64 separately
* fix pip
* no fast fail
* vulkan: try jammy
2025-01-26 22:45:32 +01:00
Nuno
6f53d8a6b4
docker: add missing vulkan library to base layer and update to 24.04 ( #11422 )
...
Signed-off-by: rare-magma <rare-magma@posteo.eu>
2025-01-26 18:22:43 +01:00
Diego Devesa
6e264a905b
docker : add GGML_CPU_ARM_ARCH arg to select ARM architecture to build for ( #11419 )
2025-01-25 17:22:41 +01:00
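A hedged usage sketch for the new build argument; the Dockerfile path and the architecture value below are illustrative assumptions, not taken from the commit:

```sh
# Bake a specific ARM architecture into the CPU image at build time.
docker build \
  --build-arg GGML_CPU_ARM_ARCH=armv8.2-a \
  -f .devops/cpu.Dockerfile \
  -t llama-cpp:arm64 .
```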
Diego Devesa
20a758155b
docker : fix CPU ARM build ( #11403 )
...
* docker : fix CPU ARM build
* add CURL to other builds
2025-01-25 15:22:29 +01:00
Rudi Servo
7c0e285858
devops : add docker-multi-stage builds ( #10832 )
2024-12-22 23:22:58 +01:00
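Multi-stage Dockerfiles let one file produce several images by selecting a build stage. A sketch under the assumption that stages are named along the lines of full and server (both illustrative):

```sh
# Build two different images from the same multi-stage Dockerfile;
# the stage names here are illustrative assumptions.
docker build --target full   -t llama-cpp:full   -f .devops/cpu.Dockerfile .
docker build --target server -t llama-cpp:server -f .devops/cpu.Dockerfile .
```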
Evgeny Kurnevsky
e52aba537a
nix: allow overriding rocm gpu targets ( #10794 )
...
This reduces compile time when building for a single GPU.
2024-12-14 10:17:36 -08:00
Corentin REGAL
11e07fd63b
fix: graceful shutdown for Docker images ( #10815 )
2024-12-13 18:23:50 +01:00
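Graceful shutdown in a container usually comes down to the server running as PID 1 so that SIGTERM from docker stop reaches it. A minimal entrypoint sketch under that assumption (the binary path is illustrative):

```sh
#!/bin/sh
# exec replaces this shell with llama-server, so signals sent by
# `docker stop` are delivered to the server instead of a wrapper shell.
exec /app/llama-server "$@"
```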
Diego Devesa
59f4db1088
ggml : add predefined list of CPU backend variants to build ( #10626 )
...
* ggml : add predefined list of CPU backend variants to build
* update CPU dockerfiles
2024-12-04 14:45:40 +01:00
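These predefined variants pair with dynamic backend loading: every variant is compiled once, and the best one is chosen when the process starts (see the next entry). A sketch, assuming GGML_BACKEND_DL and GGML_CPU_ALL_VARIANTS are meant to be used together:

```sh
# Build all predefined CPU variants as loadable backends; the runtime
# then selects the best match for the host CPU.
cmake -B build -DGGML_NATIVE=OFF -DGGML_BACKEND_DL=ON -DGGML_CPU_ALL_VARIANTS=ON
cmake --build build --config Release
```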
Diego Devesa
3420909dff
ggml : automatic selection of best CPU backend ( #10606 )
...
* ggml : automatic selection of best CPU backend
* amx : minor opt
* add GGML_AVX_VNNI to enable avx-vnni, fix checks
2024-12-01 16:12:41 +01:00
R0CKSTAR
249cd93da3
mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make ( #10516 )
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2024-11-26 17:00:41 +01:00
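Analogous to the CUDA and ROCm build arguments, this selects which MUSA architectures get compiled into the image. A sketch with placeholder values; the arch identifier and Dockerfile path are illustrative assumptions:

```sh
# Target a specific MUSA GPU architecture; "21" is a placeholder value.
docker build \
  --build-arg MUSA_DOCKER_ARCH=21 \
  -f .devops/musa.Dockerfile \
  -t llama-cpp:musa .
```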
Xuan Son Nguyen
45abe0f74e
server : replace behave with pytest ( #10416 )
...
* server : replace behave with pytest
* fix test on windows
* misc
* add more tests
* more tests
* styling
* log less, fix embd test
* added all sequential tests
* fix coding style
* fix save slot test
* add parallel completion test
* fix parallel test
* remove feature files
* update test docs
* no cache_prompt for some tests
* add test_cache_vs_nocache_prompt
2024-11-26 16:20:18 +01:00
Johannes Gäßler
75207b3a88
docker: use GGML_NATIVE=OFF ( #10368 )
2024-11-18 00:21:53 +01:00
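GGML_NATIVE=OFF avoids -march=native, so binaries built inside a container still run on hosts with a different CPU. A minimal sketch:

```sh
# Portable build: do not tune for the build machine's CPU, which could
# crash on hosts lacking the same instruction set extensions.
cmake -B build -DGGML_NATIVE=OFF
cmake --build build --config Release
```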
Romain Biessy
57f8355b29
sycl: Update Intel docker images to use DPC++ 2025.0 ( #10305 )
2024-11-15 13:10:45 +02:00
Chenguang Li
231f9360d9
cann: dockerfile and doc adjustment ( #10302 )
...
Co-authored-by: noemotiovon <noemotiovon@gmail.com>
2024-11-15 15:09:35 +08:00
Diego Devesa
ae8de6d50a
ggml : build backends as libraries ( #10256 )
...
* ggml : build backends as libraries
---------
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: R0CKSTAR <xiaodong.ye@mthreads.com>
2024-11-14 18:04:35 +01:00
R0CKSTAR
cf8e0a3bb9
musa: add docker image support ( #9685 )
...
* mtgpu: add docker image support
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* mtgpu: enable docker workflow
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
---------
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2024-10-10 20:10:37 +02:00
serhii-nakon
6f1d9d71f4
Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS ( #9641 )
...
* Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS
* Set ROCM_DOCKER_ARCH as a string, since it otherwise built incorrectly and caused an OOM exit code
2024-09-30 20:57:12 +02:00
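A sketch of passing the target list through the build argument named in the fix; the gfx values and Dockerfile path are illustrative assumptions:

```sh
# Bake specific ROCm GPU targets into the image; note the value is a
# single semicolon-separated string, per the fix above.
docker build \
  --build-arg ROCM_DOCKER_ARCH="gfx1030;gfx1100" \
  -f .devops/rocm.Dockerfile \
  -t llama-cpp:rocm .
```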
slaren
048de848ee
docker : fix missing binaries in full-cuda image ( #9278 )
2024-09-02 18:11:13 +02:00
Tushar
9c1ba55733
build(nix): Package gguf-py ( #5664 )
...
* style: format with nixfmt/rfc101-style
* build(nix): Package gguf-py
* build(nix): Refactor to new scope for gguf-py
* build(nix): Exclude gguf-py from devShells
* build(nix): Refactor gguf-py derivation to take in exact deps
* build(nix): Enable pytestCheckHook and pythonImportsCheck for gguf-py
* build(python): Package python scripts with pyproject.toml
* chore: Cleanup
* dev(nix): Break up python/C devShells
* build(python): Relax pytorch version constraint
Nix has an older version
* chore: Move cmake to nativeBuildInputs for devShell
* fmt: Reconcile formatting with rebase
* style: nix fmt
* cleanup: Remove unnecessary __init__.py
* chore: Suggestions from review
- Filter out non-source files from llama-scripts flake derivation
- Clean up unused closure
- Remove scripts devShell
* revert: Bad changes
* dev: Simplify devShells, restore the -extra devShell
* build(nix): Add pyyaml for gguf-py
* chore: Remove some unused bindings
* dev: Add tiktoken to -extra devShells
2024-09-02 14:21:01 +03:00
Echo Nolan
a47667cff4
nix: fix CUDA build - replace deprecated autoAddOpenGLRunpathHook
...
The CUDA nix build broke when we updated nixpkgs in
8cd1bcfd3fc9f2b5cbafd7fb7581b3278acec25f. As far as I can tell, all
that happened is that cudaPackages.autoAddOpenGLRunpathHook was moved
to pkgs.autoAddDriverRunpath. This commit fixes it.
2024-08-31 08:44:21 +00:00
slaren
66b039a501
docker : update CUDA images ( #9213 )
2024-08-28 13:20:36 +02:00
Xuan Son Nguyen
a77feb5d71
server : add some missing env variables ( #9116 )
...
* server : add some missing env variables
* add LLAMA_ARG_HOST to server dockerfile
* also add LLAMA_ARG_CONT_BATCHING
2024-08-27 11:07:01 +02:00
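The LLAMA_ARG_* environment variables mirror the server's CLI flags, which keeps container startup commands short. A sketch; the image tag and values are illustrative assumptions:

```sh
# Configure llama-server through its environment instead of CLI flags.
docker run -p 8080:8080 \
  -e LLAMA_ARG_HOST=0.0.0.0 \
  -e LLAMA_ARG_CONT_BATCHING=1 \
  ghcr.io/ggerganov/llama.cpp:server
```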
wangshuai09
cfac111e2b
cann: add doc for cann backend ( #8867 )
...
Co-authored-by: xuedinge233 <damow890@gmail.com>
Co-authored-by: hipudding <huafengchun@gmail.com>
2024-08-19 16:46:38 +08:00
Brandon Squizzato
0d6fb52be0
Install curl in runtime layer ( #8693 )
2024-08-04 20:17:16 +02:00
Someone
268c566006
nix: cuda: rely on propagatedBuildInputs ( #8772 )
...
Listing individual outputs is no longer necessary to reduce the runtime closure size after https://github.com/NixOS/nixpkgs/pull/323056.
2024-07-30 13:35:30 -07:00
Xuan Son Nguyen
be6d7c0791
examples : remove finetune and train-text-from-scratch ( #8669 )
...
* examples : remove finetune and train-text-from-scratch
* fix build
* update help message
* fix small typo for export-lora
2024-07-25 10:39:04 +02:00
Joe Todd
f19bf99c01
Build Llama SYCL Intel with static libs ( #8668 )
...
Ensure SYCL CI builds both static & dynamic libs for testing purposes
Signed-off-by: Joe Todd <joe.todd@codeplay.com>
2024-07-24 14:36:00 +01:00
Al Mochkin
b3283448ce
build : Fix docker build warnings ( #8535 ) ( #8537 )
2024-07-17 20:21:55 +02:00
bandoti
17eb6aa8a9
vulkan : cmake integration ( #8119 )
...
* Add Vulkan to CMake pkg
* Add Sycl to CMake pkg
* Add OpenMP to CMake pkg
* Split generated shader file into separate translation unit
* Add CMake target for Vulkan shaders
* Update README.md
* Add make target for Vulkan shaders
* Use pkg-config to locate vulkan library
* Add vulkan SDK dep to ubuntu-22-cmake-vulkan workflow
* Clean up tabs
* Move sudo to apt-key invocation
* Forward GGML_EXTRA_LIBS to CMake config pkg
* Update vulkan obj file paths
* Add shaderc to nix pkg
* Add python3 to Vulkan nix build
* Link against ggml in cmake pkg
* Remove Python dependency from Vulkan build
* code review changes
* Remove trailing newline
* Add cflags from pkg-config to fix w64devkit build
* Update README.md
* Remove trailing whitespace
* Update README.md
* Remove trailing whitespace
* Fix doc heading
* Make glslc required Vulkan component
* remove clblast from nix pkg
2024-07-13 18:12:39 +02:00
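For reference, enabling the Vulkan backend in a CMake build looks roughly like this; GGML_VULKAN is the option name used by the project around this time, and the Vulkan SDK (including glslc, which this PR makes a required component) is assumed to be installed:

```sh
# Configure with the Vulkan backend; requires the Vulkan SDK and glslc.
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
```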
Armen Kaleshian
8a4441ea1a
docker : fix filename for convert-hf-to-gguf.py in tools.sh ( #8441 )
...
Commit b0a4699 renamed this script from convert-hf-to-gguf.py to
convert_hf_to_gguf.py, breaking how convert is called from within a
Docker container.
2024-07-12 11:08:19 +03:00
compilade
3fd62a6b1c
py : type-check all Python scripts with Pyright ( #8341 )
...
* py : type-check all Python scripts with Pyright
* server-tests : use trailing slash in openai base_url
* server-tests : add more type annotations
* server-tests : strip "chat" from base_url in oai_chat_completions
* server-tests : model metadata is a dict
* ci : disable pip cache in type-check workflow
The cache is not shared between branches, and it's 250MB in size,
so it would become quite a big part of the 10GB cache limit of the repo.
* py : fix new type errors from master branch
* tests : fix test-tokenizer-random.py
Apparently, gcc applies optimisations even when pre-processing,
which confuses pycparser.
* ci : only show warnings and errors in python type-check
The "information" level otherwise has entries
from 'examples/pydantic_models_to_grammar.py',
which could be confusing for someone trying to figure out what failed,
considering that these messages can safely be ignored
even though they look like errors.
2024-07-07 15:04:39 -04:00
Michael Francis
3840b6f593
nix : enable curl ( #8043 )
...
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-07-01 14:47:04 +03:00
Georgi Gerganov
257f8e41e2
nix : remove OpenCL remnants ( #8235 )
...
* nix : remove OpenCL remnants
* minor : remove parentheses
2024-07-01 14:46:18 +03:00
Georgi Gerganov
0e814dfc42
devops : remove clblast + LLAMA_CUDA -> GGML_CUDA ( #8139 )
...
ggml-ci
2024-06-26 19:32:07 +03:00
Georgi Gerganov
f3f65429c4
llama : reorganize source code + improve CMake ( #8006 )
...
* scripts : update sync [no ci]
* files : relocate [no ci]
* ci : disable kompute build [no ci]
* cmake : fixes [no ci]
* server : fix mingw build
ggml-ci
* cmake : minor [no ci]
* cmake : link math library [no ci]
* cmake : build normal ggml library (not object library) [no ci]
* cmake : fix kompute build
ggml-ci
* make,cmake : fix LLAMA_CUDA + replace GGML_CDEF_PRIVATE
ggml-ci
* move public backend headers to the public include directory (#8122 )
* move public backend headers to the public include directory
* nix test
* spm : fix metal header
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* scripts : fix sync paths [no ci]
* scripts : sync ggml-blas.h [no ci]
---------
Co-authored-by: slaren <slarengh@gmail.com>
2024-06-26 18:33:02 +03:00
joecryptotoo
925c30956d
Add healthchecks to llama-server containers ( #8081 )
...
* added healthcheck
* moved curl to base
2024-06-25 17:13:27 +02:00
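llama-server exposes a /health endpoint, so a container healthcheck can be a simple curl probe (which is why curl moved into the base layer). A sketch using docker run flags; the port and image tag are illustrative assumptions:

```sh
# Poll the server's /health endpoint to mark the container healthy.
docker run -p 8080:8080 \
  --health-cmd "curl -f http://localhost:8080/health || exit 1" \
  --health-interval 30s \
  ghcr.io/ggerganov/llama.cpp:server
```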
Olivier Chafik
1c641e6aac
build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... ( #7809 )
...
* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server
gitignore llama-server
* server: simplify nix package
* main: update refs -> llama
fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names.
Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
This reverts commit e474ef1df481fd8936cd7d098e3065d7de378930.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh
---------
Co-authored-by: HanClinto <hanclinto@gmail.com>
2024-06-13 00:41:52 +01:00
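After this rename, every example binary carries the llama- prefix. A sketch of the new invocations (model path illustrative):

```sh
# Formerly ./main:
./llama-cli -m models/model.gguf -p "Hello"
# Formerly ./server:
./llama-server -m models/model.gguf --port 8080
```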
Meng, Hengyu
dcf752707d
update intel docker oneapi-basekit to 2024.1.1-devel-ubuntu22.04 ( #7894 )
...
In addition, this reverts a workaround for the upstream issue with expired Intel GPG package keys in 2024.0.1-devel-ubuntu22.04.
2024-06-12 19:05:35 +10:00
slaren
2d08b7fbb4
docker : build only main and server in their images ( #7782 )
...
* add openmp lib to dockerfiles
* build only main and server in their docker images
2024-06-06 08:19:49 +03:00
slaren
d67caea0d6
docker : add openmp lib ( #7780 )
2024-06-06 08:17:21 +03:00
JohnnyB
9022c33646
Fixed painfully slow single-process builds. ( #7326 )
...
* Fixed painfully slow single-process builds.
* Added an nproc fallback for systems that don't provide nproc
2024-05-30 22:32:38 +02:00