
This commit tries to address an issue with the server tests, which are failing with a timeout. Looking at the logs, they appear to be timing out after 12 seconds:

```
FAILED unit/test_chat_completion.py::test_completion_with_json_schema[False-json_schema0-6-"42"] - TimeoutError: Server did not start within 12 seconds
```

This is somewhat strange, as utils.py contains the following values:

```python
DEFAULT_HTTP_TIMEOUT = 12
if "LLAMA_SANITIZE" in os.environ or "GITHUB_ACTION" in os.environ:
    DEFAULT_HTTP_TIMEOUT = 30


def start(self, timeout_seconds: int | None = DEFAULT_HTTP_TIMEOUT) -> None:
```

A test running in a GitHub Action should therefore get a timeout of 30 seconds, but that does not seem to happen. Inspecting the logs from the CI job, we can see the following environment variables for the step:

```console
Run cd examples/server/tests
  cd examples/server/tests
  ./tests.sh
  shell: /usr/bin/bash -e {0}
  env:
    LLAMA_LOG_COLORS: 1
    LLAMA_LOG_PREFIX: 1
    LLAMA_LOG_TIMESTAMPS: 1
    LLAMA_LOG_VERBOSITY: 10
    pythonLocation: /opt/hostedtoolcache/Python/3.11.11/x64
```

Notice that `GITHUB_ACTION` does not appear in this list, which suggests the 30-second branch is not being taken and the tests fall back to the 12-second default.

This probably does not address the underlying issue that the servers providing the models to be downloaded occasionally take longer to respond, but it might improve these situations in some cases.
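For reference, a minimal sketch of the kind of adjustment that could make the CI detection more robust, assuming the check is widened to also look at `GITHUB_ACTIONS` (which the GitHub Actions runner always sets to `"true"`); the variable names mirror the snippet from utils.py above, and the actual change in this commit may differ:

```python
import os

# Sketch only: a more defensive CI detection, mirroring the logic quoted above.
# GITHUB_ACTIONS is a documented default environment variable that the GitHub
# Actions runner always sets to "true", so checking it as well reduces the
# chance of silently falling back to the short 12-second timeout.
DEFAULT_HTTP_TIMEOUT = 12
if any(v in os.environ for v in ("LLAMA_SANITIZE", "GITHUB_ACTION", "GITHUB_ACTIONS")):
    DEFAULT_HTTP_TIMEOUT = 30
```

Alternatively, simply raising the default timeout value would give the model download servers more headroom, at the cost of slower failures when something is genuinely broken.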