Coverity builds broken #3723
When I installed the new Jenkins workspace, I downloaded a more recent version of Coverity and deployed it on the existing machines too (and I updated the Jenkins job). The compilation is probably running out of memory. I saw the same symptoms on Fedora hosts: when it happens, the kernel kills the entire process tree, including the Jenkins agent.
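When a build dies this way, the kernel log usually records the kill. A quick way to confirm OOM-killer activity on the agent (a hedged sketch; `dmesg` readability and the exact log wording vary by host and kernel):

```shell
# Check the kernel ring buffer for OOM-killer activity; lines like
# "Out of memory: Killed process ..." appear when the kernel terminates
# a process tree under memory pressure.
dmesg 2>/dev/null | grep -iE "out of memory|oom-kill|killed process" || true
# On systemd hosts the same records are kept in the journal:
# journalctl -k | grep -i oom
```

If those lines show up around the time the agent went offline, that would confirm the out-of-memory theory.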
The Equinix machines (where the job was running) have more CPU/RAM (#3597 (comment)) than the IBM machine (#3597 (comment)), which I put back online a few hours ago. The job is running with a parallelism setting that for the Equinix machine was 16 (which would appear to be 2 threads per each of the 8 cores). We could possibly set that lower. The IBM machine is 2 vCPUs/4 GB RAM, which is more like the regular test machines -- maybe adding 2 GB of swap, as we did for the test machines, would be sufficient, although the job tends to prefer running on test-equinix-ubuntu2204-x64-1. Or maybe we can be more drastic and shift the job to the Hetzner benchmark machines? I forget whether there's a reason these had to run on the current machines.
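For reference, adding 2 GB of swap the way it was done for the test machines would look roughly like the following (a minimal sketch; the path, size, and fstab entry are assumptions about the provisioning, and it must run as root):

```shell
# Create and enable a 2 GB swap file (illustrative; run as root).
fallocate -l 2G /swapfile      # reserve 2 GB on disk
chmod 600 /swapfile            # swap files must not be world-readable
mkswap /swapfile               # format it as swap space
swapon /swapfile               # enable it immediately
# Persist across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```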
Here's a run with the value hardcoded.
That build passed. I suggest keeping the hardcoded value until a better solution is implemented.
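A less brittle variant of the hardcoded value would be to cap parallelism rather than take the machine's full thread count. A hedged sketch of the idea (the cap of 4, the use of `nproc`, and the `cov-build` invocation shown are illustrative assumptions, not the job's actual configuration):

```shell
# Derive a bounded job count: use the machine's CPU count, but never
# exceed a fixed cap, so a 16-thread Equinix host doesn't exhaust RAM.
CORES=$(nproc)
MAX_JOBS=4                     # illustrative cap, not the job's real value
JOBS=$(( CORES < MAX_JOBS ? CORES : MAX_JOBS ))
echo "building with $JOBS parallel jobs"
# The Coverity wrapper would then be invoked with the bounded value, e.g.:
# cov-build --dir cov-int make -j"$JOBS"   (assumed invocation)
```

This keeps small machines (like the 2 vCPU IBM host) at their natural parallelism while bounding memory use on the larger ones.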
The two most recent node-daily-coverity builds have failed.
There is some error about the agent going offline, with no other obvious error.
e.g. https://ci.nodejs.org/view/Node.js%20Daily/job/node-daily-coverity/3010/console