Replies: 1 comment 26 replies
-
Well, it could be as simple as there being jobs that stay in the active state forever; did you check that this is not the case? Maybe your workers never finish their jobs, and therefore no new jobs will be processed. Using a dashboard will give you more insight into what is going on.
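To check for jobs stuck in the active state as suggested above, the queue's job counts and active jobs can be inspected directly. A minimal diagnostic sketch, assuming a hypothetical queue name and local Redis connection (neither is from the thread), and requiring a running Redis instance:

```typescript
import { Queue } from 'bullmq';

// Assumed queue name and connection details.
const queue = new Queue('my-queue', {
  connection: { host: 'localhost', port: 6379 },
});

async function inspectQueue(): Promise<void> {
  // Counts per state; a growing 'active' count with no throughput
  // suggests jobs (or workers) that never finish.
  const counts = await queue.getJobCounts(
    'active', 'waiting', 'completed', 'failed', 'delayed',
  );
  console.log('job counts:', counts);

  // Look at the currently active jobs and when they started processing.
  const active = await queue.getActive(0, 9);
  for (const job of active) {
    console.log(job.id, job.name, 'processing since', job.processedOn);
  }

  await queue.close();
}

inspectQueue().catch(console.error);
```

A dashboard such as bull-board or Taskforce.sh surfaces the same state visually without writing a script.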
-
Hello, we have a fairly standard worker setup, started by instantiating the worker object:
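(The original code sample did not survive extraction. A minimal sketch of the kind of setup described, where the queue name, connection details, and log wording are assumptions, might look like:)

```typescript
import { Worker } from 'bullmq';

// Assumed queue name and connection; concurrency of 10 matches the
// problematic queues described below.
const worker = new Worker(
  'my-queue',
  async (job) => {
    console.log('job started', job.id);
    // ... actual job processing ...
  },
  {
    connection: { host: 'localhost', port: 6379 },
    concurrency: 10,
  },
);

worker.on('error', (err) => {
  console.error('worker error', err);
});

console.log('worker started');
```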
The issue we ran into is that, once in a blue moon, the worker would just not process any jobs at all: no error message, nothing, just silence (i.e. in the code sample above, we never get the
job started
log when this happens). Restarting always fixes it, but it makes our deployments very unreliable, and someone needs to monitor that the queues are processing correctly at every deploy. The mysterious thing is that we have 4 queues, and 1 queue never runs into this problem; the other 3 are problematic, but not always at the same time (sometimes just 1 queue fails). There is no difference in how the queues are set up, except that the non-problematic queue has a concurrency of only 4 instead of 10 and we put significantly more (short) jobs on it.
Do you have any pointers on where to start investigating this issue? I can confirm that the
new Worker(...)
line ran, because we do get the
worker started
log line. We also subscribe to worker errors via
worker.on('error', ...)
but we get no logs from that either.
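(For more visibility when diagnosing a silently idle worker, additional lifecycle events beyond 'error' can be subscribed to. A sketch, where the event selection and log wording are suggestions rather than anything from this thread:)

```typescript
import { Worker } from 'bullmq';

// Assumed queue name and connection.
const worker = new Worker('my-queue', async (job) => { /* ... */ }, {
  connection: { host: 'localhost', port: 6379 },
});

// Log every lifecycle transition so a worker that stops picking up
// jobs is easier to spot than with the 'error' event alone.
worker.on('active', (job) => console.log('active', job.id));
worker.on('completed', (job) => console.log('completed', job.id));
worker.on('failed', (job, err) => console.error('failed', job?.id, err));
worker.on('stalled', (jobId) => console.warn('stalled', jobId));
worker.on('closed', () => console.warn('worker closed'));
worker.on('error', (err) => console.error('worker error', err));
```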