
Avoiding Deadlocks Due to Queue Overflow with multiprocessing.JoinableQueue

Suppose we have a multiprocessing.Pool whose worker processes share a multiprocessing.JoinableQueue, dequeuing work items and potentially enqueuing more work:

def worker_main(queue):
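The question's snippet is truncated, so here is a minimal, runnable sketch of the pattern it describes. `process()` is a hypothetical toy workload and `run_demo()` an illustrative driver, neither from the original; the deadlock risk lives in the `queue.put(child)` call, which blocks when the queue is full.

```python
import multiprocessing as mp

def process(item):
    # Hypothetical toy workload: an item greater than 1 spawns two
    # smaller items; a 1 spawns nothing.
    return [item - 1, item - 1] if item > 1 else []

def worker_main(queue, counter):
    # Each worker dequeues items; processing an item may enqueue more.
    # queue.put() blocks once the queue is full, so if every worker is
    # simultaneously stuck in put(), task_done() is never reached and
    # queue.join() deadlocks.
    while True:
        item = queue.get()
        try:
            if item is None:            # sentinel: shut down
                return
            for child in process(item):
                queue.put(child)        # deadlock risk when the queue is full
            with counter.get_lock():
                counter.value += 1
        finally:
            queue.task_done()

def run_demo(n_workers=2, seed=3):
    queue = mp.JoinableQueue(maxsize=64)   # ample bound for this toy run
    counter = mp.Value("i", 0)
    workers = [mp.Process(target=worker_main, args=(queue, counter))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    queue.put(seed)
    queue.join()                # wait until every put() has a task_done()
    for _ in workers:
        queue.put(None)         # one sentinel per worker
    for w in workers:
        w.join()
    return counter.value

if __name__ == "__main__":
    print(run_demo())           # 3 spawns two 2s, each 2 spawns two 1s: 7 items
```

Note that children are enqueued before the parent item's `task_done()`, so `queue.join()` cannot return while descendants are still pending.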

Solution 1:

Depending on how often process(work) creates more items, there may be no solution other than a queue with an unbounded maximum size.

In short, your queue must be large enough to accommodate the entire backlog of work items you can have at any time.


Since the queue is implemented with semaphores, there may indeed be a hard size limit of SEM_VALUE_MAX, which on macOS is 32767. If that's not enough, you'll need to subclass that implementation, or use put(item, block=False) and handle queue.Full (e.g. by putting excess items somewhere else).
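A sketch of that non-blocking fallback, assuming a per-process overflow buffer; `safe_put` and `drain` are illustrative names, not from the original answer:

```python
from collections import deque
import queue  # only for the queue.Full exception class

def safe_put(q, item, overflow):
    # Try a non-blocking put; if the shared queue is full, park the item
    # in a local overflow buffer instead of blocking (and risking the
    # deadlock described above).
    try:
        q.put_nowait(item)
    except queue.Full:
        overflow.append(item)

def drain(q, overflow):
    # Move parked items back onto the shared queue as space frees up;
    # call this periodically from the worker loop.
    while overflow:
        try:
            q.put_nowait(overflow[0])
        except queue.Full:
            break
        overflow.popleft()
```

This works unchanged with multiprocessing.JoinableQueue, whose put_nowait() raises the same queue.Full exception as the standard-library queue.Queue.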

Alternatively, look at one of the third-party distributed work-queue implementations for Python.
