I've spent a lot of time writing and debugging multiprocessing code, so a few thoughts, besides the general idea that this looks good and I'm excited to try it:<p>- automatic restarting of workers after N tasks is very nice; I have had to hack that into places before because of (unresolvable) memory leaks in application code<p>- is there a way to attach a debugger to one of the workers? That would be really useful, though I appreciate the automatic reporting of the failing args (I also hack that in all the time)<p>- often, the reason a whole set of jobs is making no progress is a thundering herd on reading files (god forbid over NFS). It would be lovely to detect that using lsof or something similar<p>- it would also be extremely convenient to have an option that catches a Python MemoryError and scales down the parallelism in that case; this is quite difficult, but it would help a lot, since I often have to run a "test job" just to see how much parallelism I can actually use<p>- I didn't see the library use threadpoolctl anywhere; would it be possible to make that part of the interface, so we can limit thread parallelism from OpenMP/BLAS/MKL when multiprocessing? That oversubscription also often causes core thrashing<p>Sorry for all the asks, and feel free to push back to keep the interface clean. I will give the library a try regardless.
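For reference on the first point, the stdlib's multiprocessing.Pool already exposes this recycling pattern via maxtasksperchild; a minimal sketch (the worker function and the counts here are made up for illustration, not from the library under discussion):

```python
# Sketch of worker recycling in stdlib multiprocessing:
# Pool(maxtasksperchild=N) replaces each worker process after it has
# completed N tasks, which bounds the damage from memory leaks in
# application code.
import multiprocessing as mp

def work(x):
    # Stand-in for a leaky task; only the return value matters here.
    return x * x

def run():
    # chunksize=1 so each task counts individually toward the recycle
    # limit (a worker counts one dequeued chunk as one task).
    with mp.Pool(processes=2, maxtasksperchild=5) as pool:
        return pool.map(work, range(20), chunksize=1)

if __name__ == "__main__":
    print(run())
```

The catch, and presumably why a library feature is still welcome, is that maxtasksperchild only applies to Pool, and the recycled worker loses any per-process state you warmed up.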