I'm developing a custom tool that needs to coordinate multiple long-running worker processes for task execution without relying on external brokers like Redis or RabbitMQ. We want to avoid multiprocessing.Pool due to the setup time required for spawning new processes. Instead, we prefer persistent, warmed-up worker processes that can handle tasks efficiently.<p>Would ZeroMQ still be the best choice for this, or are there other alternatives with minimal overhead?
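For the "persistent, warmed-up workers" pattern without any external broker, the stdlib alone can get surprisingly far before reaching for ZeroMQ; a minimal sketch using long-lived processes fed by a `multiprocessing.Queue` (the squaring task and the two-worker count are placeholders, and the `fork` start method is Unix-only):

```python
import multiprocessing as mp

def worker(task_queue, result_queue):
    # Long-lived worker: do any expensive warm-up once here
    # (e.g. loading a model), then loop until a None sentinel arrives.
    while True:
        task = task_queue.get()
        if task is None:          # sentinel: shut down cleanly
            break
        result_queue.put(task * task)  # placeholder workload

def run_demo():
    ctx = mp.get_context("fork")  # Unix-only; use "spawn" on Windows/macOS
    task_queue = ctx.Queue()
    result_queue = ctx.Queue()
    workers = [ctx.Process(target=worker, args=(task_queue, result_queue))
               for _ in range(2)]
    for w in workers:
        w.start()
    for task in range(6):         # feed tasks to the already-warm pool
        task_queue.put(task)
    for _ in workers:             # one shutdown sentinel per worker
        task_queue.put(None)
    results = sorted(result_queue.get() for _ in range(6))
    for w in workers:
        w.join()
    return results

if __name__ == "__main__":
    print(run_demo())
```

The workers here pay their startup cost once and then block on `get()` between tasks, which is essentially what a broker-less ZeroMQ PUSH/PULL pipeline would give you, minus the socket layer.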
OK, this kinda sounds like mixing apples and oranges.<p>Are the multiple long-running worker processes working on distinct tasks (i.e. each is the equivalent of starting a separate, independent process/program with no dependencies between them), or are they worker processes that sit idle awaiting data to process?<p>Simplified related bash shell examples:<p>Guarantee sequential process execution: <a href="https://unix.stackexchange.com/questions/305039/pausing-a-bash-script-until-previous-commands-are-finished" rel="nofollow">https://unix.stackexchange.com/questions/305039/pausing-a-ba...</a><p>vs.<p>Wait for all processes to finish (can be modified into a continual request for the next thing(s) to do): <a href="https://stackoverflow.com/questions/356100/how-to-wait-in-bash-for-several-subprocesses-to-finish-and-return-exit-code-0" rel="nofollow">https://stackoverflow.com/questions/356100/how-to-wait-in-ba...</a>
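Both bash patterns linked above have direct stdlib equivalents in Python's `subprocess` module; a minimal sketch (the `sleep` commands are placeholders for real work):

```python
import subprocess
import time

def run_sequentially(commands):
    # Pattern 1: sequential -- each command must finish before the next
    # starts, like running commands one per line in a bash script.
    for cmd in commands:
        subprocess.run(cmd, check=True)

def run_and_wait_all(commands):
    # Pattern 2: start everything at once, then wait for all to finish --
    # the Python analogue of bash's `cmd1 & cmd2 & wait`.
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]

if __name__ == "__main__":
    cmds = [["sleep", "0.2"], ["sleep", "0.2"]]
    t0 = time.monotonic()
    run_sequentially(cmds)            # roughly 0.4 s: sleeps run back to back
    sequential = time.monotonic() - t0
    t0 = time.monotonic()
    run_and_wait_all(cmds)            # roughly 0.2 s: both sleeps overlap
    parallel = time.monotonic() - t0
    print(sequential > parallel)
```

As the second link notes, the wait-for-all pattern can be wrapped in a loop so finished workers continually request the next batch of work.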