An easy way to concurrency and parallelism with Python stdlib

105 points, by olsgaarddk, over 1 year ago

15 comments

cle, over 1 year ago
I recently have been doing--what should be--straightforward subprocess work in Python, and the experience is infuriatingly bad. There are so many options for launching subprocesses and communicating with them, and each one has different caveats and undocumented limitations, especially around edge cases like processes crashing, timing out, killing them, if they are stuck in native code outside of the VM, etc.

For example, some high-level options include Popen, multiprocessing.Process, multiprocessing.Pool, futures.ProcessPoolExecutor, and huge frameworks like Ray.

multiprocessing.Process includes some pickling magic and you can pick from multiprocessing.Pipe and multiprocessing.Queue, but you need to use either multiprocessing.connection.wait() or select.select() to read the process sentinel simultaneously in case the process crashes. Which one? Well, connection.wait() will not be interrupted by an OS signal. It's unclear why I would ever use connection.wait() then; is there some tradeoff I don't know about?

For my use cases, process reuse would have been nice to be able to reuse network connections and such (useful even for a single process). Then you're looking at either multiprocessing.Pool or futures.ProcessPoolExecutor. They're very similar, except some bug fixes have gone into futures.ProcessPoolExecutor but not multiprocessing.Pool because...??? For example, if your subprocess exits uncleanly, multiprocessing.Pool will just hang, whereas futures.ProcessPoolExecutor will raise a BrokenProcessPool and the pool will refuse to do any more work (both of these are unreasonable behaviors IMO). Timing out and forcibly killing the subprocess is its own adventure for each of these too. I don't care about a result anymore after some time period passes, and they may be stuck in C code so I just want to whack the process and move on, but that is not very trivial with these.

What a nightmarish mess! So much for "There should be one--and preferably only one--obvious way to do it"...my God.

(I probably got some details wrong in the above rant, because there are so many to keep track of...)

My learning: there is no "easy way to [process] parallelism" in Python. There are many different ways to do it, and you need to know all the nuances of each and how they address your requirements to know whether you can reuse existing high-level impls or you need to write your own low-level impl.
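
Since the wait-on-the-sentinel dance keeps coming up, here is a minimal sketch (not from the article) of that pattern: multiprocessing.connection.wait() watches both the result pipe and the process sentinel, so a crashed worker cannot hang the parent. The worker function and the 5-second timeout are placeholders.

```python
import multiprocessing as mp
from multiprocessing.connection import wait

def worker(conn):
    # Placeholder work; a real worker would compute something first.
    conn.send("done")
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=worker, args=(child_conn,))
    proc.start()
    child_conn.close()  # close the parent's copy so EOF can be detected

    # Wake up when a result arrives, the process exits, or the timeout passes.
    ready = wait([parent_conn, proc.sentinel], timeout=5)
    if parent_conn in ready:
        try:
            print("result:", parent_conn.recv())
        except EOFError:
            print("worker closed the pipe without a result, exitcode:", proc.exitcode)
    elif proc.sentinel in ready:
        print("worker exited without sending a result, exitcode:", proc.exitcode)
    else:
        print("timed out; killing the worker")
        proc.kill()  # Python 3.7+; use terminate() on older versions
    proc.join()
```
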
Comment #37511116 not loaded
Comment #37508832 not loaded
Comment #37508868 not loaded
Comment #37515136 not loaded
Comment #37512898 not loaded
tedivm, over 1 year ago
I know this article is all about the stdlib, but having built multiple multiprocess applications with Python I eventually built a library, QuasiQueue, to simplify the process. I've written a few applications with it already.

https://github.com/tedivm/quasiqueue
samsquire, over 1 year ago
Thank you for the article.

I use multiprocessing and I am looking forward to the GIL removal.

I would really like library writers and parallelism experts to think on modelling computation in such a way that arbitrary programs - written in this notation - can be sped up without thinking about async or parallelism or low-level synchronization primitives spreading throughout the codebase, increasing its cognitive load for everybody.

If you're doing business programming and you're using Python Threads or Processes directly, I think we're operating against the wrong level of abstraction because our tools are not sufficiently abstract. (It's not your error; it's just not ideal where our industry is at.)

I am not an expert, but parallelism, coroutines, and async are my hobby that I journal about all the time. I think a good approach to parallelism is to split your program into a tree dataflow and never synchronize. Shard everything.

If I have a single integer value whose update throughput I want to scale by the number of hardware threads in my multicore and SMT CPU, I can split the integer by that number and apply updates in parallel. (You have £1000 in a bank account and 8 hardware threads: you split the account into 8 bank accounts that each store £125, then you can serve 8 transactions simultaneously.) Then, periodically, those threads can post their value to another buffer (a ringbuffer), and a thread that services that ringbuffer can sum them all for a global view. This provides an eventually consistent view of an integer without slowing down throughput.

Unfortunately multithreading becomes a distributed system, and then you need consensus.

I am working on barriers inspired by bulk synchronous parallel, where you have parallel phases and synchronization phases, and an async pipeline syntax (see my previous HN comments for notes on this async syntax).

My goal would be that business logic can be parallelised without you needing to worry about synchronization.
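
A toy sketch of the "shard everything" idea above, under the assumption that each worker owns exactly one slot; the shard count and amounts are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

N_SHARDS = 8
shards = [0] * N_SHARDS          # one "account" per worker, no shared slot

def deposit(shard_id, amount, times):
    for _ in range(times):
        shards[shard_id] += amount   # each worker touches only its own shard

with ThreadPoolExecutor(max_workers=N_SHARDS) as pool:
    for i in range(N_SHARDS):
        pool.submit(deposit, i, 1, 10_000)

# While the workers run, sum(shards) is only an eventually consistent view;
# here the with-block has already waited for them, so the total is exact.
print("total:", sum(shards))     # 80000
```
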
Comment #37508022 not loaded
potta_coffee, over 1 year ago
If I need concurrency these days, I just write it in Golang. My primary use for Python was one-off scripts for cloud management / automation tasks. Today I write maybe 70% Golang and 30% Python.
Comment #37518339 not loaded
pdimitar, over 1 year ago
Does not seem exactly like an easy way to me. Not super hard, surely, but not "easy". More like "moderately easy to do and a bit annoying to implement".

Probably 20% of the effort shown in this post could have been expended to just write something very similar in Golang, and it would have taken less time, too. Because the way I see it, this is trying to emulate futures / promises (and it looks like it's succeeding, at least on the surface). That can spiral out of comfortable, maintainable code territory pretty quickly.

But especially for something as trivial as a crawler, I don't see the appeal of Python. You've got a good deal of languages with lower friction for doing parallel stuff nowadays (Golang, Elixir, Rust if you want to cry a bit, hell, even Lua has some parallel libraries nowadays, Zig, Nim...).
Comment #37507533 not loaded
Comment #37508167 not loaded
capital_guy, over 1 year ago
This is a really nice little guide. Many thanks to the author. Sometimes you just need to hit a bunch of APIs independently and don't want to switch your entire architecture around to do so.
Lukeisun, over 1 year ago
Awesome article. I use this a lot in a Python project at work and it's quite nice how simple it is. I'm trying to replicate the Python code in Rust and it is slightly slower, though that's more than likely my fault, as I'm new to Rust.
slig, over 1 year ago
Is there a way to add tasks with independent timeouts using only the Python stdlib? I was reading a piece of code yesterday that had `pebble` as a dependency, and it looked like it was only needed for the `pool.schedule(..., timeout=1)`.
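
A stdlib-only sketch of per-task timeouts via future.result(timeout=...). The caveat, and presumably why that code pulled in pebble, is that this only stops the wait; a worker that is already running is not terminated. The tasks and limits below are made up.

```python
import time
from concurrent.futures import ProcessPoolExecutor, TimeoutError

def task(seconds):
    time.sleep(seconds)          # stands in for real work
    return seconds

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        jobs = [(pool.submit(task, work), limit)
                for work, limit in [(0.1, 1.0), (5.0, 0.5)]]
        for fut, limit in jobs:
            try:
                print("finished:", fut.result(timeout=limit))
            except TimeoutError:
                fut.cancel()     # only helps if the task has not started yet
                print("gave up waiting after", limit, "s")
        # Note: on exit the with-block still waits for the 5 s task to finish.
```
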
hot_gril, over 1 year ago
The article shows how to use ThreadPoolExecutor, but that's not fully parallel. For that, you need multiprocessing.Pool, which is slightly easier to use anyway, unless your data happens to be non-pickle-able.
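
For comparison, a minimal multiprocessing.Pool sketch for CPU-bound work that threads cannot speed up under the GIL; the work function is illustrative, and its arguments and results must be picklable.

```python
from multiprocessing import Pool

def cpu_heavy(n):
    # Pure-Python arithmetic: bound by the GIL in threads, parallel in processes.
    return sum(i * i for i in range(n))

if __name__ == "__main__":       # required where workers are spawned (e.g. Windows, macOS)
    with Pool() as pool:          # defaults to one worker per CPU core
        print(pool.map(cpu_heavy, [10**6] * 8))
```
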
thisisauserid, over 1 year ago
When dinking around in IPython you need to use a fork of the "multiprocessing" library called "multiprocess."

Parallelism in a notebook isn't for everyone, but how would these changes affect it?
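
A sketch of the drop-in swap being described, assuming the third-party multiprocess package is installed; it mirrors the stdlib API but serializes with dill, which is what lets functions defined interactively in IPython/Jupyter be shipped to workers.

```python
import multiprocess as mp        # third-party fork: pip install multiprocess

def square(x):                   # in a notebook this could be defined in a cell
    return x * x

if __name__ == "__main__":
    with mp.Pool(4) as pool:     # same interface as multiprocessing.Pool
        print(pool.map(square, range(8)))
```
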
crabbone, over 1 year ago
> For those, Python actually comes with pretty decent tools: the pool executors.

Delusion level: max.

You have to be in a very, very bad place when this marginal improvement over the absolute horror-show that bare Process offers seemed "pretty decent".

Python doesn't have good tools for parallelism / concurrency. It doesn't have average tools. It doesn't even have bad tools. It has the worst. Though, unfortunately, it's not the only language in this category :(
Comment #37514389 not loaded
Comment #37514416 not loaded
smallerfish, over 1 year ago
Maybe I missed it, but how do the threads circumvent the GIL?

> When a request is waiting on the network, another thread is executing.

I'm guessing this is the meat, but what controls that? What other operations allow the GIL to switch to another thread?
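
Roughly: CPython releases the GIL around blocking I/O (socket reads, file reads, time.sleep), so waiting threads overlap even though only one thread runs Python bytecode at a time. A small illustration, with time.sleep standing in for a network wait and the numbers purely illustrative:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(_):
    # Stands in for a blocking network call; the GIL is released while
    # this thread sleeps, so the other threads get to run.
    time.sleep(0.5)

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(fake_request, range(10)))
print(f"10 waits took {time.perf_counter() - start:.2f}s")  # roughly 0.5 s, not 5 s
```
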
Comment #37509051 not loaded
Comment #37509032 not loaded
eachro, over 1 year ago
So what is the consensus view on how to do parallelism in Python if you just have something that is embarrassingly parallel, with no communication between processes necessary?
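
One common stdlib answer (a sketch, not a statement of consensus): map the inputs over a process pool and collect only the results, with no communication between workers. The work function and chunksize are illustrative.

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(seed):
    # Stand-in for an independent, CPU-heavy unit of work.
    total = 0
    for i in range(100_000):
        total = (total + seed * i) % 1_000_003
    return total

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # chunksize batches inputs to reduce per-task pickling overhead.
        results = list(pool.map(simulate, range(100), chunksize=10))
    print(len(results), "independent results")
```
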
Comment #37515217 not loaded
Comment #37514994 not loaded
akasakahakada, over 1 year ago
Don't see MPI. Can skip this article.
Comment #37514338 not loaded
Comment #37515194 not loaded
hleszek, over 1 year ago
The easiest and most modern way is simply to use asyncio...
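
For reference, a minimal sketch of that asyncio route, good for I/O-bound concurrency in a single thread but not for CPU-bound parallelism; asyncio.sleep stands in for a real awaitable request.

```python
import asyncio

async def fetch(i):
    await asyncio.sleep(0.5)     # placeholder for an awaitable network call
    return i

async def main():
    # The ten "requests" wait concurrently in one thread, one event loop.
    return await asyncio.gather(*(fetch(i) for i in range(10)))

print(asyncio.run(main()))
```
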
Comment #37507780 not loaded
Comment #37514364 not loaded