
Concurrent Programming, with Examples

330 points by begriffs about 5 years ago

11 comments

bmn__ about 5 years ago

Next article in the series: now that you know about the dangerous/complicated primitives, don't ever touch them again. Instead use the high-level, safe concurrency/parallelism mechanisms in your programming language: futures/promises, nurseries, channels, observers, actors, monitors. Ideally, these should be built in, but a library whose API composes well into most programs will also do.

Data races can be statically removed by carefully restricting certain parts of the language design; see Pony. https://tutorial.ponylang.io/#what-s-pony-anyway

Bonus: learn aspects of deadlocking by playing a game: https://deadlockempire.github.io/
Random_ernest about 5 years ago

The article is very nice, thanks a lot for it. Especially since I hear the words concurrency and parallelism often thrown around without any distinction.

Very off topic, but I have read several times the argument that the rise of functional programming is due to its easy concurrency (since functions don't have side effects) and that concurrency becomes more and more important due to Moore's law being dead (i.e. we can't scale the hardware up, we have to add cores to our processors).

Could someone with more experience comment on that? Is concurrency really easier in functional languages, and is the rising importance of concurrency a valid reason to look into functional programming?
rrss about 5 years ago

Does anyone know the history behind the distinction between concurrency and parallelism presented here? The most frequent reference I see is Pike's "Concurrency is not parallelism" talk, but I'm curious who first came up with this distinction.
01100011 about 5 years ago

> The sched_yield() puts the calling thread to sleep and at the back of the scheduler's run queue.

Not necessarily, but it is fine for this purpose, I suppose. See https://news.ycombinator.com/item?id=21959692

Glad to see lock hierarchies mentioned. Barriers are new to me, so that was nice.

IMO, it would be nice to at least have a mention of lock-free techniques and their advantages and disadvantages.
inaseer about 5 years ago

There is a good body of knowledge around dealing with concurrency issues within a single process. We've got tools (locks, semaphores, ...) to deal with the complexity, as well as programming paradigms which help us write code that minimizes data races. It's interesting to realize that in a world with an increasing number of micro-services manipulating shared resources (a shared database, shared cloud resources), or even multiple nodes backing a single micro-service all reading and writing to shared resources, similar concurrency bugs arise all the time. Unlike in a single process, where you can use locks and other primitives to write correct code, there is no locking mechanism we can use to protect access to these global shared resources. We have to be more thoughtful to write correct code in the presence of pervasive concurrency, which is easier said than done.
highhedgehog about 5 years ago

Is anyone aware of good examples that can be used to explain and implement parallelism/concurrency other than the banker's example? I have seen it too many times.
jayd16 about 5 years ago

No mention of volatile variables or the concept of stale CPU cache reads when a value is written to from another core. I think it's a pretty common and fundamental concept that should be in a write-up such as this.
thallukrish about 5 years ago

My experience is that single-threaded execution, replicated with local data for each instance and fine-grained remote lookups when needed, is an easier way to maintain code. Concurrency and all that synchronisation is damn hard to code and debug.
latrasis about 5 years ago

Thank you for the great read! Wondering how io_uring would fit into this picture... would be very interested in the author's review: https://kernel.dk/io_uring.pdf
Jahak about 5 years ago
Interesting article and a great blog
moring about 5 years ago

I'm a bit disappointed that the article doesn't explain the need for a memory/consistency model and how it interacts with CPU caches. Locks are the easy part, and the article makes you think that with them you can now write at least simple concurrent programs.

Why is that a problem? I'm pretty sure that the author's intention is not to equip readers with the tools to make buggy programs, yet that is exactly what happens here.