
“Is Parallel Programming Hard, and, If So, What Can You Do About It?” v2 Is Out

197 points by vkaku · about 4 years ago

10 comments

bob1029 · about 4 years ago

The hardest part of parallel programming is tricking yourself into believing you need to worry about it at all.

9/10 times, if you use the right tools & techniques, a single x86 core (assuming you have access to a 'real' one) can easily chew through millions of business transactions per second and satisfy all technical requirements for whatever business system.

Latency is the ultimate devil. Getting your synchronous state to fit into L1/L2 and processing transactions in (micro)batches of contiguous data is how you win the game. The closer your data is to your compute, the faster you can go.

The biggest problem most developers are struggling with right now is that they got tricked into scattering their business systems across many computers, and potentially across multiple physical regions. Latency in L1 is measured in nanoseconds. Latency between two datacenters is measured in milliseconds. That is SIX orders of magnitude of difference. Even if you squander three of those on bad programming practices, you are still 1000x better off on a single box using a single thread than if you scattered your infra to the seven winds.

See: LMAX Disruptor for a case study of what the opposite end of the "infrastructure isn't my problem" spectrum looks like.
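The micro-batching idea above can be sketched in a few lines of C. This is a hypothetical illustration, not the LMAX Disruptor itself: the `txn_t` record and `process_all` function are invented names, and the "business logic" is a stand-in sum. The point is the memory layout: a batch is a contiguous slice of a flat array, so the hardware prefetcher streams it through L1/L2 with no locks and no cross-thread coordination.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical transaction record: small and flat, so a batch of them
 * is contiguous in memory and fits comfortably in L1/L2 cache. */
typedef struct {
    uint64_t account;
    int64_t  amount;
} txn_t;

/* Process transactions in fixed-size micro-batches.  Each batch is a
 * contiguous slice of the array; a single thread walks it linearly,
 * so the prefetcher keeps the data close to the compute. */
int64_t process_all(const txn_t *txns, size_t n, size_t batch)
{
    int64_t balance = 0;
    for (size_t i = 0; i < n; i += batch) {
        size_t end = (i + batch < n) ? i + batch : n;
        for (size_t j = i; j < end; j++)
            balance += txns[j].amount;  /* stand-in for real business logic */
    }
    return balance;
}
```

On a modern core this loop sustains billions of records per second, which is the comment's point: the single-threaded, cache-resident path is often fast enough that parallelism never becomes the bottleneck.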
thorn · about 4 years ago

Please refrain from judging people here. I would rather we talked about this wonderful book, released for free for our enjoyment. I cannot even grasp the volume of work that went into it.
dragontamer · about 4 years ago

Counterpoint: parallel programming isn't very hard.

Seriously: try fork() + wait(), or spawn in bash / cmd line. make -j offers parallelism (and Ninja is "make -j by default").

OpenMP's "#pragma omp parallel for" is extremely easy to use.

Things get harder with thread libraries, but the "parallelism" part of pthreads is pretty easy: pthread_create(&thread_id, NULL, function, args);, at least if you're fine with the default settings.

C++ threads are even easier than pthreads.

The hard parts are:

1. Communication / synchronization. But really, condition variables are the most complicated thing most people need. fork() + wait() pushes the difficult communication part to the wait() or to pipes, and if you stick with wait() / waitpid() as your main communication mechanism, it's pretty easy to reason about.

2. High-performance programming. This is just hard, especially if you reach for the highest-performance "memory barrier / lock-free programming" styles. This is where "false sharing" comes in (your code executes correctly under false sharing, but slowly, because of how the L1 cache handles multicore architectures).

However, if you stick with a simple fork-join communication model (e.g. pthread_join, or wait()/waitpid()-based synchronization), it's really not hard at all. Just stay away from the high-performance techniques unless you really need them.
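The fork() + wait() pattern the comment recommends can be sketched as below (POSIX-only; the `run_workers` function name is invented for illustration). All the "communication" is the exit status that wait() collects, which is exactly why this model stays easy to reason about.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Simple fork-join parallelism: spawn one child process per independent
 * task, then wait() for all of them.  The exit status collected by
 * wait() is the only communication channel needed. */
int run_workers(int nworkers)
{
    for (int i = 0; i < nworkers; i++) {
        pid_t pid = fork();
        if (pid == 0) {          /* child: do an independent slice of work */
            /* ... real work would go here ... */
            _exit(0);            /* report success via exit status */
        }
    }
    int status, ok = 0;
    while (wait(&status) > 0)    /* join: wait() returns -1 when no children remain */
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            ok++;
    return ok;                   /* number of workers that finished cleanly */
}
```

Because the children share nothing after the fork, there is no locking to get wrong; the cost is that any data a child produces has to come back through pipes, files, or the exit status.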
Dowwie · about 4 years ago

More free materials on concurrency, including a book near the top published in 2016: https://homes.cs.washington.edu/~djg/teachingMaterials/spac
fearthetelomere · about 4 years ago

One of the biggest issues I encounter with concurrency is whether the code I've written could be optimized to reduce cache misses.

I feel like a lot of resources, including this wonderful book, gloss over reducing cache misses. Is it really that trivial? What are the tips and tricks involved?
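The classic first answer to this question is traversal order. The sketch below (function names invented for illustration) sums the same matrix two ways: row-major order touches consecutive addresses, so every 64-byte cache line is fully used before eviction, while column-major order strides across rows and misses on nearly every access. On large matrices the cache-friendly loop is typically several times faster despite doing identical arithmetic.

```c
#include <stddef.h>

#define N 1024

/* Cache-friendly: consecutive iterations touch consecutive addresses,
 * so each cache line loaded is fully consumed before eviction. */
long sum_row_major(int m[N][N])
{
    long s = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

/* Cache-hostile: each access jumps N * sizeof(int) bytes through the
 * same row-major array, so nearly every access misses in cache. */
long sum_col_major(int m[N][N])
{
    long s = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += m[i][j];
    return s;
}
```

The general tricks are all variations on this theme: lay out data in the order you will read it, prefer arrays of small structs (or structs of arrays) over pointer-chasing, and keep each thread's working set on its own cache lines to avoid false sharing.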
raspasov · about 4 years ago
Am I the only person that is surprised that this book does not mention immutability even once?
3JPLW · about 4 years ago

A thread from 2020 (v2 RC 1): https://news.ycombinator.com/item?id=22030928

2015: https://news.ycombinator.com/item?id=9315152

2014: https://news.ycombinator.com/item?id=7381877

2011: https://news.ycombinator.com/item?id=2784515
Nasrudith · about 4 years ago

It sounds like a dumb tautology, but parallel programming is easy for parallel problems. It is harder to, say, maintain several interlinked grid cells in a simulation in parallel than it is to partition a non-interacting set across N cores, run them in parallel, and let the main thread know when everything is done.
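The easy case described above, partitioning a non-interacting set across N cores and letting the main thread learn of completion, is the standard fork-join pattern with pthreads. In this sketch (the `parallel_sum` and `chunk_t` names are invented for illustration) each thread owns its own slice and its own output slot, so there are no shared writes and therefore no locks:

```c
#include <pthread.h>
#include <stddef.h>

#define NTHREADS 4

typedef struct { const int *data; size_t len; long sum; } chunk_t;

static void *sum_chunk(void *arg)
{
    chunk_t *c = arg;
    c->sum = 0;
    for (size_t i = 0; i < c->len; i++)
        c->sum += c->data[i];     /* no shared writes: each thread owns its chunk */
    return NULL;
}

/* Partition a non-interacting array across NTHREADS workers, then join.
 * pthread_join is how the main thread learns everything is done. */
long parallel_sum(const int *data, size_t n)
{
    pthread_t tid[NTHREADS];
    chunk_t chunk[NTHREADS];
    size_t per = n / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        chunk[t].data = data + t * per;
        chunk[t].len  = (t == NTHREADS - 1) ? n - t * per : per;
        pthread_create(&tid[t], NULL, sum_chunk, &chunk[t]);
    }
    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += chunk[t].sum;    /* combine only after the owner has finished */
    }
    return total;
}
```

The interlinked-grid case is harder precisely because this ownership trick fails: neighboring cells must exchange data every step, which forces synchronization at the boundaries.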
belgesel · about 4 years ago

A question for parallel programming experts: can we assume that a task well separated to run on a single core is always better than multi-core?

I mean using multiple instances each working on a single core vs. a single instance with multiple threads.
bullen · about 4 years ago

I think joint parallelism is hard, as in when two threads on two different cores are trying to write+read to+from the same memory.

My solutions are:

1) Use Java on the server; its GC'ed VM has a complex memory model that can handle fast lock-free concurrency relatively well, even using OO, though you'll hit cache misses.

2) Use C on the client; primitive variable types are generally atomic, so if you know what you are doing (I don't, because I haven't gotten that far yet) you should be able to jointly parallelize cores with char, int and float arrays as data, also avoiding cache misses!

There are many more arguments for this division: Java doesn't crash (usually), and C is truly portable (you cannot run Java on certain devices: iOS, Switch, PS4).
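One caution on point 2: plain C primitives are not guaranteed atomic by the standard, and unsynchronized concurrent access to them is a data race. What C11 does guarantee is the `_Atomic` qualifier and the stdatomic.h operations. A minimal sketch (the `shared_count` and `bump` names are invented for illustration) of two cores writing the same memory safely:

```c
#include <stdatomic.h>
#include <pthread.h>

/* Plain int increments from two threads can lose updates; C11 _Atomic
 * with atomic_fetch_add makes the read-modify-write indivisible. */
static _Atomic int counter = 0;

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
    return NULL;
}

int shared_count(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return atomic_load(&counter);  /* always 200000; a plain int could return less */
}
```

Even with atomics, two cores hammering the same cache line still pay the coherence-traffic cost, so the cache-miss concern in the comment applies regardless.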