TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Understanding Concurrency, Parallelism and JavaScript

100 points, by kugurerdem, 8 months ago

7 comments

necovek, 8 months ago
The one missed distinction is that concurrent tasks *can* be executing in parallel; concurrency just doesn't imply that they are or aren't.

Basically, all parallel tasks are also concurrent, but there are concurrent tasks which are not executed in parallel.
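That distinction can be sketched in a few lines of Node.js (a simplified model, assuming Node's single-threaded event loop): the two tasks below are concurrent — their steps interleave — but no two steps ever execute in parallel.

```javascript
// Two tasks interleave on Node's single-threaded event loop:
// concurrent, but no two steps ever execute in parallel.
const log = [];

async function task(name) {
  for (let i = 0; i < 2; i++) {
    log.push(`${name}:${i}`);
    // Yield to the event loop so the other task can take a step.
    await new Promise((resolve) => setImmediate(resolve));
  }
}

async function main() {
  await Promise.all([task("A"), task("B")]);
  console.log(log.join(" ")); // steps alternate: A:0 B:0 A:1 B:1
}

main();
```

On a multi-threaded runtime the same two tasks could additionally run in parallel, which is exactly the "parallel implies concurrent, not vice versa" point.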
duped, 8 months ago
If you prefer to learn by video, here's an excellent talk on the same subject by Rob Pike that I link to people all the time:

https://www.youtube.com/watch?v=oV9rvDllKEg
rdtsc, 8 months ago
I like to think of them as different levels. Concurrency is at a higher abstraction level: steps that can execute without needing to wait on each other. Parallelism is a bit lower and reflects the ability to actually execute the steps at the same time.

Sometimes you can have concurrent units like multiple threads but only a single CPU, so they won't execute in parallel. In an environment with multiple CPUs they might execute in parallel.
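The single-CPU case can be sketched in Node.js (an assumed, simplified model using busy-waits in place of real work): two CPU-bound jobs are concurrent, but on one thread their wall-clock time is the sum, not the maximum.

```javascript
// Two CPU-bound jobs on Node's single thread: concurrent (both are
// "in flight" at once) but not parallel, so wall time is the sum.
function spin(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // busy-wait to simulate CPU-bound work
}

let elapsedMs; // set by main(), once both jobs finish

async function job(ms) {
  await Promise.resolve(); // yield once so both jobs start "together"
  spin(ms); // ...but the actual work still serializes on one thread
}

async function main() {
  const t0 = Date.now();
  await Promise.all([job(50), job(50)]);
  elapsedMs = Date.now() - t0;
  console.log(`two 50ms jobs took ~${elapsedMs}ms`); // ~100ms, not ~50ms
}

main();
```

With true parallelism (e.g. worker threads on multiple cores) the same two jobs could finish in roughly 50 ms.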
CalRobert, 8 months ago
I just learned something! I realize now I was talking about parallelism in a recent interview question about concurrency. Oh well.
dragontamer, 8 months ago
Concurrency is often about running your I/O routines in parallel to achieve higher bandwidth. For example: one computer handling 50 concurrent HTTP requests simultaneously.

No single HTTP request uses all the CPU power, or even all your Ethernet bandwidth; the bulk of the waiting is latency. So while one task is waiting on an Ethernet response under the hood, the system should do something else.

Hard drives are another example: you can get random I/O bandwidth of 5 MB/s or so, but every request takes about 4 ms on average on a 7200 RPM drive (that's 120 rotations per second, or about 8 milliseconds per complete rotation, so roughly 4 ms on average for any request to complete).

So while waiting for the HDD to respond, your OS can schedule other reads or writes along the head's path, which improves average performance (e.g. if 8 requests all lie within the path of the head, you'll still wait 4 ms on average, but maybe one will be read every 1 ms).

Parallelism is often about CPU-limited situations where you use a 2nd CPU (today called a core). For example, if one CPU core is too slow, you can use a 2nd, or 8, or even 128 cores simultaneously.

Hyperthreading is the CPU designers (Intel and AMD) applying the same concurrency technique to modern RAM, because a single RAM read takes around 50 ns, or roughly 200 clock ticks. Any RAM-latency-bound problem (e.g. linked-list traversals) benefits from the CPU core doing something else while waiting for the RAM to respond.

Different programming languages have different patterns to make these situations easier to program.
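The I/O point can be sketched in Node.js, with latency simulated by `setTimeout` rather than real sockets (the 50 ms round-trip figure is an assumption for illustration): issued concurrently, ten "requests" complete in roughly one latency period, not ten.

```javascript
// Simulated I/O: each "request" only waits on latency (setTimeout),
// so 10 concurrent requests finish in ~one latency period, not ten.
const LATENCY_MS = 50; // assumed round-trip latency, for illustration

function fakeRequest(id) {
  return new Promise((resolve) => setTimeout(() => resolve(id), LATENCY_MS));
}

let results; // set by main()
let elapsedMs;

async function main() {
  const t0 = Date.now();
  results = await Promise.all(
    Array.from({ length: 10 }, (_, i) => fakeRequest(i))
  );
  elapsedMs = Date.now() - t0;
  console.log(`${results.length} requests in ~${elapsedMs}ms`);
}

main();
```

Issued sequentially instead, the same ten requests would take about 10 × 50 = 500 ms — the bandwidth win comes purely from overlapping the waits.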
bryanrasmussen, 8 months ago
Thinking about this: is there a term for tasks which are partially parallel? That is to say, X starts at 0.1 and ends at 1, and Y starts at 0.2 and ends at 0.9. X and Y are not parallel, but they are something for which I'm not sure of the technical term. (This is assuming they are not executed concurrently either, of course.)
bugtodiffer, 8 months ago
Learn Go and you will understand concurrency.