
Resource efficient Thread Pools with Zig

305 points, by komuW, over 3 years ago

16 comments

scott_s, over 3 years ago
I designed and implemented a mostly lock-free dynamic thread scheduler for streaming runtimes, and I learned some similar lessons: avoid global data and amortize the necessary synchronization that you have to do. One of the main peculiarities of a streaming context is that work-stealing is counter-productive. In a streaming context, it's more like "cutting in" when the work would be done anyway. It's better to go find a part of the streaming graph that is not currently being executed than to steal some from another thread.

The paper describing my design is "Low-Synchronization, Mostly Lock-Free, Elastic Scheduling for Streaming Runtimes", https://www.scott-a-s.com/files/pldi2017_lf_elastic_scheduling.pdf. The source code for the product implementation is now open source. Most of it is in https://github.com/IBMStreams/OSStreams/blob/main/src/cpp/SPL/Runtime/ProcessingElement/ScheduledQueue.h and https://github.com/IBMStreams/OSStreams/blob/main/src/cpp/SPL/Runtime/ProcessingElement/ScheduledQueue.cpp.
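As an illustration of "amortize the necessary synchronization" (a generic C++ sketch, not the paper's scheduler): take the lock once per batch of work items rather than once per item, so the cost of the lock is spread over the whole batch.

    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <vector>

    // Illustrative only: amortize synchronization by draining a batch of
    // tasks under a single lock acquisition instead of locking per task.
    struct BatchedQueue {
        std::mutex mu;
        std::deque<int> tasks;  // stand-in for real work items

        // Move up to max_batch tasks out while holding the lock once; the
        // caller then processes the whole batch without further locking.
        std::vector<int> pop_batch(std::size_t max_batch) {
            std::vector<int> batch;
            std::lock_guard<std::mutex> lock(mu);
            while (!tasks.empty() && batch.size() < max_batch) {
                batch.push_back(tasks.front());
                tasks.pop_front();
            }
            return batch;
        }
    };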
Jarred, over 3 years ago
Running the benchmarks locally (Zig is fastest, but that's probably expected).

Zig (~1 week older than HEAD):

    ./zig-out/bin/qsort
    filling
    shuffling
    running
    took 177.46ms

Rust nightly:

    cargo run --release --bin qsort
    warning: the cargo feature `edition2021` has been stabilized in the 1.56 release and is no longer necessary to be listed in the manifest
    See https://doc.rust-lang.org/nightly/cargo/reference/manifest.html#the-edition-field for more information about using this feature.
        Finished release [optimized] target(s) in 0.02s
         Running `target/release/qsort`
    filling
    shuffling
    running
    took 896.91656ms

    cargo run --release --bin rsort
    warning: the cargo feature `edition2021` has been stabilized in the 1.56 release and is no longer necessary to be listed in the manifest
    See https://doc.rust-lang.org/nightly/cargo/reference/manifest.html#the-edition-field for more information about using this feature.
        Finished release [optimized] target(s) in 0.02s
         Running `target/release/rsort`
    filling
    shuffling
    running
    took 212.190694ms

Go 1.16.3:

    go run qsort.go
    filling
    shuffling
    running
    took 222.90356ms

On macOS 11.5.2 with a 2.4 GHz 8-Core Intel Core i9.
gpderetta, over 3 years ago
To be very pedantic: the Vyukov MPSC queue, while indeed awesome in its simplicity and performance, is not technically lock-free, not even obstruction-free (and Vyukov never claimed it as such [1]): a producer halting between the atomic swap and the atomic store will prevent further nodes enqueued by other producers from ever being dequeued by the consumer.

[1] edit: and in fact there is a note on exactly this point on the linked Vyukov page.
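For reference, a minimal C++ sketch of a Vyukov-style intrusive MPSC push (names and layout are illustrative, not taken from the post), showing the window the comment describes:

    #include <atomic>

    // Illustrative Vyukov-style intrusive MPSC push. The window between
    // exchange() and the final store() is why the queue is not lock-free,
    // nor even obstruction-free.
    struct Node {
        std::atomic<Node*> next{nullptr};
    };

    struct MpscQueue {
        Node stub;                // dummy node the consumer starts from
        std::atomic<Node*> head;  // producers swap themselves in here
        Node* tail;               // consumer-only cursor

        MpscQueue() : head(&stub), tail(&stub) {}

        void push(Node* n) {
            n->next.store(nullptr, std::memory_order_relaxed);
            Node* prev = head.exchange(n, std::memory_order_acq_rel);
            // If this producer is suspended right here, prev->next still
            // points nowhere: the consumer cannot reach n, nor any node
            // enqueued after it, until the store below completes.
            prev->next.store(n, std::memory_order_release);
        }
    };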
jorangreef, over 3 years ago
Incredible post. It ticks all the boxes, and it's exciting to see this coming to Zig. Protty has got to be a domain specialist when it comes to atomics and threads.
omazurov, over 3 years ago
Increasing the size of the array 10x (100_000_000) and filling a glaring omission:

Go (go1.17.1 darwin/amd64)

    took 5.591593544s
    took 5.285948722s
    took 5.218750076s
    took 5.239224787s
    took 5.131232207s

Zig (0.9.0-dev.959+f011f1393)

    took 3.48s
    took 3.37s
    took 3.44s
    took 3.43s
    took 3.49s

Java (openjdk version "17-ea")

    Time: 2684 ms
    Time: 2654 ms
    Time: 2840 ms
    Time: 2676 ms
    Time: 2649 ms

MacBook Pro Early 2013, 2.7 GHz Quad-Core Intel Core i7, 16 GB 1600 MHz DDR3
kevingadd, over 3 years ago
It's always great to see a robust new threadpool. Most of the stock ones I've used have horrible characteristics - for example, on my Threadripper lots of production apps will spawn 48 or more threadpool threads, and the scheduling/throughput characteristics of that are generally pretty dire because they needlessly compete for resources, etc. They tend to allocate garbage for every work item too, which makes them a bad choice for realtime scenarios. It's genuinely frustrating that most processes on my machine have 128 threads or more due to unintelligently scaling off ProcessorCount without an upper limit. For example, GIMP's bad threading policy means that operations like resizing an image take seconds instead of 100ms or less.

I've been using a custom threadpool for a while now, but writing your own without the necessary expertise can be a real pain.
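A minimal sketch of the upper-limit idea (in C++; the cap value and function name are arbitrary, not from any particular library): scale with the machine, but never blindly spawn one thread per hardware core.

    #include <algorithm>
    #include <thread>

    // Illustrative policy: scale with the machine but keep an upper bound,
    // instead of spawning hardware_concurrency() threads unconditionally.
    // The default cap of 16 is arbitrary.
    unsigned pick_thread_count(unsigned cap = 16) {
        unsigned hw = std::thread::hardware_concurrency();  // may return 0
        if (hw == 0) hw = 1;
        return std::min(hw, cap);
    }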
egberts1, over 3 years ago
This is the first four-state-machine thread algorithm.

And I like what I am seeing. Hopefully all transitions between states have been tested.

Good work.
lmb, over 3 years ago
What does tail latency for the Zig pool look like? It seems like any Task that ends up in one of the overflow queues will stay there until a ring buffer is emptied.

Put another way: during a period with more pushes than pops, some tasks may see a very long delay before being worked on?
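A toy C++ model of the concern (assumed behavior for illustration, not the post's actual data structures): the bounded ring is drained first and the overflow list is only consulted when the ring is empty, so under sustained push pressure overflowed tasks can wait a very long time.

    #include <cstddef>
    #include <deque>
    #include <optional>

    // Toy model only: a bounded "ring" that is drained before the overflow
    // list. If pushes keep arriving, the ring rarely empties and overflowed
    // tasks can wait a very long time -- the tail-latency concern above.
    struct ToyQueue {
        static constexpr std::size_t kRingCap = 4;
        std::deque<int> ring;      // stand-in for a fixed-size ring buffer
        std::deque<int> overflow;  // spill-over used when the ring is full

        void push(int task) {
            if (ring.size() < kRingCap) ring.push_back(task);
            else overflow.push_back(task);
        }

        std::optional<int> pop() {
            if (!ring.empty()) { int t = ring.front(); ring.pop_front(); return t; }
            if (!overflow.empty()) { int t = overflow.front(); overflow.pop_front(); return t; }
            return std::nullopt;
        }
    };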
jedisct1, over 3 years ago
Pretty impressive achievement!
r0f1, over 3 years ago
I know this is off-topic, but doesn't this website look eerily similar to dev.to?
feffe, over 3 years ago
A pet peeve of mine is calling something that uses atomic operations lockless. It's very common, but once it clicks that atomic operations are just locks at the instruction level, it feels very wrong to call most such algorithms lockless. There are still the same concerns about avoiding contention when using atomic operations as with regular mutexes. A mutex lock in the uncontended case never enters the kernel and is just an atomic operation... The main strategy, regardless of whether you use locks or atomics directly, is to avoid contention.

The only truly lockless algorithm that I'm aware of (for most CPU archs) is RCU in the read case.
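A minimal C++ sketch of the uncontended-mutex point: the fast path is a single compare-and-swap with no kernel involvement; only the contended path would park in the kernel (a real mutex would use something like a futex wait there, for which a yield loop stands in here).

    #include <atomic>
    #include <thread>

    // Illustrative lock: the uncontended path is a single compare-and-swap,
    // no kernel involvement. A real mutex would park in the kernel on
    // contention; a polite spin loop stands in for that slow path here.
    struct TinyLock {
        std::atomic<bool> locked{false};

        void lock() {
            bool expected = false;
            if (locked.compare_exchange_strong(expected, true,
                                               std::memory_order_acquire))
                return;  // fast path: just one atomic instruction
            // Slow path: contended; keep trying, yielding between attempts.
            while (locked.exchange(true, std::memory_order_acquire))
                std::this_thread::yield();
        }

        void unlock() { locked.store(false, std::memory_order_release); }
    };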
StreamBright, over 3 years ago
Is there an IoC / CSP library (à la core.async in Clojure) in Zig yet? Is something like this planned? I could not find much about it online.
goodpoint, over 3 years ago
How does it compare to Nim's Weave thread manager?
randyrand, over 3 years ago
Please make thread priority a native feature. See GCD.
posharma, over 3 years ago
How does this compare to OpenMP? Does it support nested parallelism? (Disclaimer: I haven't read the post thoroughly.)
ur-whale, over 3 years ago
How does it compare with a bare-metal C++ threadpool?