
On Java/JVM: Loom and Thread Fairness

98 points | by lichtenberger | about 3 years ago

7 comments

klabb3 | about 3 years ago

I only observe Java from a distance, but everything I've read about Project Loom is amazing.

Speaking as someone who has worked intricately on an async runtime in another language, I've more and more started to question the premises of the explicitly async paradigm. Lots of things we were "promised" with async turned out to be just as complicated in the threaded (i.e. blocking) programming model, and now we have two incompatible programming models. I'm now of the opinion that we should improve (instead of replace) our threaded models and do under-the-hood optimizations to sort out the performance issues.

If I had a dime for every time someone made a blocking call from an async function... And who can blame them? Most blocking functions aren't even documented as such; you just have to know.
samsquire | about 3 years ago

This is apt timing. I recently wrote up a thought about chunking loops and yielding after a certain number of chunks.

The Loom scheduler does NOT guarantee avoidance of resource starvation under high CPU usage. You need to call Thread.yield.

I also thought of creating a mapping from synchronous code and rewriting it to a tree of LMAX disruptors. I ported a wait-free ring buffer, written by Alexander Krizhanovsky [1], from C++ to Java today.

In the LMAX disruptor you split each IO request in half: one disruptor enqueues events to request the IO, and to pass the event on to a callback you enqueue the response on a different disruptor.

If a synchronous request handler has 27 lines, it represents a tree of 27 disruptor threads, each independently scaled.

We can pipeline the request with 27 threads, all in event loops, pipelining each task × core count, without blocking the servicing thread or other work. So: event loops without blocking! CPU-heavy tasks do not block other CPU-heavy tasks, do not block the servicing thread, and are not blocked by IO.

https://github.com/samsquire/ideas4#51-rewrite-synchronous-code-into-lmax-disruptor-thread-pools---event-loops-that-dont-block-on-cpu-usage

[1]: https://www.linuxjournal.com/content/lock-free-multi-producer-multi-consumer-queue-ring-buffer
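The chunk-and-yield idea described above can be sketched as follows. This is a minimal illustration (not samsquire's actual code): the class name, task count, and chunk size of 10,000 are arbitrary example values, and virtual threads require JDK 21+.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class ChunkedYield {
    public static void main(String[] args) throws Exception {
        AtomicLong total = new AtomicLong();
        // Run several CPU-bound tasks as virtual threads (JDK 21+).
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int t = 0; t < 4; t++) {
                exec.submit(() -> {
                    for (long i = 1; i <= 1_000_000; i++) {
                        total.incrementAndGet(); // stand-in for real CPU work
                        if (i % 10_000 == 0) {
                            // Cooperative scheduling point: without this, a
                            // tight loop can hog its carrier thread, since the
                            // Loom scheduler does not preempt on CPU usage.
                            Thread.yield();
                        }
                    }
                });
            }
        } // close() implicitly waits for all submitted tasks to finish
        System.out.println(total.get()); // 4000000
    }
}
```

The explicit `Thread.yield()` every chunk is exactly the fairness workaround under discussion: it trades a little throughput for a guaranteed scheduling opportunity.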
papercrane | about 3 years ago

The "State of Loom" document from May 2020 discusses this sort of problem. Long term, I believe the solution is to allow preemption at any safepoint, plus custom schedulers, but neither of those will be delivered in the first preview.

https://cr.openjdk.java.net/~rpressler/loom/loom/sol1_part2.html
BenoitP | about 3 years ago

Not a new kind of issue, and not a problem IMHO.

Some GCs need to reach a safepoint before doing their work, and when a classic thread is chugging along in a tight loop, that thread blocks the whole system for a pending GC.

One hacky fix is to provide an opportunity for a safepoint (a System.out.print every 10k iterations, say). Other tools, like JVM command-line options, allow observing safepoint triggering.

The same goes for virtual threads: inserting a Thread.sleep(1L) every nth iteration of a tight loop does it.

There has also been talk of specifying a custom scheduler for virtual threads. That would not help with inserting switching opportunities, but it would give you a way to define what kind of fairness you want.
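A minimal sketch of the Thread.sleep(1L) trick above, assuming JDK 21+ virtual threads; the class name, event list, and iteration counts are illustrative, not part of any real codebase.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SleepSwitch {
    public static void main(String[] args) throws Exception {
        List<String> events = new CopyOnWriteArrayList<>();

        // A "tight loop" on a virtual thread. Thread.sleep(1L) is a yield
        // point: the virtual thread unmounts from its carrier, giving other
        // virtual threads (and a pending GC safepoint) a chance to run.
        Thread spinner = Thread.ofVirtual().start(() -> {
            for (int i = 1; i <= 3; i++) {
                // ... hot CPU work would go here ...
                try {
                    Thread.sleep(1L); // switching opportunity every nth iteration
                } catch (InterruptedException e) {
                    return;
                }
                events.add("spin-" + i);
            }
        });
        Thread other = Thread.ofVirtual().start(() -> events.add("other"));

        spinner.join();
        other.join();
        // Both threads ran to completion: 3 spinner events + 1 other event.
        System.out.println(events.size()); // 4
    }
}
```

Without some blocking call in the loop, a CPU-bound virtual thread has no natural unmount point, which is precisely the fairness concern raised in the article.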
lichtenberger | about 3 years ago

The Twitter discussion mentioned in the article is also pretty interesting, IMHO :-)
anonymousDan | about 3 years ago

Can anyone comment on the debuggability of Loom (e.g. how do stack traces work)? Can Loom threads jump between native threads? Is it easy to understand what is going on when I run all this under a debugger? Having worked with some user-level threading systems in the past as part of some libOS work, the fact that the tooling wasn't really set up to handle such issues made it really difficult to understand what was going on when we ran into problems.
ferdowsi | about 3 years ago

The Twitter discussion mentions this:

> We can forcefully preempt virtual threads at any safepoint poll point

What is a "safepoint poll point"? A sleep call?