
Faster hash joiner with vectorized execution

120 points by jordanlewis over 6 years ago

9 comments

angelachang27 over 6 years ago
Hey everyone, I'm the intern who did this work, happy to answer any questions if you have them!
obl over 6 years ago
The "core" of the trick is nice: amortizing interpreter dispatch over many items (ignoring the column layout/SIMD stuff, which basically helps in any case).

Essentially it's turning:

    LOAD
    DISPATCH OP1
    DISPATCH OP2
    ... (once per operation in the expression)
    STORE
    ... (once per row)

into:

    DISPATCH
    LOAD OP1 STORE
    LOAD OP1 STORE
    ... (once per row)
    DISPATCH
    ... (once per operation in the expression)

The nice trade-off here is that you don't require code generation to do that, but it's still not optimal.

If you can generate code it's even better to fuse the operations, to get something like:

    LOAD OP1 OP2 ... STORE
    LOAD ...

It helps because even though you can tune your batch size to get mostly cached loads and stores, it's still not free.

For example, on Haswell you can only issue 1 store per cycle, so if OP is a single add you're leaving up to 3/4 of your theoretical ALU throughput on the table.
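A rough Go sketch of the three dispatch patterns described above (Go because that's the language of the linked code base); the package layout, operator names, and types are invented for illustration and are not taken from the actual vectorized engine:

    // Toy versions of row-at-a-time, vectorized, and fused evaluation.
    package main

    import "fmt"

    type op func(int64) int64

    var (
        addOne  op = func(v int64) int64 { return v + 1 }
        double  op = func(v int64) int64 { return v * 2 }
        exprOps    = []op{addOne, double}
    )

    // rowAtATime dispatches every operator once per row:
    // LOAD, DISPATCH OP1, DISPATCH OP2, ..., STORE, repeated per row.
    func rowAtATime(col []int64) {
        for i := range col {
            v := col[i] // LOAD
            for _, o := range exprOps {
                v = o(v) // DISPATCH + OP, once per operator per row
            }
            col[i] = v // STORE
        }
    }

    // vectorized hoists the dispatch out of the row loop:
    // DISPATCH once per operator, then a tight LOAD/OP/STORE loop over the batch.
    func vectorized(col []int64) {
        for _, o := range exprOps { // DISPATCH, once per operator
            for i := range col {
                col[i] = o(col[i]) // LOAD, OP, STORE, once per row
            }
        }
    }

    // fused is roughly what code generation buys: one pass, no indirect calls.
    func fused(col []int64) {
        for i := range col {
            col[i] = (col[i] + 1) * 2 // LOAD OP1 OP2 STORE
        }
    }

    func main() {
        a := []int64{1, 2, 3}
        b := []int64{1, 2, 3}
        c := []int64{1, 2, 3}
        rowAtATime(a)
        vectorized(b)
        fused(c)
        fmt.Println(a, b, c) // all three print [4 6 8]
    }

The vectorized form pays one indirect call per operator per batch instead of per row; fusing removes even that indirection, at the cost of generating code.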
evrydayhustling over 6 years ago
Pretty impactful work for an intern! One thing I would have liked to see, both from a process and communication standpoint, is leading with some stats on how much faster the vectorized loops are in isolation from the surrounding engine.

It's always good practice to dig into a deep project like this with some napkin estimates of how much you stand to gain, and how much overhead you can afford to spend setting yourself up for the faster computation. (Not to mention how much of your own time is merited!)
nimish over 6 years ago
Awesome work. Really cool!

I have a hobby project to write an analytics DB that uses ISPC for vectorized execution. Currently not much (sums are real easy), but I really wonder if it could reduce the effort to vectorize these sorts of things.
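As a trivial example of the "sums are real easy" case, this is the shape of kernel that ISPC (or a compiler's auto-vectorizer) handles well: one pass over a column batch with no branches or indirect calls inside the loop. Sketched in Go rather than ISPC, with made-up names:

    // A toy per-column sum over a fixed-size batch.
    package main

    import "fmt"

    const batchSize = 1024

    // sumInt64Col sums one column batch in a single tight loop,
    // which is the easy case for SIMD code generation.
    func sumInt64Col(col []int64) int64 {
        var s int64
        for _, v := range col {
            s += v
        }
        return s
    }

    func main() {
        col := make([]int64, batchSize)
        for i := range col {
            col[i] = int64(i)
        }
        fmt.Println(sumInt64Col(col)) // 1023*1024/2 = 523776
    }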
blr246 over 6 years ago
Great write-up. Is the long-term vision to go completely to the vectorised query execution model, or are there cases where a row-oriented plan might be better, such as cases when there are complex computations involving multiple columns of a single row?
sAbakumoff over 6 years ago
I looked at the title and thought that the article is about rolling joints with hash by using some advanced robotic technology and software. Damn!
andonisus over 6 years ago
It was a pleasant surprise to see the code base written in Go!
jnordwick over 6 years ago
Column oriented dbs have been doing parallel (column per thread) joins for a while now, no? And I know they have been leaning heavily on vectorization for over a decade.
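Those are, roughly, two different axes: parallelism across partitions or columns versus vectorization within each batch. A much-simplified Go sketch of the distinction, with invented names, a deliberately stripped-down join, and Go's generic map standing in for a real columnar hash table:

    // Partition-per-goroutine parallelism on the outside, a per-batch probe
    // loop on the inside. A real vectorized engine would probe a columnar
    // hash table batch-at-a-time instead of using a generic map.
    package main

    import (
        "fmt"
        "sync"
    )

    // probePartition counts probe-side keys that hit the build-side table,
    // scanning its partition in one pass.
    func probePartition(build map[int64]struct{}, probe []int64) int {
        matches := 0
        for _, k := range probe {
            if _, ok := build[k]; ok {
                matches++
            }
        }
        return matches
    }

    func main() {
        // Build side: even keys 0, 2, ..., 18.
        build := make(map[int64]struct{})
        for k := int64(0); k < 20; k += 2 {
            build[k] = struct{}{}
        }

        // Probe side split into partitions, one goroutine per partition.
        partitions := [][]int64{{0, 1, 2, 3}, {4, 5, 6, 7}, {16, 17, 18, 19}}
        results := make([]int, len(partitions))

        var wg sync.WaitGroup
        for i, p := range partitions {
            wg.Add(1)
            go func(i int, p []int64) {
                defer wg.Done()
                results[i] = probePartition(build, p)
            }(i, p)
        }
        wg.Wait()

        total := 0
        for _, r := range results {
            total += r
        }
        fmt.Println(total) // 6 matches: 0, 2, 4, 6, 16, 18
    }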
JanecekPetr over 6 years ago
Generic specialization (List&lt;int&gt;) is coming. See https://openjdk.java.net/projects/valhalla/. Not this year (value types will be first). But they're working on it really hard.