
The missing tier for query compilers

88 points | by jamii | 3 months ago

6 comments

t0b1 | 3 months ago
The author struck out the part about CedarDB not being available -- which is true -- but Umbra has been available as a Docker container[1] for some time now. The "Umbra paper" linked also contains an artifact[2] with a copy of Umbra as well as some instructions on how to control the back-ends used, etc. (Cranelift is not available as an option in the Docker versions, however.)

I kind of disagree with the assumption that baseline compilers are easy to build (depending on your definition of baseline). A back-end like DirectEmit is not easy to write and even harder to verify. If you have a back-end written for your own IR, you will likely have to write tests in that IR, and it will probably be quite hard to simply port over codegen (or run-) tests from other compilers. Especially in the context of databases, it is not very reassuring to have a back-end that may explode the second you start to generate code slightly differently. We're working on making this a bit more commoditized, but in our opinion you will always need to do some work, since having another IR for a back-end (with defined semantics, so someone could write a code generator for you) is very expensive. In Umbra, translating Umbra-IR to LLVM-IR takes more time than compiling to machine code with DirectEmit.

Also, if they were easy to write, I would expect to see more people write them.

Copy-and-patch was also tried in the context of MLIR[3], but the exec-time results were not that convincing, and I have been told that it is unlikely for register allocation to work sufficiently well to make a difference.

[1]: https://hub.docker.com/r/umbradb/umbra

[2]: https://zenodo.org/records/10357363

[3]: https://home.cit.tum.de/~engelke/pubs/2403-cc.pdf
jamii | 3 months ago
The tradeoff in SingleStore is interesting. By default, and unlike e.g. Postgres, parameterized queries are planned ignoring the values of the parameters. This allows caching the compiled query but prevents adapting the query plan -- for the example in the post, SingleStore would pick one plan for both queries.

But you can opt out of this behaviour by wrapping each parameter in NOPARAM (https://docs.singlestore.com/cloud/reference/sql-reference/code-generation-functions/noparam/).
foobazgt | 3 months ago
The author pans meta-tracing and shows a graph of TruffleRuby having really crazy runtime behavior. However, it looks extremely suspicious -- like the kind of thing you'd see if there was a memory leak and the garbage collector was running constantly.

Most people seem really excited about the results coming out of Graal, and TruffleRuby seems to have some really impressive results in general, so the graph is surprising. It's also missing a bunch of details, so it's hard to speak to its veracity -- for example, what are the different versions of the runtimes, what flags were used, on what hardware?

As a counterexample, there's a different (admittedly old) result where TruffleRuby beats CRuby on the same benchmark by 4.38x: https://eregon.me/blog/2022/01/06/benchmarking-cruby-mjit-yjit-jruby-truffleruby.html
zX41ZdbW | 3 months ago
This is an interesting article!

ClickHouse does a hybrid approach. It uses a vectorized interpreter plus JIT compilation (with LLVM) of small expressions (which is also vectorized and generated for the target instruction set by compiling loops). The JIT compilation does not bother to compile the whole query, only simple fragments. It gives only a marginal improvement over the vectorized interpreter.

Even with this approach, JIT is a huge pile of problems. A few details here: https://clickhouse.com/blog/clickhouse-just-in-time-compiler-jit

Overall architecture overview: https://clickhouse.com/blog/first-clickhouse-research-paper-vldb-lightning-fast-analytics-for-everyone
convolvatron | 3 months ago
Clustrix wrote a compiler. It was able to call out to C for string intrinsics, communication, and btree operations, so it primarily handled extracting data from serialized formats, simple arithmetic, and constructing serializations for downstream.

This was to replace a little bytecode VM, and as far as I recall it took a very good programmer about 2 weeks for the basic version.

I don't see any reason to even try to bring in something like a general-purpose JIT. There's a huge benefit from just doing the basics, and you can move the line between precompiled functions and dynamically compiled ones on an ongoing basis. It's also cheap enough that I wouldn't get hung up on needing to amortize that cost with prepared statements.
UncleEntity | 3 months ago
The real original copy-and-patch paper [0] used gcc's addressable gotos to get the addresses of the "patches" and pointers for the operands, which can be replaced at jit-time, as opposed to the referenced paper, which uses llvm tomfoolery to create a database of code patches as part of the build process.

IIRC, there may be another "original" paper.

To help with the author's confusion: they use function arguments to manage the registers. If you have a simple operand like 'add' you pass the left and right arguments as function arguments, but if you have some values you want to keep in registers you pass those as additional arguments to the functions and just pass them through to the continuation, hoping the compiler does the right thing (i.e. doesn't spill the arguments during the function call).

Of course this leads to a combinatorial explosion of functions, as you need one specialized for every register you want to pass through:

    void add0(int l, int r, kont c);
    void add1(int l, int r, int r0, kont c);
    void add2(int l, int r, int r0, int r1, kont c);

and you also need to track in your jitifier who's doing what and how many intermediate results are needed at which program point, etc.

I believe both ways share the same method of generating a shitton of functions as a linked library and using that to create a database of code patches at compile time, which the runtime uses to patch it all together. One does it using llvm tooling and the other uses 'extern int *add_l, *add_r, *add_r0' to get the locations of the operands to replace at runtime with the actual value, because C doesn't care one bit what you do with pointers.

I'm probably wrong on some of the details, but that's the basics of what's happening.

__edit__ Reading the part on how extern variables are used, the example function definitions don't match perfectly with what is actually happening, but the concept is the same -- use the locations the pointers point to to find the places to replace with actual values at runtime. The function defs are more like what a 'musttail interpreter' would use. Not sure this 'correction' is any clearer...

[0]: http://ieeexplore.ieee.org/document/1342540/