Comparing the C FFI overhead on various languages

124 points · by generichuman · about 3 years ago

24 comments
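
For readers who haven't opened the repo: the benchmark's core idea is to call a trivial C function an enormous number of times through each language's FFI and compare the elapsed time against doing the same loop in plain C. Below is a minimal sketch of that C baseline; the function name (plusone), the iteration count, and the timing code are illustrative assumptions, not the repo's exact harness.

    /* newlib.c -- the "native" side: deliberately trivial, so the
       measurement is dominated by call overhead, not by real work. */
    int plusone(int x) { return x + 1; }

    /* bench.c -- the C baseline: call plusone() in a tight loop and
       time it, mirroring what each FFI binding is asked to do.
       Compiled as a separate translation unit (no LTO), so the call
       is not inlined away. */
    #include <stdio.h>
    #include <time.h>

    int plusone(int x);

    int main(void) {
        const long iterations = 500000000L;   /* illustrative count */
        long acc = 0;

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iterations; i++) {
            acc += plusone((int)i);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ms = (end.tv_sec - start.tv_sec) * 1000.0
                  + (end.tv_nsec - start.tv_nsec) / 1000000.0;
        printf("acc=%ld elapsed=%.0f ms\n", acc, ms);
        return 0;
    }

Because the callee does almost nothing, the gap between a language's number and this baseline is essentially the per-call cost of crossing the FFI boundary.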

kllrnohj · about 3 years ago
Another major caveat to this benchmark is that it doesn't include any significant marshalling costs. For example, passing strings or arrays from Java to C is much, much slower than passing a single integer. The same is going to be true for a lot (all?) of the GC'd languages, and it is especially true for strings when the language isn't utf8 natively (as in, even though Java can store utf8 internally, it doesn't expose that publicly, so JNI doesn't benefit).
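To make the marshalling point above concrete, here is a rough sketch of the C side of two JNI calls: one that receives a plain jint (essentially free to pass) and one that receives a java.lang.String, which has to be converted to a C string and released again on every call. The class and method names are hypothetical, assuming static native methods on a Bench class in the default package.

    #include <jni.h>
    #include <string.h>

    /* Cheap case: a jint crosses the boundary as-is, no conversion. */
    JNIEXPORT jint JNICALL
    Java_Bench_plusone(JNIEnv *env, jclass cls, jint x) {
        (void)env; (void)cls;
        return x + 1;
    }

    /* Expensive case: every call copies/pins the string data, converts
       it (Java strings are not natively UTF-8), and releases it again. */
    JNIEXPORT jint JNICALL
    Java_Bench_utf8Length(JNIEnv *env, jclass cls, jstring s) {
        (void)cls;
        const char *chars = (*env)->GetStringUTFChars(env, s, NULL);
        if (chars == NULL) return -1;   /* OOM: a Java exception is pending */
        jint len = (jint)strlen(chars);
        (*env)->ReleaseStringUTFChars(env, s, chars);
        return len;
    }

Multiply that copy/convert/release cycle by hundreds of millions of calls and the string version dwarfs the integer version, which is exactly the cost this benchmark does not capture.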
haberman · about 3 years ago
Some of the results look outdated. The Dart results look bad (25x slower than C), but looking at the code (https://github.com/dyu/ffi-overhead/tree/master/dart) it appears to be five years old. Dart has a new FFI as of Dart 2.5 (2019): https://medium.com/dartlang/announcing-dart-2-5-super-charged-development-328822024970

I'm curious how the new FFI would fare in these benchmarks.
tomas789 · about 3 years ago
There is no Python benchmark, but you can find a PR claiming it takes 123,198 ms. That would be the worst result by a wide margin.

https://github.com/dyu/ffi-overhead/pull/18
throw827474737 · about 3 years ago
So why isn't C the baseline (zig and rust being pretty close to it is quite expected), but both luajit and julia are significantly faster??
WalterBright · about 3 years ago
The D programming language has literally zero overhead to interface with C. The same calling conventions are used, and the types are the same.

D can also access C code by simply importing a .c file:

    import foo; // call functions from foo.c

analogously to how you can `#include "foo.h"` in C++.
dgan · about 3 years ago
I had to run it to believe it: I can confirm it's 183 seconds(!) for python3 on my laptop.

Also OCaml, because I was interested (milliseconds):

    ocaml(int,noalloc,native)   = 2022
    ocaml(int,alloc,native)     = 2344
    ocaml(int,untagged,native)  = 1912
    ocaml(int32,noalloc,native) = 1049
    ocaml(int32,alloc,native)   = 1556
    ocaml(int32,boxed,native)   = 7544
TazeTSchnitzel · about 3 years ago
It seems Rust has basically no overhead versus C, but it could have *negative* overhead if you use cross-language LTO. Of course, you can do LTO between C files too, so that would be unfair. But I think this sets it apart from languages that, even with a highly optimised FFI, don't have compiler support for LTO with C code.
cube2222 · about 3 years ago
Just a caveat (not sure if it matters in practice), but this benchmark is using very old versions of many of the languages it's comparing (5-year-old ones).
exebook · about 3 years ago
I developed a terminal emulator, file manager, and text editor (Deodar) 8 years ago in JavaScript/V8 with native C++ calls. It worked, but I was extremely disappointed by the speed; it felt as slow as if you had to go through passport control each time you call a C++ function.
ryukoposting · about 3 years ago
This is a cool concept, but the implementation is contrived (as many others describe), e.g. JNI array marshalling/unmarshalling has a lot of overhead. The Nim version is super outdated too (not sure about the other languages).
sk0g · about 3 years ago
For a game scripting language, Wren posts a pretty bad result here. I think it isn't explicitly game focused, though. The version tested is quite old, however, having been released in 2016.
mhh__ · about 3 years ago
Needs LTO; with that it will have 0 overhead in the compiled languages.

D can actually compile the C code in this test now.
khoobid_shoma · about 3 years ago
I guess it is better to measure CPU time instead of wall time (e.g. using clock()).
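A tiny sketch of the difference, not code from the benchmark: clock() reports CPU time consumed by the process, while clock_gettime(CLOCK_MONOTONIC, ...) reports elapsed wall time, so a run that sleeps or gets descheduled shows up very differently in the two numbers.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        clock_t cpu_start = clock();                  /* CPU time */
        struct timespec wall_start, wall_end;
        clock_gettime(CLOCK_MONOTONIC, &wall_start);  /* wall time */

        /* Burn a little CPU, then sleep: sleeping costs wall time
           but almost no CPU time. */
        volatile long acc = 0;
        for (long i = 0; i < 100000000L; i++) acc += i;
        sleep(1);

        clock_t cpu_end = clock();
        clock_gettime(CLOCK_MONOTONIC, &wall_end);

        double cpu_ms  = (cpu_end - cpu_start) * 1000.0 / CLOCKS_PER_SEC;
        double wall_ms = (wall_end.tv_sec - wall_start.tv_sec) * 1000.0
                       + (wall_end.tv_nsec - wall_start.tv_nsec) / 1000000.0;
        printf("cpu=%.0f ms  wall=%.0f ms\n", cpu_ms, wall_ms);
        return 0;
    }

For a tight FFI loop on an idle machine the two numbers are close, which is presumably why the benchmark gets away with wall time.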
KSPAtlas · about 3 years ago
What about Common Lisp?
planetis · about 3 years ago
That Nim version has just left kindergarten and is prepping for elementary.
dunefox · about 3 years ago
> - julia 0.6.3

That's an ancient version; the current version is v1.7.2.
SemanticStrengh · about 3 years ago
Java has a new API for FFI, the Foreign Function & Memory API.
alkonaut · about 3 years ago
Any idea why mono is used rather than .NET here?
bfrog · about 3 years ago
Go looks horribly slow. I thought segmented stacks had gone away to improve this?
ta988 · about 3 years ago
Java has Project Panama coming, which may improve things a little.
thot_experiment · about 3 years ago
n.b. this is using an absolutely ancient version of Node, though I'm not sure that would change anything; worth noting.
sdze · about 3 years ago
Can you try PHP?
ksec · about 3 years ago
Missing 2014, or 2018 in the title.
SolitudeSF · about 3 years ago
this benchmark is awful (which is expected)