The "use cases and opinions" section is pretty nice, and I agree with it.<p>I would expect Rust and Swift to have the same execution performance on Fibonacci. In fact, I'd expect them to generate essentially identical LLVM IR. The fact that they have differences in performance leads me to believe that it's some sort of "LLVM IR optimization didn't trigger because of a bug" sort of issue—perhaps the optimization pass that converts recursive functions to loops.<p>I have to say, though: Beware of Fibonacci as a benchmark. I believe it's vulnerable to a compiler that optimizes it to the closed form solution [1]. I don't think compilers do this optimization today, but if you popularize a Fibonacci benchmark you will create market pressure for them to implement it and ruin your results. :)<p>[1]: <a href="https://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression" rel="nofollow">https://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_e...</a>
That's quite a small sampling of microbenchmarks, but still interesting.

One thing you failed to note is that only Rust and Swift aren't garbage collected. That means only Rust and Swift should be considered for applications where deterministic performance is required, in other words for soft or hard real time.

Many games, for instance, have soft real-time requirements.
<p><pre><code> Golang: to enable all the cores, you have to put in your code runtime.GOMAXPROCS(num of cores you want to use)
</code></pre>
This is no longer true. Since Go 1.5, GOMAXPROCS defaults to the number of CPUs available.
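If you do need to set it explicitly (on a pre-1.5 toolchain, or to override the default), a minimal sketch looks like this; using runtime.NumCPU() as the target is just an illustration:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Since Go 1.5 this defaults to runtime.NumCPU(), so the call
        // is only needed to override the default.
        prev := runtime.GOMAXPROCS(runtime.NumCPU())
        // GOMAXPROCS(0) reports the current setting without changing it.
        fmt.Printf("GOMAXPROCS was %d, now %d\n", prev, runtime.GOMAXPROCS(0))
    }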
Here is the source; all the tests look quite similar across the languages: https://github.com/grigio/bench-go-rust-swift