Fascinating things I learned from this article:

1. The benchmarks game is mostly a game of who can solve the problem fastest under the constraint of "all the code has to be in this language." It's not about how anyone would realistically write the code; if ludicrous optimizations were in scope for any real-world project, so would be *calling out to a different language*. It's worse than the usual problem of benchmarks being unrepresentative of real code; the implementations are also unrepresentative.

2. Apparently performant Haskell involves unchecked raw memory access? What's the story there?
I've been working with Rust for about a week now, and I've found it a mirror of Go in several ways.

Go aims to be simple, with the goal of being 'easy' to start writing idiomatically within a few days of jumping into the language. Rust, by comparison, is a behemoth in terms of complexity.

There's also the very relevant compilation-speed difference: if Rust could compile within 5-10x the time Go does, it would be much more pleasant to work with. Being able to compile a large project within 2 seconds is great for iteration speed; having to wait upwards of 10 seconds on toy projects is not.

However, Rust is stupid fast while also being safer than Go. Also, generics; that's a flamewar for another day.

Context: I've been working with Go for around 2-3 years now. I recently decided to pick up Rust because of the guarantees it provides along with the crazy performance.
The author also points out that some of the benchmarks poorly represent real workloads:

"Bottom up (since the worst offenders are now first),

- binary-trees is silly since it measures allocation speed for a case that simply doesn't exist in real code;
- thread-ring is basically insane, since nobody ever bottlenecks like that;
- chameneos-redux's C++ implementation is ridiculous. The C is not so ridiculous, but you still have the problem that basically every language in the top few spots does something completely different;
- pidigits tests whether you have bindings to GMP;
- regex-dna tests a regex engine on a small subset of cases (arguably the first half-acceptable benchmark);
- k-nucleotide tests who has the best hash table for this particular silly scheme, and they don't all even do the same thing (eg. Scala precompacts, like my new Rust version);
- mandelbrot is kind'a OK;
- reverse-complement would be kind'a OK if not for a few hacky implementations (like the Rust);
- spectral-norm is kind'a OK;
- Haskell basically cheats fasta (which is why I copied it);
- meteor-contest is too short to mean anything at all;
- fannkuch-redux is probably kind'a OK;
- n-body is kind'a OK.

So maybe 5/13 are acceptable, and I'd still only use 4 of those. I think if looking at mandelbrot, spectral-norm, fannkuch-redux and n-body you can argue the benches are a reasonable measure of peak performance. However, these cases are also all too small and simple to really be convincing either, nor is it particularly fair (where's NumPy for Python?)."

https://users.rust-lang.org/t/blog-rust-faster/3117/12?u=acconsta
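To make the binary-trees complaint concrete: fast entries generally don't allocate nodes one at a time from the general-purpose allocator the way ordinary tree code would; they bump-allocate out of a pool or arena. A rough sketch of that trick in Rust (using the external typed_arena crate purely as an illustration, not any particular benchmark entry):

    use typed_arena::Arena; // external crate, used only to illustrate the pool/arena trick

    struct Node<'a> {
        left: Option<&'a Node<'a>>,
        right: Option<&'a Node<'a>>,
    }

    // Build a complete tree of the given depth, bump-allocating every node
    // out of the arena instead of asking the general-purpose allocator.
    fn bottom_up<'a>(arena: &'a Arena<Node<'a>>, depth: u32) -> &'a Node<'a> {
        if depth == 0 {
            arena.alloc(Node { left: None, right: None })
        } else {
            arena.alloc(Node {
                left: Some(bottom_up(arena, depth - 1)),
                right: Some(bottom_up(arena, depth - 1)),
            })
        }
    }

    fn count(node: &Node) -> u32 {
        1 + node.left.map_or(0, count) + node.right.map_or(0, count)
    }

    fn main() {
        let arena = Arena::new();
        let tree = bottom_up(&arena, 10);
        println!("nodes: {}", count(tree)); // complete tree of depth 10: 2^11 - 1 = 2047
    }

Which is the quoted point: the benchmark ends up measuring how cheap you can make allocation for this one shape of workload, not how the language handles trees in real code.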
Every time I look at Rust, and at posts about Rust like this one, it occurs to me that "I can use this to write 'C' libraries." A variable number of moments later, I think to myself that I'll keep Go at the bottom of my list of languages to learn, after Ruby, Lisp, Fortran, COBOL, and Intercal. I'll get to Go one day, if it's still relevant to me; for now, Rust + Erlang/Elixir on FreeBSD is like having my own personal unicorn.

I've been meaning to work on a proper "Erlang/OTP-ish" framework for Python for a long time. Pulsar[1] is a good start, but it needs more developers and more documentation in order to grow.
It has an example that does web sockets with normal Django, no massive hacks (which someone more familiar with Django should really check out and talk about more widely). But no Flask example?

[1] https://github.com/quantmind/pulsar
Very cool, and definitely gives me some insight into those benchmarks. Which makes me wonder -- are there benchmarks for "boring" programs in a variety of languages? I'm generally more interested in the execution speed of implementations that I would actually have time to write when on a deadline.
"<i>k_nucleotide is a test of how fast your Hash map is.</i>"<p>Speaking of Rusts' HashMap, the Robin Hood map is pretty darn sweet (and I say that as someone who translated a Pythonish map to Rust), but the last time I looked it was still throttled by SipHash.[1] Is there any progress on SIMD-ing that?<p>[1] "Safety vs. performance", etc., etc.