HVM is the ultimate conclusion of years of optimal-evaluation research. I've long been an enthusiast of optimal runtimes. Until now, though, my most efficient implementation had barely passed 50 million rewrites per second. It did beat GHC in cases where optimality helped, like λ-encoded arithmetic, but in more real-world scenarios it was still far behind. Thanks to a recent memory-layout breakthrough, though, we managed to reach a peak performance of *2.5 billion* rewrites per second on the same machine. That's a ridiculous 50x improvement.<p>That is enough for it to achieve roughly the same performance as GHC on ordinary programs, and even to outperform it when automatic parallelism kicks in, if that counts! Of course, there are still cases where it performs worse (usually by at most 2x, but remember: it is a 1-month prototype versus the largest functional compiler in the world). I'm confident HVM's current design can scale and become the fastest functional runtime in the world, because I believe the optimal algorithm is inherently superior.<p>I'm looking for partners! Check the notes at the end of the repository's README if you want to get involved.
If you read HOW.md, it mentions that the key insight was to introduce an operation into the language that allows a temporary violation of 'sensical rules', which the author calls superposition. Computing with, or applying, one of these superposed clones then collapses it down to a final result that obeys the rules. Given that the lambda calculus is core to computing information, I can't help but think: what if this is actually an insight into something profound about how our universe works?
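For anyone curious what that looks like concretely, here is a rough sketch in HVM's own notation, based on my reading of HOW.md, where `{a b}` denotes a superposition of two values (treat this as an approximation of the repo's actual syntax and reduction rules, not an authoritative rendering):

```
// {a b} is a superposition: two values occupying one location.
// Applying a superposed function duplicates the argument and
// superposes the two results:
({λx(x) λy(y)} 42)
// steps to something like:
{42 42}
// A later duplication then projects each side out, collapsing the
// superposition back into ordinary, single-valued terms.
```

The superposition is exactly the 'rule-violating' intermediate state the comment describes: a term briefly holds two values at once, and the rewrite rules guarantee the two sides are disentangled by the time a normal form is reached.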