I'm not sure the timeline described ("only at the last minute did it switch to stack-based encoding for the operators") is accurate, but it is true that for a while we were working towards a more register-oriented encoding instead of the stack-oriented one that shipped. The representation of trees and operands was also different. I think what ultimately shipped was probably right, but the block semantics the article describes are incredibly gross, and if I had known about them I would've blocked them. The author's conclusion that this is due to wasm's asm.js-derived heritage is accurate. (Also, arguably the 'lots of locals' model was unavoidable, since everyone was compiling wasm using JS runtimes anyway.)

Incidentally, this claim is false: "No streaming compiler had yet been built, hell, no compiler had yet been built." Early in development we had at least two different compilers used to generate test cases: one for a home-grown imperative language, written by Nick Bray, and another for a subset of C#, which I wrote [1]. Having those two compilers generating code early on was useful, given that neither Emscripten nor LLVM was capable of compiling real apps yet, so without them we would have been flying blind. Development of LLVM integration also started *very* early; the problem is just that it took a long time until it was usable.

As for whether the lessons from those compilers were actually paid attention to or acted upon, well...

P.S. I still don't understand the reasoning behind "blocks have return values". Does any popular programming language out there do this, except maybe some of the ML-derived ones? I've never run into it in production software, and it's certainly not something a typical compiler would generate unless the source language had it as a primitive.

1: https://github.com/kg/ilwasm/blob/master/third_party/tests/Raytracer.cs
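Re: the block-results point above, here's a minimal hand-written sketch of what the feature looks like in the wasm text format (the function and names are mine, purely for illustration, not real compiler output). The block's type declares that it yields an i32, and both the fallthrough and the br_if carry that value out:

    (module
      (func $clamp_neg (param $x i32) (result i32)
        (block $done (result i32)   ;; the block itself yields an i32
          (local.get $x)            ;; candidate result, left on the stack
          (local.get $x)
          (i32.const 0)
          (i32.ge_s)                ;; is x >= 0 ?
          (br_if $done)             ;; if so, exit the block carrying x
          (drop)                    ;; otherwise discard x...
          (i32.const 0))))          ;; ...and yield 0 instead

Whatever is on the stack when the block ends (or is carried by a branch targeting it) becomes the block's value - expression-oriented behavior you'd expect from an ML-family source language rather than from typical compiler output.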