A lot of the stuff they’re doing with wasm is confusing to me. Compiling an interpreter to wasm just to run a script in the browser seems, IDK, a bit much.<p>I don’t see why applying the first Futamura projection (targeting wasm) wouldn’t both produce a significantly smaller module and open up more opportunities to optimize/specialize the code, since the analysis could happen offline. One could even use the second Futamura projection along with the dynamic code loading from TFA and get an online JIT compiler virtually for free.<p>If I were getting paid megabucks to do these types of things, that’s where I’d spend my research time.
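<p>Roughly what I mean by the first projection, as a toy sketch (all names made up, and using JS codegen in place of wasm to keep it short): instead of shipping an interpreter plus a program, specialize the interpreter to a fixed program and emit residual code with the dispatch folded away.

```javascript
// A toy interpreter for expressions like ["+", ["*", "x", 2], 1]
function interp(expr, env) {
  if (typeof expr === "number") return expr;
  if (typeof expr === "string") return env[expr];
  const [op, a, b] = expr;
  const l = interp(a, env), r = interp(b, env);
  return op === "+" ? l + r : l * r;
}

// First Futamura projection, by hand: partially evaluate interp with
// respect to a fixed expr, producing residual source with no dispatch.
function specialize(expr) {
  function emit(e) {
    if (typeof e === "number") return String(e);
    if (typeof e === "string") return `env.${e}`;
    const [op, a, b] = e;
    return `(${emit(a)} ${op} ${emit(b)})`;
  }
  return new Function("env", `return ${emit(expr)};`);
}

const prog = ["+", ["*", "x", 2], 1];
const compiled = specialize(prog);
console.log(interp(prog, { x: 5 }));   // → 11
console.log(compiled({ x: 5 }));       // → 11, same answer, no interpreter loop
```

The second projection would be specializing `specialize` itself, which is where the free JIT comes from.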
This is a nice overview of how to achieve just-in-time compilation in Wasm, and the demo is pretty cool. Good work!<p>We use similar techniques to power WebVM[1], an x86 virtual machine that runs Linux programs in the browser.<p>A proper Wasm JIT API in JavaScript would be even better, of course, but as the article says, cool things are already possible right now.<p>I expect to see more projects doing Wasm just-in-time compilation in the future (I believe that v86[2] already does it too).<p>[1]: <a href="https://webvm.io/" rel="nofollow">https://webvm.io/</a><p>[2]: <a href="https://github.com/copy/v86" rel="nofollow">https://github.com/copy/v86</a>
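<p>For anyone wondering what the building block looks like: the whole trick is that a Wasm module is just bytes you can synthesize and instantiate at runtime. A minimal sketch (the bytes are a hand-assembled module exporting `add(i32, i32) -> i32`; a real JIT would emit them on the fly):

```javascript
// Hand-assembled Wasm binary: a single exported function add(a, b) = a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, 1 body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Runtime loading: compile and instantiate the freshly generated bytes.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // → 5
```

(In the browser you'd normally prefer the async `WebAssembly.instantiate` for anything non-trivial; the sync constructors keep the sketch short.)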
I experimented with this for native x86 many years ago ([link redacted]). I used it to generate BitBlt functions with no conditionals in the hot paths, which created noticeable performance improvements with no compromise in flexibility. Debugging code like that is painful though!
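<p>The trick in miniature, using JS codegen instead of x86 (names are hypothetical, not from my old code): bake the raster op into the generated function so the hot loop carries no per-pixel branch.

```javascript
// Raster ops as expression fragments over source (s) and destination (d).
const OPS = { copy: "s", and: "d & s", or: "d | s", xor: "d ^ s" };

// Generate a blit function specialized to one op: the per-pixel
// conditional on the op is resolved at codegen time, not in the loop.
function makeBlit(opName) {
  return new Function("dst", "src", `
    for (let i = 0; i < src.length; i++) {
      const s = src[i], d = dst[i];
      dst[i] = ${OPS[opName]};
    }
  `);
}

const blitXor = makeBlit("xor");
const dst = new Uint8Array([0xff, 0x0f]);
blitXor(dst, new Uint8Array([0x0f, 0x0f]));
// dst is now [0xf0, 0x00]
```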
I implemented a math parser a couple of years ago that did JIT Wasm code generation. It operated on 1-D arrays, providing a kind of NumPy-like syntax, and it achieved very decent performance. It was quite a lot of fun.
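<p>A toy sketch of the idea (not my actual code, and it only handles expressions small enough that every section size fits in a single LEB128 byte): compile an arithmetic expression in `x` to a Wasm function `f(x) -> f64`, then map it over a plain array.

```javascript
// Little-endian encoding of an f64 immediate.
function f64(n) {
  return [...new Uint8Array(new Float64Array([n]).buffer)];
}

// Compile expr (number | "x" | [op, a, b]) to Wasm opcodes.
function emit(e) {
  if (typeof e === "number") return [0x44, ...f64(e)]; // f64.const
  if (e === "x") return [0x20, 0x00];                  // local.get 0
  const ops = { "+": 0xa0, "-": 0xa1, "*": 0xa2, "/": 0xa3 };
  return [...emit(e[1]), ...emit(e[2]), ops[e[0]]];
}

function compile(expr) {
  const body = [0x00, ...emit(expr), 0x0b];            // no locals, code, end
  const bytes = new Uint8Array([
    0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,    // magic + version
    0x01, 0x06, 0x01, 0x60, 0x01, 0x7c, 0x01, 0x7c,    // type: (f64)->f64
    0x03, 0x02, 0x01, 0x00,                            // func 0 uses type 0
    0x07, 0x05, 0x01, 0x01, 0x66, 0x00, 0x00,          // export "f"
    0x0a, body.length + 2, 0x01, body.length, ...body, // code section
  ]);
  const { f } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;
  return xs => xs.map(x => f(x));                      // elementwise apply
}

const fn = compile(["+", ["*", "x", "x"], 1]);         // x*x + 1
console.log(fn([0, 1, 2])); // → [1, 2, 5]
```

A real version would emit the loop into the Wasm itself and operate on linear memory, rather than calling across the JS/Wasm boundary per element.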