I take it these "interaction nets" won't be compilable from just any old Python code? I can only imagine there are still some pretty heavy constraints around parallelism. Is the idea here that your Python script would be SIMD?
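(For what it's worth, my understanding is that the parallelism here is closer to fork/join task parallelism than to SIMD: the runtime reduces independent parts of the program graph concurrently. A rough sketch of the *shape* of code that exposes that kind of parallelism, in plain Python rather than any actual Bend/HVM syntax:)

```python
def tree_sum(lo: int, hi: int) -> int:
    """Sum the integers in [lo, hi) by splitting the range in half.

    The two recursive calls below have no data dependency on each
    other, so a runtime that reduces independent subexpressions in
    parallel can evaluate them concurrently. This is branch-level
    (fork/join) parallelism, not SIMD lanes.
    """
    if hi - lo == 1:
        return lo
    mid = (lo + hi) // 2
    left = tree_sum(lo, mid)    # independent subproblem
    right = tree_sum(mid, hi)   # independent subproblem
    return left + right

# By contrast, a sequential loop like `total += x` threads a
# dependency through the accumulator on every step, so it exposes
# no independent branches to run in parallel.
print(tree_sum(0, 8))  # → 28
```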
Very nice! I recall reading that the HVM has some semantic differences from the classic lambda calculus, which resulted in different results for some programs. Would this be an issue when e.g. translating Haskell programs? (And/or is there some way to 'emulate' lambda-calculus semantics?)

Also, are there any plans to get this working on non-Nvidia GPUs, e.g. with ROCm/HIP (which would hopefully be a straightforward translation, if AMD's software does its job) or OpenCL (more effort, but more portable)?
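(To make the kind of term at issue concrete: the differences, as I understand them, show up with higher-order duplication, where a cloned function itself clones its argument. Lamping/Asperti-style optimal reduction needs extra "oracle" bookkeeping to get every such term right, and HVM instead restricts duplication. Here's that duplication pattern with Church numerals in plain Python, which just shows what ordinary beta-reduction gives; whether a given interaction-net evaluator matches it on every term of this shape is exactly the question:)

```python
# Church numerals in plain Python: `n(f)(x)` applies f to x, n times.
two = lambda f: lambda x: f(f(x))

# `two(two)` clones a function that itself clones its argument --
# the higher-order duplication pattern where optimal-reduction
# evaluators need oracle bookkeeping to agree with textbook
# beta-reduction on all terms.
four = two(two)          # Church exponentiation: 2^2 = 4

succ = lambda n: n + 1
print(four(succ)(0))     # → 4 under ordinary beta-reduction
```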
I have been very interested in the HVM since the first time it was posted on GitHub. One thing that has been sorely missing is a front-end for any real-world language. This will allow real comparisons of programs, instead of tiny toys rewritten from Haskell to interaction nets.
Haskell is a complex language that would be very hard to target. What do you think of targeting GHC Core, the intermediate representation that GHC uses? It is much simpler than Haskell itself and would preserve the opportunities for parallelization.
This always seemed like a very interesting project; we need to get to the point where, if things can run in parallel, they must run in parallel, to make software more efficient on modern CPUs/GPUs.

It won't attract funds, I guess, but it would be far easier to make this work with an APL or a Lisp/Scheme. There is already great research for APL [0], and looking at the syntax of HVM-core it seems it would be rather easy to knock up a CL DSL. If only there were more hours in a day.

[0] https://github.com/Co-dfns/Co-dfns
So, could this be used to compile an OS, so it would mostly run on the GPU? And also, say, Python itself, perhaps V8, an x86 emulator, stuff like that?
I wish people would stop saying GPUs when they mean Nvidia GPUs. It’s like saying you’ve made a product for cars, but then it turns out it’s only for Mercedes cars.