Big in 2020 as a Show HN (456 points, 80 comments) <a href="https://news.ycombinator.com/item?id=23283880">https://news.ycombinator.com/item?id=23283880</a>
The Koka language uses a similar approach to track resource usage, except that Koka uses ref counting and just removes unnecessary ref counting operations. Neat stuff.
From glancing through a few of the pages that piqued my interest, I was somewhat surprised to see this section in "How to Execute Types" (<a href="https://vekatze.github.io/neut/how-to-execute-types.html" rel="nofollow">https://vekatze.github.io/neut/how-to-execute-types.html</a>):<p>> Here, we'll see how a type is translated into a function that discards/copies the terms of the type. To see the basic idea, let's take a simple ADT for example:<p><pre><code> data item {
| New(int, int)
}
</code></pre>
> The internal representation of New(10, 20) is something like the below:<p>> New(10, 20)<p><pre><code> // ↓ (compile)
let v = malloc({2-words}) in
store(10, v[0]);
store(20, v[1]);
v
</code></pre>
I suspected that it's not actually heap-allocating every single bit of memory in every program, and from looking around more in the docs, I _think_ the "Allocation Canceling" section here explains what I was missing (<a href="https://vekatze.github.io/neut/basis.html#allocation-canceling" rel="nofollow">https://vekatze.github.io/neut/basis.html#allocation-canceli...</a>):<p>> When a free is required, Neut looks for a malloc that is the same size and optimizes away such a pair if one exists.<p>This is a really interesting way of automating memory management at compile time. I imagine there's still a lot of room for different choices in this strategy (e.g. reusing part of a larger allocation rather than requiring an exact size match, and keeping the remainder for a future allocation), and I'm super curious whether this would end up encouraging different patterns than existing memory management systems.<p>Offhand, it almost seems like it could act as a built-in allocation buffer managed by the compiler, and I'm curious whether the reuse algorithm is smart enough to handle something like manually allocating the maximum amount of memory needed for the lifetime of the program up front and then reusing that for the duration, to avoid needing to allocate anything dynamically at all (although my worry is that this would devolve into the knapsack problem and not be feasible in practice). If this did work, my immediate idea would be some sort of hook where you could specify the maximum amount of memory you'd be willing to use, which could then turn "using too much memory at runtime" into a compiler error. My assumption is that I'm missing something that would make all of this not work the way I'm thinking, though.
Could someone explain the “Necessity and noema” section [1] or share a reference? It looked like it might be significant, but I couldn’t make much sense of it.<p>[1] <a href="https://vekatze.github.io/neut/terms.html#necessity-and-noema" rel="nofollow">https://vekatze.github.io/neut/terms.html#necessity-and-noem...</a>
It looks partly like OCaml, with the "let ... in" kind of syntax, and also the word "unit". In OCaml, I think "unit" is the return type of a function that doesn't return any meaningful value, but why is the word "unit" used for that?
For "How Fast is This?" it links to a benchmarks page, which only shows that it's faster than Haskell. It would be more informative to instead compare against a language that is more popular and/or more performant than Haskell.
I'm currently reading through the automatic memory management claims which look really cool (reminds me of linear types), but the highlighted punctuation (, : =) makes it very painful to read.