I suspect that GC'd languages could mitigate this problem by introducing regions: separate areas of memory that cannot point at each other. Pony actors [0] have them, and Cone [1] and Vale [2] are trying new things with them.

If Go had this, it might never need to run its GC at all, because it could just fire up a new region for every request. The request will most likely finish and blast away its memory before a collection is needed, or the runtime could choose to collect only when that particular goroutine/region is blocked (rough sketch below the links).

Extra benefit: if there's an error in one region, we can blast that region away and the rest of the program keeps running!

[0] https://tutorial.ponylang.io/types/actors.html#concurrent

[1] https://cone.jondgoodwin.com/fast.html

[2] https://verdagon.dev/blog/seamless-fearless-structured-concurrency
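For a concrete flavor of "a region per request" in Go, here is a minimal sketch built on Go's experimental arena package (gated behind GOEXPERIMENT=arenas and subject to change or removal). It only approximates the idea: arenas give you bulk allocation and a single bulk free, but unlike the regions in Pony, Cone, or Vale they don't prevent pointers from crossing between regions or the GC heap, and freeing is manual rather than tied to the goroutine.

    // Sketch only: requires building with GOEXPERIMENT=arenas (Go 1.20+ experiment).
    package main

    import (
        "arena"
        "fmt"
    )

    type Request struct{ Path string }
    type Response struct{ Body string }

    func handle(req *Request) {
        a := arena.NewArena()
        defer a.Free() // "blast away" the whole region when the request ends

        // Allocated inside the arena rather than on the GC-managed heap.
        resp := arena.New[Response](a)
        resp.Body = "handled " + req.Path // the concatenated string itself still lives on the GC heap
        fmt.Println(resp.Body)
    }

    func main() {
        handle(&Request{Path: "/index"})
    }

With one arena per request (or per goroutine), short-lived requests never feed the collector at all, which is roughly the effect described above.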
Not discrediting Rust, but I've noticed you rarely hear "we improved performance by rewriting our implementation in the same language"... although that, too, can yield similar performance improvements.
How illuminating. From CloudFlare's posts, I had been under the impression that Go's GC was incredibly unintrusive, with near-real-time performance for applications operating in increments of a few hundred milliseconds. For example, CloudFlare uses Go to analyze network traffic.

Yes, Rust provides a more predictable, faster memory-management model than Go, at the expense of unpredictable, expensive memory leaks triggering application termination.

Curious how much time and effort was dedicated to improving the GC, which is a useful endeavor in its own right.
Or they just got bored and wanted to try a shinier toy. I've seen this happen dozens of times, and all the bullshit justifications for it are just that: bullshit.

Not saying that's the case here, but it's highly likely.