Oh nice, the results are pretty good!<p>> On a set of kernel-intensive benchmarks (including NGINX and Redis) the fraction of kernel CPU time Biscuit spends on HLL features (primarily garbage collection and thread stack expansion checks) ranges up to 13%. The longest single GC-related pause suffered by NGINX was 115 microseconds; the longest observed sum of GC delays to a complete NGINX client request was 600 microseconds. In experiments comparing nearly identical system call, page fault, and context switch code paths written in Go and C, the Go version was 5% to 15% slower.<p>A 10% slowdown in return for memory safety could be a worthwhile tradeoff in some cases. And GC pauses were barely an issue (under 1 ms in the worst case measured).
Not to hijack the thread, but for the Go sceptics in systems programming: F-Secure also decided to prove them wrong and is shipping bare-metal Go for its security solutions.<p><a href="https://labs.f-secure.com/blog/tamago/" rel="nofollow">https://labs.f-secure.com/blog/tamago/</a><p><a href="https://www.f-secure.com/en/consulting/foundry" rel="nofollow">https://www.f-secure.com/en/consulting/foundry</a>
Note that this isn't unmodified Go; it's Go with some additions for memory management, in particular the need to annotate loops with their trip counts (see section 6.3).
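For a flavor of what such an annotation buys, here is a minimal Go sketch (hypothetical names; not Biscuit's actual annotation mechanism, which its static-analysis tool defines): a bounded trip count lets a tool multiply the bound by the per-iteration allocation to charge a worst-case heap reservation before the call runs.<p><pre><code>package main

// maxDirEnts is the (hypothetical) annotated upper bound on loop trips.
const maxDirEnts = 128

type dirEnt struct {
	name  [64]byte
	inode uint64
}

// readDir allocates at most maxDirEnts dirEnt objects, so an analyzer
// can bound its heap use as maxDirEnts * sizeof(dirEnt) up front.
func readDir(n int) []*dirEnt {
	if n > maxDirEnts {
		n = maxDirEnts
	}
	ents := make([]*dirEnt, 0, n)
	for i := 0; i < n; i++ { // trip count bounded by maxDirEnts
		ents = append(ents, &dirEnt{})
	}
	return ents
}

func main() {
	_ = readDir(16)
}</code></pre>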
Note that in addition to the CPU overhead, the paper points out that Go's heap needs a factor of 2 to 3 of headroom to run efficiently:<p><pre><code> > A potential problem with garbage collection is that it
> consumes a fraction of CPU time proportional to the
> “headroom ratio” between the amount of live data and
> the amount of RAM allocated to the heap. This section
> explores the effect of headroom on collection cost.
>
> [...]
>
> In summary, while the benchmarks in §8.4 / Figure 7
> incur modest collection costs, a kernel heap with millions of live
> objects but limited heap RAM might spend
> a significant fraction of its time collecting. We expect
> that decisions about how much RAM to buy for busy
> machines would include a small multiple (2 or 3) of the
> expected peak kernel heap live data size.
>
> [...]
>
> If CPU performance is paramount,
> then C is the right answer, since it is faster (§8.4, §8.5).
> If efficient memory use is vital, then C is also the right
> answer: Go’s garbage collector needs a factor of 2 to 3 of
> heap headroom to run efficiently (see §8.6).</code></pre>
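For a rough sense of that trade-off, here is a sketch against the stock Go runtime's GOGC knob, which controls exactly this headroom (Biscuit uses a modified runtime, so treat this purely as an analogy): the collector triggers at roughly live * (1 + GOGC/100), so GOGC=100 means about 2x headroom over live data and GOGC=200 about 3x.<p><pre><code>package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Trade memory for CPU: a larger GOGC gives the collector more
	// headroom and therefore fewer cycles per unit of allocation.
	old := debug.SetGCPercent(200) // ~3x headroom over live data
	fmt.Printf("GOGC was %d, now 200\n", old)

	// Back-of-the-envelope sizing from the paper's advice: provision a
	// small multiple (2 or 3) of the expected peak live kernel heap.
	const peakLiveMB = 512 // hypothetical peak live data
	for _, factor := range []int{2, 3} {
		fmt.Printf("live %d MB x %d headroom -> budget %d MB heap\n",
			peakLiveMB, factor, peakLiveMB*factor)
	}
}</code></pre>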
See also, for something surprisingly and alarmingly close to this, gVisor:<p><a href="https://github.com/google/gvisor" rel="nofollow">https://github.com/google/gvisor</a>
But development seems to have stopped; the latest commit was on Jun 8, 2019.<p>At the time this project showed up, I gave it a try on QEMU. It was cool and I was like "I must join this!" In the end, though, apart from one minor fix, I wasn't able to contribute more because the development environment wasn't comfortable: the patched Go tree was not easy to follow, and it also looks impossible for others to simply rebase onto a later Go.<p>My takeaway is that, although Go provides many OS-like features, you still have to draw a clear line between the OS and the language it uses if you want it to be maintainable and evolvable. I am still obsessed with the idea of making an OS in Go and am slowly trying to build one myself.
This is interesting because, if you can use channels as described in the CSP book [1], you could build a kernel that is guaranteed to be free of concurrency bugs.<p>This would be important because even if you have proven the functional correctness of a kernel, that proof typically excludes the concurrency aspect.<p>[1] <a href="https://www.cs.cmu.edu/~crary/819-f09/Hoare78.pdf" rel="nofollow">https://www.cs.cmu.edu/~crary/819-f09/Hoare78.pdf</a>
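As a tiny Go illustration of that style (a sketch, not kernel code): a single goroutine owns the mutable state and everyone else talks to it over channels, so data races on that state are ruled out by construction; deadlocks and protocol-level bugs still need separate reasoning.<p><pre><code>package main

import "fmt"

type request struct {
	delta int
	reply chan int
}

// counter is the sole owner of its state; all updates are serialized
// through the request channel, so no locks are needed.
func counter(reqs <-chan request) {
	total := 0
	for r := range reqs {
		total += r.delta
		r.reply <- total
	}
}

func main() {
	reqs := make(chan request)
	go counter(reqs)

	for i := 1; i <= 3; i++ {
		reply := make(chan int)
		reqs <- request{delta: i, reply: reply}
		fmt.Println("total:", <-reply)
	}
}</code></pre>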
Reminds me of when I was excited about <a href="https://web.archive.org/web/20120104065532/http://web.cecs.pdx.edu/~kennyg/house/" rel="nofollow">https://web.archive.org/web/20120104065532/http://web.cecs.p...</a> for a high-level-language kernel.