The article didn't list JNode, but it's also a pure Java OS.<p>I noticed in another thread that a few people seem to think you can't implement an entire operating system in a GCd language like Java or C#, but that isn't true. You can do it like this:<p>1. Use an ahead-of-time compiler to compile the base runtime/kernel image to static machine code. In managed-language operating systems like JNode or Singularity there isn't a strong distinction between kernel and language runtime, so they are the same thing. This base needs to contain at least a GC and probably a JITC, as well as some basic support services. The base itself can be written in the managed language.<p>2. Write a very short bootstrap loader in assembly, like every OS has, which sets things up just enough to jump into that runtime's entry point.<p>3. Writing a compiler in a managed language is easy enough, but what about a GC? For that, you teach the compiler that some methods are 'magic' and shouldn't be treated as normal method calls. Instead they become either compiler intrinsics that are translated to pure assembly, e.g. to read/write raw memory locations, or they are compiled in special ways, for example with GC safe points removed.<p>The current best example of this in action is the GC in SubstrateVM, which is the "VM" compiled into any program AOT compiled with the GraalVM native image tool:<p><a href="https://github.com/oracle/graal/tree/master/substratevm/src/com.oracle.svm.core.genscavenge/src/com/oracle/svm/core/genscavenge" rel="nofollow">https://github.com/oracle/graal/tree/master/substratevm/src/...</a><p>If you flick through it you'll see various annotations and types you wouldn't normally see in Java, like `Pointer` and `@Uninterruptible`. These are recognized by the compiler and affect how the machine code is generated. The language is the same, so all existing tools continue to work - it's not a dialect or subset of Java, it's the full thing, just with slightly modified rules for how the final generated code behaves.<p>SubstrateVM has one more trick up its sleeve to break the circularity: some objects can be initialized and persisted to an "image heap" at build time. In other words, the GC code can use Java classes to structure itself, despite being unable to allocate.<p>And that's all it needs.
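<p>To make the 'magic methods' in step 3 concrete, here's a minimal sketch of a bump allocator written in that style. `Pointer`, `WordFactory` and `@Uninterruptible` are real SubstrateVM/Graal concepts, but the allocator itself is invented for illustration and isn't SubstrateVM's actual code:<p><pre><code>import org.graalvm.word.Pointer;
import org.graalvm.word.WordFactory;
// @Uninterruptible's package has moved between Graal releases:
import com.oracle.svm.core.annotate.Uninterruptible;

final class BumpAllocator {
    // Word-typed fields hold raw addresses; the compiler lowers reads
    // and writes through them to plain loads and stores, with no object
    // headers and no GC barriers.
    private static Pointer top = WordFactory.nullPointer();
    private static Pointer end = WordFactory.nullPointer();

    // No GC safe points are emitted inside this method, so a collection
    // can never observe the heap in a half-updated state.
    @Uninterruptible(reason = "Manipulates raw allocation pointers.")
    static Pointer allocate(int size) {
        Pointer result = top;
        Pointer newTop = top.add(size);
        if (newTop.aboveThan(end)) {
            return WordFactory.nullPointer(); // out of space: caller runs a GC
        }
        top = newTop;
        return result;
    }
}
</code></pre>To a plain Java toolchain this is just an ordinary class, which is why existing tools keep working; only the native image compiler treats it specially.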
<p>There have been efforts to do things like this in the past for full operating systems. They have nice properties: for example you can sandbox drivers, IPC overhead goes away because it's all in a single address space, capabilities actually work and are pervasive, and it's quite easy to keep ABIs stable. There are usually a few sticking points that prevent them taking off:<p>1. Historically, GCs have been good at latency <i>or</i> throughput, but not both simultaneously. Some of them also had trouble with large heaps. That's a problem because virtually all reasonable computing requires a mix of programs with wildly different sensitivities: most obviously, developers want latency prioritized for editing in their IDE but throughput prioritized for their build system. If you have one GCd heap for the entire computer then you have two options:<p>1a. Pick one algorithm to manage it.<p>1b. Do what Singularity did and come up with a quasi-process notion in which each unit has its own GCd heap. Singularity had a nice concept called an 'exchange heap' which allowed objects to be passed between these quasi-processes very fast and cheaply, whilst ensuring that an object graph could only be pointed to by one unit at a time (there's a sketch of the idea after this list). This made IPC tremendously cheap, allowed one unit to be paused for GC whilst other units ran, and let them use IPC all over the place. However, it did reduce the benefits of using managed languages somewhat, as it reintroduced complex data ownership rules.<p>NB: This is changing now with tech like HotSpot's ZGC and Shenandoah (which are written in C++, though). They drive latencies through the floor (they're basically pauseless), and the new, not yet released, fully generational variants have very good throughput too. Also, G1 GC has monstrous throughput even with low pause times; they just aren't quite as low as ZGC's/Shenandoah's.<p>2. Overheads of managed languages are higher. Singularity's successor at MS Research was codenamed Midori, and not much was ever published about it publicly, but from what was written (by Joe Duffy) it seemed apparent that they went down a rabbit hole of trying to make Midori match the raw efficiency and overhead of C++-based Windows. They got a long way, but ended up without enough interesting new features to justify the investment, and the project was eventually canned.<p>3. All the same problems new operating systems always have: no apps, no drivers, etc.<p>4. Spectre attacks make single-address-space operating systems more complex. However, the new support for memory protection keys (MPKs) in Intel CPUs could re-awaken this area of OS research, because MPKs let you block speculation attacks within a single address space.
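<p>To illustrate the exchange heap idea from 1b: Singularity enforced single ownership of a message graph statically, in Sing#, which plain Java can't express. The toy sketch below (all names invented for illustration) approximates it with a runtime hand-off instead, but the key property is the same: the payload is transferred rather than copied, and at most one unit can reach it at a time.<p><pre><code>import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Toy stand-in for an exchange heap message: a payload that can be
// claimed by exactly one owner at a time.
final class Envelope {
    private final AtomicBoolean claimed = new AtomicBoolean(false);
    private final byte[] payload;

    Envelope(byte[] payload) { this.payload = payload; }

    // Claiming transfers ownership. By convention the sender stops
    // using its reference after send(); a second claim fails loudly.
    byte[] claim() {
        if (!claimed.compareAndSet(false, true))
            throw new IllegalStateException("payload already claimed");
        return payload;
    }
}

final class Channel {
    private final SynchronousQueue<Envelope> queue = new SynchronousQueue<>();

    // Hands the envelope to the receiver without copying the payload:
    // this is what made Singularity's IPC so cheap.
    void send(byte[] payload) throws InterruptedException {
        queue.put(new Envelope(payload));
    }

    byte[] receive() throws InterruptedException {
        return queue.take().claim();
    }
}
</code></pre>In real Singularity the compiler proved at compile time that the sender never touched the graph again, which is exactly the 'complex data ownership rules' trade-off mentioned above.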