I really like this quote from the manual:<p><<There are a class of "ideal attractors" in engineering, concepts like "everything is an object," "homoiconicity," "purely functional," "pure capability system," etc. Engineers fall into orbit around these ideas quite easily. Systems that follow these principles often get useful properties out of the deal.<p>However, going too far in any of these directions is also a great way to find a deep reservoir of unsolved problems, which is part of why these are popular directions in academia.<p>In the interest of shipping, we are consciously steering around unsolved problems, even when it means we lose some attractive features.>>
We are getting an increasing number of interesting Rust operating systems for different uses.<p>- Hubris for deep embedded<p>- Redox OS for desktop/server (<a href="https://www.redox-os.org/" rel="nofollow">https://www.redox-os.org/</a>)<p>- Tock for embedded (<a href="https://www.tockos.org/" rel="nofollow">https://www.tockos.org/</a>)<p>- Xous for trusted devices (<a href="https://xobs.io/announcing-xous-the-betrusted-operating-system/" rel="nofollow">https://xobs.io/announcing-xous-the-betrusted-operating-syst...</a>)<p>I assume there are more.
I have an embedded real-time control project that is currently written in Rust, but runs with RTIC (<a href="https://rtic.rs/" rel="nofollow">https://rtic.rs/</a>), a framework which is conceptually similar (no dynamic allocation of tasks or resources) but also has some differences. RTIC is more of a framework for locks and critical sections in an interrupt-based program than a full-fledged RTOS. Looking through the docs, here are the main differences (for my purposes) that I see:<p>1. In Hubris, all interrupt handlers dispatch to a software task. In RTIC, you can dispatch to a software task, but you can also run the code directly in the interrupt handler. RTIC relies on the Cortex-M NVIC for preemption, whereas Hubris can preempt in software (assuming that is implemented). This does increase the minimum effective interrupt latency in Hubris, and, if not very carefully implemented, the jitter as well.<p>2. Hubris compiles each task separately and then pastes the binaries together, presumably with a fancy linker script. RTIC can have everything in one source file and builds everything into one LTO'd blob. I see the Hubris method as mostly a downside (unless you want to integrate binary blobs, for example), but it might have been needed for:<p>3. Hubris supports Cortex-M memory protection regions. This is pretty neat and something that is mostly out of scope for RTIC (being built around primitives that allow shared memory, trying to map into the very limited number of MPU regions would be difficult at best). Of course, it's Rust, so in theory you wouldn't need the MPU protections, but if you have to run any sort of untrusted code, this is definitely the winner.<p>Hubris does support shared memory via leases, but I'm not sure how it manages to map them into the Cortex-M's very limited 8 MPU regions. I'm quite interested to look at the implementation when the source code is released.<p>Edit: I forgot to mention the biggest difference, which is that because tasks have separate stacks in Hubris, you can do blocking waits. RTIC may support async in the future, but for now you must manually construct state machines.
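To make difference 1 concrete, here is a minimal RTIC sketch of the hardware-task vs. software-task split. This is my own illustration, not code from either project; it assumes RTIC 1.x and the stm32f4 PAC crate, and the task names and EXTI lines are arbitrary.<p><pre><code>// A hardware task runs directly in the EXTI0 interrupt handler; the
// software task is dispatched through the donated EXTI1 interrupt.
#![no_std]
#![no_main]
use panic_halt as _;

#[rtic::app(device = stm32f4::stm32f407, dispatchers = [EXTI1])]
mod app {
    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_cx: init::Context) -> (Shared, Local, init::Monotonics) {
        (Shared {}, Local {}, init::Monotonics())
    }

    // Hardware task: runs in the interrupt handler itself, so its
    // latency and preemption are governed by NVIC priorities.
    #[task(binds = EXTI0, priority = 2)]
    fn on_exti0(_cx: on_exti0::Context) {
        // Do the latency-critical work here, then hand the rest off.
        handle_sample::spawn().ok();
    }

    // Software task: deferred work, scheduled via the dispatcher
    // interrupt rather than by a kernel scheduler as in Hubris.
    #[task(priority = 1)]
    fn handle_sample(_cx: handle_sample::Context) {
        // Non-critical processing happens here.
    }
}</code></pre><p>In Hubris, by contrast, both halves would be ordinary tasks with their own stacks, and the handoff would go through the kernel.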
> instead of having an operating system that knows how to dynamically create tasks at run-time (itself a hallmark of multiprogrammed, general purpose systems), Cliff had designed Hubris to fully specify the tasks for a particular application at build time, with the build system then combining the kernel with the selected tasks to yield a single (attestable!) image.<p>I worked briefly at John Deere, and their home-grown operating system (called "JDOS", written in C) also baked every application into the system at compile time. This was my only embedded experience, but I assumed this was somewhat common for embedded operating systems?
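For anyone who hasn't worked in this style: the practical upshot is usually a static application manifest that the build system consumes. Here is a purely hypothetical sketch of what such a build-time task list might look like (the field names are my invention, not Hubris's or JDOS's actual format), just to show what "tasks fixed at build time" means in practice:<p><pre><code># Hypothetical manifest: every task, its priority, and its memory
# budget are fixed before the image is ever built, so nothing can be
# spawned at run-time.
name = "my-app"
target = "thumbv7em-none-eabihf"

[tasks.supervisor]
path = "../task/supervisor"
priority = 0
requires = { flash = 8192, ram = 2048 }
start = true

[tasks.uart_driver]
path = "../drv/uart"
priority = 1
requires = { flash = 16384, ram = 4096 }
start = true</code></pre>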
So this is what Cantrill has been talking about.<p><a href="https://www.youtube.com/watch?v=XbBzSSvT_P0" rel="nofollow">https://www.youtube.com/watch?v=XbBzSSvT_P0</a><p><a href="https://www.youtube.com/watch?v=cuvp-e4ztC0" rel="nofollow">https://www.youtube.com/watch?v=cuvp-e4ztC0</a>
If anyone else wondered about the term BMC: <a href="https://www.servethehome.com/explaining-the-baseboard-management-controller-or-bmc-in-servers/" rel="nofollow">https://www.servethehome.com/explaining-the-baseboard-manage...</a>
I'd like to hear more about Oxide's development process. Was this designed on an index card, and then implemented? Or was it done with piles and piles of diagrams and documents before the first code was committed? Was it treated as a cool, out-there idea that's worth exploring, and then it gradually looked better and better?<p>It's hard to get software organizations to do ambitious things like this, and it's impressive that this was done on a relatively short timescale. I think the industry could learn a lot from how this was managed.
Interesting choice of names: Hubris and Humility. Combined with the style of the page, it gives me a solemn, weighty feeling, especially compared to most projects, which tend to be presented with very "positive energy and emojis". Their website is also beautiful: <a href="https://oxide.computer/" rel="nofollow">https://oxide.computer/</a>. Though I wonder who the target for this is. Is it for cloud providers themselves, for people who self-host, for hosters? For everyone?
Has Oxide released any information on the price range of one of their machines? I assume if they're targeting mid-size enterprises it would be outside what I would consider buying for hobby use, but it would be sweet in the future if there was a mini-Oxide suitable for home labs.
The supervisor model reminds me a bit of how BEAM (Erlang/Elixir) works, although I suspect that's where the similarities end.<p>As much as most of this is way over my head, I'm always fascinated to read about new ground-up work like this.
> no C code in the system. This removes, by construction, a lot of the attack surface normally present in similar systems.<p>Not to be too pedantic here, but it's important to note that the absence of C code, while arguably a benefit overall, doesn't by itself guarantee anything with regard to safety/security. I suppose there will necessarily be at least some "unsafe" Rust and/or raw assembly sprinkled throughout, but I can't check that myself yet (as of the time of writing this comment, the GitHub links are responding with 404). Nonetheless, it's always refreshing to see good documentation and source code provided for these kinds of things. Many companies in this space, even these days, sadly continue to live by the outdated value of "security through obscurity", which is somehow championed (though in different words) as a benefit even to their own customers. So it's encouraging that others, Oxide among them, are taking a different approach and making their software/firmware publicly available for inspection by anyone inclined to do so.
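For a sense of what that residual unsafety looks like in practice, here is a generic example (purely illustrative, not from Hubris; the address and bit layout are made up) of the kind of memory-mapped I/O that any Rust firmware has to do somewhere:<p><pre><code>// Hypothetical MMIO write: Rust's guarantees stop where a pointer
// must be conjured from a raw hardware address.
const GPIO_ODR: *mut u32 = 0x4002_0014 as *mut u32;

fn set_pin_high(pin: u8) {
    // `unsafe` because the compiler cannot check that this address is
    // valid, mapped MMIO, or that we hold exclusive access to it.
    unsafe {
        let odr = core::ptr::read_volatile(GPIO_ODR);
        core::ptr::write_volatile(GPIO_ODR, odr | (1u32 << pin));
    }
}</code></pre><p>So the claim is really that the unsafe surface is small and auditable, not that it is absent.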
As someone who's only worked with a prepared hardware kit (a dsPIC33F on an Explorer 16 that came with cables and the debugging puck), if I want to pick up the board they recommend in the blog post, do I need to make sure I get any other peripherals?<p>This all seems very cool, and I badly want to poke at embedded stuff again, but I have whatever the opposite of a green thumb is for hardware. Advice would be appreciated ^_^
How are these docs being built? I really like how they look, and they appear to be AsciiDoc-based, but I can't seem to find a build script for them.
I think the reference provides more info than the announcement itself:<p><a href="https://hubris.oxide.computer/reference" rel="nofollow">https://hubris.oxide.computer/reference</a><p>Looks amazing, IMO. Waiting for the GitHub code :D
I'm not familiar w/the details of the Cortex-Ms -- do any of them support SMT/multicore? Does Hubris have a scheduler which can support a multithreaded/core cpu?
When I started working on a recent real-time project, I used Linux, although I wanted to do bare metal. That wasn't an option because of all the drivers necessary, and I knew I wanted to use the GPU and the Cortex-A processor I'm using. I'm still wondering if there's really no solution to this situation.
Hey folks! The 404s are because we were planning on actually publishing this a bit later today, but it seems like folks noticed the CNAME entry. Happy to talk about it more, though obviously it'll be easier to see details once things are fully open.<p>EDIT: blog post is up: <a href="https://oxide.computer/blog/hubris-and-humility" rel="nofollow">https://oxide.computer/blog/hubris-and-humility</a> and the GitHub should be open.<p>EDIT 2: The HN story now points to this blog post, thanks mods!
Oxide’s work is always interesting and basically a perfect confluence of my combined hardware and software experience to date.<p>However, I can’t quite get over their policy of paying everyone the same salary of $175,000. ( <a href="https://news.ycombinator.com/item?id=26348836" rel="nofollow">https://news.ycombinator.com/item?id=26348836</a> ) I’d love to apply and work on these things, but I wouldn’t love the idea of sacrificing $xxx,000 per year for the privilege of building someone else’s startup.<p>Does anyone know if they have some variability in equity compensation at least? I’m no stranger to taking a significant share of compensation in startup equity, but it would have to be substantial enough to make up for the comp reduction relative to just about every other employer in these domains.