I assume this is related to this story from yesterday, which is about revoking Secure Boot keys:
<a href="https://news.ycombinator.com/item?id=23990075" rel="nofollow">https://news.ycombinator.com/item?id=23990075</a>
There was a story on HN last night from Debian where they laid out this issue, and basically stated "Yes, this security update is going to render some systems unbootable, here is why we're doing it anyway."<p><a href="https://www.debian.org/security/2020-GRUB-UEFI-SecureBoot/" rel="nofollow">https://www.debian.org/security/2020-GRUB-UEFI-SecureBoot/</a><p>Stability is important, especially when it comes to unbootable machines—but I don't quite know what anyone was supposed to do here. If a user has secure boot enabled, the OS has to assume that the user wants/needs security at that level of the chain—and it is therefore responsible for ensuring the chain's integrity. In this case, there was no way to do that without some machines (temporarily) failing to boot.<p>What would have been a better way to handle this?
I will state the same comment as last time.<p>Can distros maybe consider moving to systemd-boot at some point? Systemd is already built in and can handle things like mounting pretty easily and simply.<p>It is a hell of a lot leaner than grub and doesn't pull in a billion superfluous modules. It is also a lot easier to protect against tampering, compared with the cumbersome nonsense that is grub passwords.<p>Oh, and it lets distros gather accurate boot times and reboot into the UEFI firmware setup directly from the desktop.<p>It works with Secure Boot/shim/HashTool, and each distro keeps its bootloader entries in a separate folder to avoid accidental conflicts.
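For reference, a boot entry under systemd-boot is just a small text file on the EFI system partition, typically under /boot/loader/entries/. A rough sketch of one (the file name, title, kernel paths, and UUID are made-up placeholders, not from any particular distro):<p><pre><code># /boot/loader/entries/mydistro.conf -- one small file per bootable entry
title   My Distro
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=UUID=0123-PLACEHOLDER rw
</code></pre>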
If you are wondering why many people <i>hate</i> updating working systems, no matter what the security implications, look no further than this.<p>Time and again, an innocent security update ends up causing a reinstall, bloatware showing up on the system, loss of critical functionality, data loss, and wasted time.<p>Updates should be restricted to the absolute minimum and tested to the point that deploying them does not put customer data at risk.
The most stable systems I ever had the pleasure of managing were two identical RHEL 6 clusters in different geographical locations, for higher availability and fault tolerance. Those systems were installed, turned on, and never touched again. Kernel 2.6.32, with about six to seven years of uptime up to mid-2019. We did a lot of work on those systems: mounting and unmounting iSCSI devices, starting and stopping services, turning network interfaces and the clustered filesystem on and off (thanks to Veritas cluster manager).<p>The key move was never updating.<p>Those systems were literally mission critical; without that cluster the whole company was unable to produce its main products.<p>Considering how much stuff they ran and how many simultaneous users were connected, I was humbled by their stability (and by RHEL's stability).<p>If you're getting angry at this post: the customer was not in the IT field and was completely okay with buying new hardware and doing a full reinstall every X years.
The link is to a Red Hat bug report, but the issue also affects other distros: <a href="https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1889509" rel="nofollow">https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1889509</a>
This is why I dislike grub. It's really, really bloated. A bootloader just needs to pick the partition to boot from, and little else.
I switched to gummiboot ages ago, and it's so simple. There's far less to go wrong. (gummiboot got absorbed into systemd, so it's now called systemd-boot.)
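If anyone wants to try it, switching is mostly just installing it onto the EFI system partition and adding an entry or two. A minimal sketch, assuming systemd is already present and the ESP is mounted at /boot (adjust for your distro):<p><pre><code># install the systemd-boot EFI binary onto the ESP and register it with the firmware
bootctl install
# show what was installed and which boot entries it can see
bootctl status
</code></pre>Note that this bare setup doesn't cover the shim/Secure Boot signing side that distro packages normally handle.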
After reading this I decided to downgrade my Ubuntu machine for now until it's figured out. There are instructions here: <a href="https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/GRUB2SecureBootBypass" rel="nofollow">https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/GRUB2Secu...</a> under the heading "DOWNGRADE `GRUB2`/`GRUB2-SIGNED` TO THE PREVIOUS VERSION FOR RECOVERY"<p>Under the heading is a small shell script that will download the old debs for you. Note that for it to work without wget spamming 404s, you have to replace the entire GRUB2_LP_URL and GRUB2_SIGNED_LP_URL values with the links from the little table; at first glance it looks like you only have to change GRUB2_VERSION and GRUB2_SIGNED_VERSION, but that isn't enough.
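In other words, the edit to the script looks roughly like this; all four values come from the table on that wiki page, and the values below are placeholders, not text copied from the script:<p><pre><code># all four variables need updating, not just the two version numbers
GRUB2_VERSION="previous-grub2-version-from-the-table"
GRUB2_SIGNED_VERSION="previous-grub2-signed-version-from-the-table"
GRUB2_LP_URL="full-launchpad-link-for-grub2-from-the-table"
GRUB2_SIGNED_LP_URL="full-launchpad-link-for-grub2-signed-from-the-table"
</code></pre>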
Grub2 is not a bootloader. I'm not even really sure what it is. In Grub 1 you had a configuration file listing the operating systems you wanted to boot. Simple, effective. With grub2, all I see is a bunch of shell scripts that are impossible to write by hand.
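For comparison, a typical Grub 1 (GRUB Legacy) menu.lst entry was roughly this, where the disk, partition, and kernel version are placeholders from memory, not any particular system:<p><pre><code>title  Linux
root   (hd0,0)
kernel /boot/vmlinuz-2.6.32 root=/dev/sda1 ro quiet
initrd /boot/initrd-2.6.32.img
</code></pre>With grub2, the equivalent grub.cfg is generated by grub-mkconfig from the scripts in /etc/grub.d/, and the generated file itself warns you not to edit it by hand.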
On this subject: a few months ago I had Ubuntu servers suddenly failing due to a bizarre automatic snap update that took most of our Docker containers down. Has anyone else experienced this with production systems?
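For reference, this is roughly how to see what auto-refreshed and to defer the next refresh; the timestamp is a placeholder, and as far as I know snapd caps how far ahead the hold can be set:<p><pre><code># list recent snapd activity, including automatic refreshes
snap changes
# show the refresh schedule (timer, last run, next run)
snap refresh --time
# hold automatic refreshes until a given RFC3339 timestamp (placeholder date)
sudo snap set system refresh.hold="2020-10-01T00:00:00Z"
</code></pre>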
Well, a system that can't be booted is totally secure against many kinds of remote & local attacks. So, "mission accomplished", I guess?