> OS Application Binary Interface (ABI) release inter-compatibility is the cancer killing the modern operating system.<p>I think that's only true from the perspective of an OS developer:<p>ABI inter-compatibility (e.g. the Windows and Linux model) prioritizes <i>customer</i> experience. Customers hate it when their applications stop working, and application developers don't want to spend lots of effort tracking platform API changes just to avoid breakage.<p>Abandoning ABI inter-compatibility (OpenBSD, Apple) prioritizes <i>platform developer</i> experience. They want to be able to make API changes freely and don't want to spend time maintaining old APIs when they could be working on new ones.<p>I think the problem with the latter is that, while there may be a few hundred developers working on a particular OS, there are orders of magnitude more customers and application developers. Totally abandoning ABI inter-compatibility seems like putting the interests of the very few over the interests of the very many.
It seems like the focus on developer ergonomics over actual users has been monotonically increasing since I started my career. It may be great for the OS developers that they don't have to care about back compat, but it is terrible for the users. Your API may be beautiful, but my software no longer works, so the system is useless.<p>In the case of Linux, everyone is now shipping entire userlands with their applications via Docker just to work around compatibility issues. We'd be shipping entire VMs if the kernel weren't the only one holding the line on compatibility.<p>It's been a long time now since I saw a programming post talking about how some new paradigm or way of doing things would make life great for the users.
One of the most horrible things about iOS is that it breaks your apps every year.<p>This is a terrible experience for customers (since their apps break every year) and for developers (since they have an ongoing maintenance burden dumped on them by Apple just to keep their apps working across yearly iOS updates).<p>The main beneficiary of abandoning ABI compatibility (as Apple has done) is the platform developer (e.g. Apple), who avoids the maintenance burden of backward compatibility.<p>It's arguably the wrong approach because it helps the platform developer (Apple) at the expense of existing customers and developers. There is a multiplicative burden of pain: each time Apple breaks something, millions of customers and thousands of developers pay an immediate price.<p>There is a long-term user benefit to platform evolution, but the short-term cost is relentless and ongoing.<p>For game developers in particular, the stability and backward compatibility of Microsoft/Sony/Nintendo platforms is a dream compared to the quicksand of iOS development.
Breaking ABIs freely is a decision OpenBSD made that's been helpful in many ways, but the lesson we should learn from them is to engineer layers of failure mitigation into all our systems. Software bugs are unknown unknowns.
Selecting a BSD comes with an implied social contract regarding its mutability across versions. If you go into OpenBSD believing code from n-3 runs on version n+1, you misunderstood the social contract. FreeBSD or NetBSD or DragonflyBSD might have a different social contract.<p>Selecting OSX used to imply much more of an attempt to handle this: maybe n-3 is outside the goal, but n-1 and n+1 usually kinda work. Except when things like "we don't want 32-bit any more" hit, after 2 or more years of heads-up. Turns out vendors don't want to incur that cost. Stuff which people want and "depend on", like kexts, stops working.<p>Consider how python2 dependencies are going in a world of Python3, and that's userspace, not ABI. It's not the OS, but... it's similar.
> Otherwise various efforts making use of containers, lightweight virtualization, and binary wrappers for the purposes of introducing new options to companies allowing them reasonable backward compatibility for the various applications that have become entrenched in their organizations will be the only way to break away from the stagnation of the current paradigm of enterprise operating system development.<p>That was essentially what MS did with "Windows on Windows" that brought 16-bit applications over to Win32. And Apple with Rosetta, the blue box, etc. These were hugely expensive because they had to track down all the unwritten interfaces applications use.<p>If Linux standardizes virtualization for enterprise support, applications should run in it all the time, so it's impossible for them to access any private interfaces.<p>And it's sustainable because when enterprises find they're stuck with these closed source applications, they'll have a direct interest in supporting maintenance of the older virtualization.
> Companies who make such investments often view the money they've paid for the development of this software in a similar manner to how they would view the investment into any other asset - which is to say that the expectation is that it will continue to function for years.<p>“Any other asset” is not informative. When my company buys me a laptop, the assumption is that it will continue to function for three years. When they buy me a chair, seven. When they buy a building, thirty.<p>That’s an order of magnitude difference in depreciation schedules. The two problems I see here are:<p>1) Nobody in the accounting department had any clue how to do this in the 1980s and 1990s. So their cost projections were badly inaccurate, and they didn’t have realistic depreciation schedules.<p>2) The contracting firms are not incentivized to do maintenance and don’t even know how to do it in the first place.<p>> As nice as backward compatibility is from a user convenience perspective when this feature comes as a result of a static kernel Application Binary Interface this trade-off is essentially indistinguishable from increasing time-preference (or in other words declining concern for the future in comparison to the present).<p>This absolutely <i>is</i> distinguishable. Backwards compatibility is a complex tradeoff no matter who you are (OS developer, app developer, end user, etc.). It’s as complex as opex vs. capex (and probably more similar to that tradeoff).
This makes no sense. The bulk of the ABI compatibility is not in the kernel, and Linus's mantra of "not breaking userspace" hardly applies to applications from the Linux Foundation's highest-paying members. The bulk of the ABI for Linux applications comes from libc and other libraries.<p>The one case where breaking the ABI would make things so much easier is y2038, but it only applies to 32-bit systems - again, nothing that matters to the Oracles and SAPs.
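For anyone who hasn't run into y2038: on an ABI where time_t is a signed 32-bit integer, the seconds-since-epoch counter overflows in January 2038. A minimal C sketch - it simulates the 32-bit wraparound explicitly, so it runs on any host:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        int32_t last = INT32_MAX;                       /* 2038-01-19 03:14:07 UTC */
        int32_t next = (int32_t)((uint32_t)last + 1u);  /* wraps to INT32_MIN */

        time_t t = (time_t)last;
        printf("last 32-bit second: %s", asctime(gmtime(&t)));
        t = (time_t)next;                               /* lands in December 1901 */
        printf("after the overflow: %s", asctime(gmtime(&t)));
        return 0;
    }

Fixing it on 32-bit glibc means widening time_t (e.g. building with _TIME_BITS=64 on glibc 2.34+), which is exactly the kind of ABI break the article is arguing about.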
> Linus Torvalds continues receiving his Linux Foundation salary paid for by the massive cheques it's member organizations cut him in exchange for influence over the kernel's development.<p>The author seems focused on that aspect as <i>the</i> reason Linus is against ABI changes. But in fact this has been his stance for years, as he's user-centric: people expect things to continue to work when they upgrade the kernel, so if you have to break their experience, you really need a very good reason. It's not like he started thinking this way when he became an employee of the LF.
The writer mentions the corporate user, but then never mentions them again.<p>I use a Linux desktop. If I want old versions of open source stuff, Wine running the Windows binary is where it's at.<p>We now have a complete, futureproof free software stack with decades of backwards compatibility! It just has win32 in the middle.<p>This article makes the case for the advancement of OS development, but not for what people use computers for.
I think the stability of user space interfaces is simply good engineering. Linux can run binaries compiled way back in the 90s. Because of this discipline, people trust Linux as a platform. People generally have no problems updating their kernels, and it's safe to assume there will be no problems. This isn't the case in user space: many projects have no problem with breaking compatibility and forcing dependent packages to be updated as well.<p>The author claims Linus Torvalds enforces Linux binary interface stability because the Linux Foundation members that pay his salary want it. Is this really true? If that were the case, I'd expect the internal kernel interfaces to be stable as well. They are unstable, and he actively fights to keep them unstable even though the companies would very much enjoy having stable driver interfaces.<p><a href="https://yarchive.net/comp/linux/gcc_vs_kernel_stability.html" rel="nofollow">https://yarchive.net/comp/linux/gcc_vs_kernel_stability.html</a><p>> Stuff outside the kernel is almost always either (a) experimental stuff that just isn't ready to be merged or (b) tries to avoid the GPL.<p>> Neither is worth a _second_ of anybodys time trying to support, and when you say "people spend lots of money supporting you", you're lying through your teeth. The GPL-avoiding kind of people don't spend a dime supporting me, they spend their money actively trying to debase and destroy what I and thousands of others have been working our butts off for.<p>> So don't try to make it sound like something it isn't. We support outside projects a hell of a lot better than we'd need to, and I can tell you that it's mostly _me_ who does that. Most of the core kernel developers argue that I should support less of it - and yes, they are backed up by lawyers at their (sometimes quite big) companies.
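That stability extends all the way down to raw syscall numbers, which are effectively append-only. As a sketch (x86-64 Linux), a binary can bypass libc entirely and still expect the same syscall to keep working across decades of kernel upgrades:

    #include <sys/syscall.h>
    #include <unistd.h>

    /* write(2) has been syscall 1 on x86-64 since the port existed;
       the kernel never renumbers or removes it. */
    int main(void) {
        static const char msg[] = "hello from a raw syscall\n";
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }

That's the guarantee the kernel gives. Everything above it - libc symbol versions, libstdc++, GTK, etc. - is where old binaries actually tend to break.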
I will say, Windows 95 was pretty great, I identify with the Microsoft customer in the hero image. I'm gathering notes to write a GUI toolkit which only makes well-formed Windows 95-style UIs.
I authored this document on the Linux Device Driver model 11 years ago and amazingly it still represents the current policy: <a href="https://www.linuxfoundation.org/events/2008/06/the-linux-driver-model-a-better-way-to-support-devices/" rel="nofollow">https://www.linuxfoundation.org/events/2008/06/the-linux-dri...</a><p>Specifically, the Linux kernel maintainers, not the Linux Foundation, determine the policy that the user space ABI remains stable while the device driver API is unstable.<p>Disclosure: I work for the Linux Foundation, and I know that if we told the kernel maintainers to change their policy they would laugh at us.
This is all lovely as a matter of the platonic ideal of an operating system. But... the users have spoken. They don’t want their software to break.<p>Worse is better, and Microsoft got this one right.
The way I see it, VMs already encapsulate this. App --ABI--> VM'd Kernel -> Hypervisor API.<p>But we can do this much more efficiently. IIRC, Prior variants of this were called "personalities". I think the term's been reused now.<p>I think we could have the program loader consume the loaded program and act as an API proxy between it and the actual kernel.
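You can prototype a crude version of that proxy today at the libc layer with LD_PRELOAD interposition - no loader changes needed. A minimal sketch; the O_LEGACY_SYNC flag is made up to stand in for whatever the old ABI spelled differently:

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <sys/types.h>

    #define O_LEGACY_SYNC 0x100000  /* hypothetical flag from the "old" ABI */

    /* Interpose open(): translate the legacy flag into the current one,
       then forward the call to the real libc implementation. */
    int open(const char *path, int flags, ...) {
        mode_t mode = 0;
        if (flags & O_CREAT) {
            va_list ap;
            va_start(ap, flags);
            mode = (mode_t)va_arg(ap, int);
            va_end(ap);
        }
        if (flags & O_LEGACY_SYNC)
            flags = (flags & ~O_LEGACY_SYNC) | O_SYNC;

        int (*real_open)(const char *, int, ...) =
            (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
        return real_open(path, flags, mode);
    }

Build it as a shared object (gcc -shared -fPIC shim.c -o shim.so -ldl) and run the old binary with LD_PRELOAD=./shim.so. A real "personality" would have to cover the whole API surface, which is why doing it once, in the loader, is the appealing part.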
This article helped me understand a lot; I knew development on iOS required constant updates, but now I know <i>why</i>. Thank you.<p>BTW, there are several misspellings of "its" in your article. Search for "it's", because most of them should be changed to "its".
No thanks. I have had the terrible experience of being forced to upgrade software purely because a newer version of macOS does not support the old version of my music software. I am looking at going completely hardware now for music production so I don't have to deal with the unnecessary upgrade treadmill that is entrenched in computer culture.<p>EDIT: forced into paying for an upgrade
Really interesting. I didn't know much about OpenBSD before, nor did I know that Windows/Linux maintain ABI compatibility indefinitely, although it makes sense.<p>It's also interesting to consider the web as an application platform in this context. It too has an append-only API that places high importance on indefinite backwards-compatibility. However, because that API is <i>dynamic</i>, not binary, the underlying implementation has much more room to maneuver and re-structure without breaking it.
> the project enforces a hard ceiling on the number of lines of code that can ever be in ring 0 at a given time<p>I tried googling to find what this limit is and where it's mentioned. Could anyone help me out with a link? What is the limit?
Maybe the answer is a rolling window of stability for OS APIs--something like 10 years (Windows 10 having Windows 95 compatibility mode is a bit absurd). On the other hand, if you have a large library of test software, maintaining API bridges might be doable, and for software more than 5 years old, performance on modern hardware shouldn't be a major concern.
And it's time to innovate in the languages used for writing OS kernels and core services as well. All mainstream kernels are stuck with C/C++. Something newer and cleaner - Rust, D, you name it - something that likewise isn't afraid to deprecate legacy, and that offers many important new features for OS developers.