"the machine must present a single address space which can be considered non-volatile"<p>That doesn't solve the problem, or it is at least an oversimplification. How do you ensure that you are only writing out valid states, and not some state that is temporarily invalid (i.e., one captured in the middle of some operation)?<p>I'm sure there's an answer for that, but this seems like entirely the wrong direction. That's what a DBMS is for, and/or a modern filesystem with DBMS-like features.<p>I think that DBMSs and the fancy features in new filesystems are underused by applications. Perhaps that could be improved through greater standardization of the way to access basic ACID guarantees. Or perhaps OSes should build these features in and make the tricky-to-get-right lower-level APIs more obscure.<p>In any case, an oversimplified notion of persistence does not improve matters. Virtual memory just seems like the wrong place to solve these problems. How do you do a "rollback" of some operation when it's a direct modification of the in-memory state? (Note that a rollback is not the same as going back in time -- a rollback surgically undoes a single operation without affecting concurrent operations). If nothing else, a program bug can do a lot more damage (sometimes subtle and not detected for a while) within its own address space than it can to data already held by a separate program (like a DBMS or the kernel's filesystem) with its own consistency guarantees.<p>I think one of the most common software engineering mistakes is to not understand the role of a DBMS in application development, or to drastically oversimplify it (usually leading to a bad reinvention of the DBMS). The reality is that most applications would be greatly simplified by using a full DBMS (typed data, etc.), and almost all applications would be greatly simplified by relying on ACID guarantees.
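To make the rollback point concrete: here's a minimal sketch of the distinction using Python's stdlib sqlite3. The table and the "transfer" are made up for illustration; the point is that a failure mid-operation never becomes a visible state, which is exactly what a raw snapshot of a mutating address space can't guarantee.

```python
import sqlite3

# Hypothetical schema: two account balances, and a transfer that
# fails partway through (after the debit, before the credit).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    with conn:  # opens a transaction; rolls back if an exception escapes
        conn.execute(
            "UPDATE accounts SET balance = balance - 80 WHERE name = 'alice'"
        )
        # Simulated crash between the two halves of the operation.
        raise RuntimeError("failure before the credit side runs")
except RuntimeError:
    pass

# The half-finished transfer was surgically undone: both balances
# are exactly as they were before the transaction began.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

If the same transfer were two in-place stores to a persistent address space, a checkpoint taken between them would faithfully preserve the invalid intermediate state.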
For the record, this guy is either going for the long troll writing these sorts of absurd nostalgic pieces, or is really this enamored with the classicist "pure" world of computing of the 70s that he's still stuck in. He's been writing these sorts of patronizing, hand-wringing pieces for years that honestly belong in comp.lang.lisp more than anywhere else. Always yearning for the good ol' days and talking about how we're in some sort of technological Dark Ages is his thing. Shame he's such a verbose critic of everything new that comes his way instead of someone who actually builds or does something.
<p><pre><code> About as interested in building — or even permitting to exist
— the cheap, legal, easily-user-programmable personal computer
as Boeing or Airbus are in the cheap, legal, and easy-to-fly
private airplane. The destruction of HyperCard alone is proof
of this.
</code></pre>
It's kinda hard to take this seriously.
The OP seems to forget that code is a living, breathing thing, and it runs in a changing environment. Versioned file systems are a good idea now because storage is cheap; they were a bad idea years ago. And Apple isn't avoiding a versioned file system because they hate you, or because of profits; they're doing it because they have to stay backwards compatible with programs that don't know how to work with a versioned file system. Fundamentals matter; agreed. But unfortunately, you can only choose fundamentals once, unless you (like, say, the Linux kernel) can rewrite all software that uses your system. So instead of whining about people choosing the wrong fundamentals while abusing colorful analogies, do some research on helping ordinary programmers write future-proof systems. Think about what design would have allowed a transition from a non-versioned to a versioned file system. And remember that Plan9 didn't win; Linux just slowly stole features from it (and is continuing to do so).
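One shape such a transition-friendly design could take, sketched as a toy in-memory store (the class and method names here are hypothetical, not any real filesystem API): every write appends a version, but the default read returns the latest one, so version-unaware programs behave exactly as they would on a non-versioned file system while new programs can address history explicitly.

```python
class VersionedStore:
    """Toy append-only store whose default reads stay compatible
    with version-unaware callers."""

    def __init__(self):
        self._files = {}  # path -> append-only list of versions

    def write(self, path, data):
        # Writes never overwrite; they append a new version.
        self._files.setdefault(path, []).append(data)

    def read(self, path, version=None):
        versions = self._files[path]
        # Old code calls read(path) and sees the latest contents,
        # exactly as on a non-versioned FS; new code may pass an
        # explicit version index to reach into history.
        return versions[-1] if version is None else versions[version]


store = VersionedStore()
store.write("/etc/motd", "first draft")
store.write("/etc/motd", "second draft")
latest = store.read("/etc/motd")        # what legacy code sees
original = store.read("/etc/motd", 0)   # opt-in history access
```

The design choice is that versioning is additive: the old API surface is unchanged, and history is only reachable through a new, optional parameter.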