The nature of the Software Crisis has changed, but the crisis is still here.

At the outset, the predictions were simply that software wouldn't get built, because it was getting too complicated and expensive. The '80s saw a lot of development around module systems, dynamic linking, and the like.

But the software being shipped at the time was still mostly either "appliance" or "back-office" software: data came in through the keyboard and left by the printer. Most memory discipline revolved around static allocation and overlays, and the microcomputers were not doing a lot of multitasking (although that did arrive with the 16-bit machines). There were more interesting things happening in big firms and universities, but there the hardware was specified around the workload, and admin staff physically monitored the shared machines. There was a lot of software that, in Unix "do one thing well" fashion, could do very little but did it reliably. And there was a lot of eye-wateringly expensive commercial software of this type: thousands of dollars for a compiler or a database.

In the '90s, software got uniformly bad for a while because the PC became so much more powerful in such a short time, and the approaches that had worked before turned into a moving target of "plan for the machine spec of 18 months from now". Wintel machines just weren't made to do the things they became capable of, and you had little way of knowing whether your crash was the application, Windows, or your drivers. Macs were no better.

Three things really crept to the forefront in this period:

1. Hardware being commoditized, but not open. To this day, Nvidia wants to control its drivers, and as long as we keep buying from them, they can dictate part of the software stack. They are hardly alone; every big player knows the game.

2. Essential complexity being addressed with accidentally complex protocols. For example, everyone uses UTF-8 now. It addresses a significant problem in a reasonably good way, but you still have a lot of systems in the wild using UTF-16 (see the small encoding example at the end of this comment). USB is designed to do everything, which gives it a high "floor price" for a system implementer compared with the classic parallel/serial mechanisms. The web browser ended up with JavaScript. And so on.

3. A drift towards financialization within software. The "dot-com" hype of the late '90s was what it was because the VCs had found a formula for getting companies to IPO with no revenue. When shrink-wrap software became a business it had lots of competitors, but by the '90s consolidation meant there was one "industry standard" per industry (Microsoft, Adobe, Autodesk, Oracle, etc.), and it became more interesting to target consumers with Internet appliances. This project of building the ultimate consumer platform describes most of the past 30 years or so, in different phases and across different facets.

If it weren't for open source, it would not be possible to host this much complexity. Kicking things down a layer to a dependency has been the way the software crisis has been handled, but a lot of the hasty or accidental standards are still standard.
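
To make the encoding point in (2) concrete, here is a minimal sketch (Python, standard library only; the string is just an illustrative placeholder) of why the legacy 16-bit encodings carry accidental complexity that UTF-8 avoids:

    s = "héllo €"

    # UTF-8 leaves plain ASCII bytes untouched, so byte-oriented tools
    # (grep, strcmp, decades of C code) keep working on the ASCII subset.
    print(s.encode("utf-8"))    # b'h\xc3\xa9llo \xe2\x82\xac'

    # UTF-16 interleaves NUL bytes into ASCII text and prepends a byte
    # order mark, so the same byte-oriented tools see "binary" data.
    # (Output shown for a typical little-endian machine.)
    print(s.encode("utf-16"))   # b'\xff\xfeh\x00\xe9\x00l\x00l\x00o\x00 \x00\xac '

    # Nor is it fixed-width: anything outside the Basic Multilingual Plane
    # takes a surrogate pair, so "16 bits per character" was never true.
    print(len("🐍".encode("utf-16-le")) // 2)   # 2 code units, 1 character

None of this makes UTF-16 unusable, which is exactly the problem: it works well enough that Windows, Java, and JavaScript are still built on it, and the complexity compounds instead of going away.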