Linus' and Alan's quotes aren't incompatible; I think they're both true. Yes, massively parallel trial and error works wonders, but if you favour the first solutions that work, you'll often miss the best ones. Effects such as being first to market, backward compatibility, or network effects often trump intrinsic quality by a wide margin. (Hence x86's dominance on the desktop.)

Yes, Worse is better than Dead. But the Right Thing dies precisely because Worse is Better eats its lunch. And even when Worse eventually becomes Better, that's only because it has more resources with which to correct itself. Which is wasteful.

The only solution I can think of comes from the STEPS project (http://vpri.org): extremely late binding. That is, postpone decisions as long as you can. Then, when you uncover your early mistakes, you stand a chance of correcting them, and of deploying the corrections.

Taking Wintel as an example, that could be done by abstracting away the hardware: require programs to be shipped as some high-level bytecode, which your OS can then compile, JIT, or interpret, depending on the best current solution (a toy sketch follows at the end of this comment). That makes your programs dependent on the OS, not on the hardware; port your OS's compilation stack to a new architecture, and you're done. Had this been done, Intel wouldn't have sunk so many resources into its x86 architecture. It could at least have stripped the CISC compatibility layer from its underlying RISC design.

But of course, relying on programmers to hand-craft low-level assembly would (and did) let you ship faster systems, sooner.
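
To make the bytecode idea concrete, here is a minimal sketch in C, with an invented five-opcode instruction set (the opcodes and layout are mine, not from any real system): the "shipped program" is just a byte array, and the host side decides how to run it. Here it's a plain interpreter; a real platform could swap in a compiler or JIT later without touching the shipped program, which is the late-binding point.

    /* Toy sketch: the program ships as portable bytecode; the host picks
     * the execution strategy. Opcodes are invented for illustration. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    /* "Shipped program": compute (2 + 3) * 4 and print it. */
    static const uint8_t program[] = {
        OP_PUSH, 2,
        OP_PUSH, 3,
        OP_ADD,
        OP_PUSH, 4,
        OP_MUL,
        OP_PRINT,
        OP_HALT
    };

    /* The host's current execution strategy: a simple stack interpreter.
     * Replacing this with a JIT changes nothing about the program above. */
    static void run(const uint8_t *code) {
        int64_t stack[64];
        int sp = 0;       /* stack pointer */
        size_t pc = 0;    /* program counter */

        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];                      break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];              break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];              break;
            case OP_PRINT: printf("%lld\n", (long long)stack[sp - 1]);    break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        run(program);  /* prints 20 */
        return 0;
    }

The binding of "what the program means" to "what instructions the CPU executes" happens at run time, on the user's machine, so the vendor can fix a bad code-generation decision by shipping a new runtime rather than waiting for every program to be rebuilt.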