Multiple dimensions.<p>The common theme behind people's confusion about "superior" products not winning boils down to ignoring the <i>multiple dimensions</i> of quality. A product's <i>overall</i> "superiority" is a single score that compresses the scores of the product's various attributes. If one is ignorant (or discounts the value) of the other dimensions, he will be perplexed why the supposedly "inferior" solution won.<p>E.g. I never understood why bicycles didn't beat out cars. Bicycles are obviously superior because:<p>+ narrow profile can squeeze through tight alleys or even heavily treed forests that cars can't reach<p>+ requires no fossil fuel that emits pollution<p>+ costs less than 1/100th the price of a car<p>+ easily repaired by homeowners in the garage because there are no computers<p>+ etc, etc<p>That fixation on those attributes causes the confused person to totally miss the <i>other positive attributes</i> of the car:<p>+ travels faster than 25 mph with minimal physical exertion<p>+ a typical car can carry 5 adults, which is ~1000 pounds of weight<p>+ carries an entire week's worth of groceries (~10 bags)<p>+ occupants don't get wet when it rains<p>If one doesn't understand <i>all the dimensions</i> of quality and weigh them in an objective manner that's detached from personal preferences, he'll always be confused why "superior" USENET lost to "inferior" Reddit, or why "superior" Lisp lost the popularity contest to "inferior" C++/Java/Python/Go/etc, or why Mac OS 9 lost to Windows NT.<p>Likewise, there were multidimensional factors in Betamax vs VHS, and "picture quality" was only one of them.
There is a drug used to treat a form of eye disease that causes blindness. The product, Eylea, has dominated the market despite (until recently) having no evidence that it improves vision vs the competition; it provides no significant improvement on other relevant clinical endpoints, has no major safety benefit, and is priced at a comparable or higher level than all competitors.<p>A development-stage product had shown better potential for vision improvement without major safety concerns, but when I interviewed physicians to ask whether they'd still prescribe Eylea if this "better" drug were approved, they unanimously, without hesitation, said they'd prescribe Eylea.<p>Why would doctors treat eye disease with a product that does a worse job of treating eye disease? Why would patients accept a treatment that does not optimally restore their vision? There was nothing in the clinical literature to suggest this made sense. Surely this must be a case of a large, established pharma company playing dirty, maybe bribing doctors or brainwashing patients with TV ads.<p>The reality is that Eylea requires fewer doses than the competing products. For this drug, a dose means an injection into the eye with a big needle. Further, physicians don't get paid more for giving an additional dose, so for them it's lost revenue. It turns out the incremental vision improvement does not offset the benefits of less frequent dosing.<p>Getting this insight is not a feat of data analysis or scientific brilliance; it comes from simply talking to customers. And getting this right, in this case, means winning a $4-5 billion market.
As people have said already, you need to define what criteria you are using when you say something is "superior". On HN, there seems to be a divide between engineers who measure a product by its engineering quality vs. business folk who define product quality by its market success. They are not always the same thing. The reason "inferior" products succeed is that they may have some killer feature desired by the market, or better tech support, or better sales and marketing -- in other words, business quality beats out technical shortcomings. Ideally, you'd have high quality in both areas, but that isn't the reality in most organizations.
People usually do confuse popularity with merit. They are, in fact, two completely different, and often fairly unrelated, characteristics. The keyboard that happens to have the most market share is the most popular one, not necessarily the superior one.<p>Start with the example of the top ten most popular pieces of music at any given time. Do we generally conclude that these are the most meritorious musical compositions of the moment? Or are they popular because they happen to be catchy, or just raunchy enough to play on our base desires but not enough to be censored, or because the distributor has a deal to continuously play the song on the radio?<p>QWERTY is still popular mainly because of momentum. When it came out many decades ago, it had some real utility. But clearly the core idea of slowing down typing became outdated long ago. When it comes down to it, though, learning a new keyboard is not easy.<p>Switching to a new way of doing things does not depend on having a better way. I believe it depends on some type of social networking effect, plus chance. For example, if several celebrities suddenly decided to create a Twitter campaign about the evils of QWERTY and the incredible qualities of Dvorak, and it then for some reason came standard on a hot new electronic device that happened to be trendy among teenagers, perhaps we would see a significant market switch.<p>This is one thing I have noticed about some posts on HN. People will come on and say that their startup failed, and then produce a list of rationalizations blaming various material qualities of their product or service. In most cases I believe this is an incorrect interpretation of events. They usually had a perfectly good or even superior product, which simply did not catch on. So I think that things like marketing are key to becoming popular -- but again, being popular or not doesn't prove or disprove the quality of the product.
It depends on what you mean by superior.<p>If you mean superior in a technical sense (which unfortunately seems to be all that a lot of startups and engineers focus on), then obviously there are tons of instances where 'superior' alternatives fail to catch on, simply because they lack the non-technical benefits of the older products or services.<p>Like how a new social network might be more decentralised and censorship-resilient than Facebook, but lack the community/userbase that makes Facebook valuable to begin with. Or how a CMS might be better coded than WordPress, but have a UI that people find more awkward to use (or an install process that's overly tedious/annoying for non-technical folk).<p>In that sense, a lot of superior alternatives fail to catch on simply because, for all the 'objective' improvements they make, they just don't do what people need them to actually do. A community site or social network doesn't need the best codebase possible, a long laundry list of features or a censorship-resistant setup; it needs a big enough community/userbase to get people invested in it.<p>Of course, even when all factors do line up... well, that doesn't exactly guarantee the alternative will succeed either. People aren't robots, and don't choose every action to be as rational as possible. The difference between two competitors can just as easily come down to pure luck, some perceived emotional connection, timing or anything else, not necessarily the quality of the product or the organisation behind it.
Often, because one factor never appears in the provider's calculation: training time. If a superior tool comes along that makes it necessary to retrain to do the same work, it's not superior; it's inferior until the gain is so big that the productivity loss is visible to the customer.<p>That is why even superior software needs a "legacy" user interface that allows an easy switch from the old software, while people are retrained on the new interface at the same time. This lowers the transition costs.
There are lots of examples. For example, a base-12 number system is superior to a base-10 one. Esperanto is strictly easier to learn than English, and so would make a better lingua franca.<p>What doesn't happen very often is that the market chooses a worse technology when each of us can individually benefit from picking the better one.<p>But the definition of "better" has to be right. Often things that are worse on one axis are better on another. See <a href="https://www.jwz.org/doc/worse-is-better.html" rel="nofollow">https://www.jwz.org/doc/worse-is-better.html</a> for a famous essay on the difference between what makes software good versus readily adopted.
A related thought based on my recent experience.<p>One of my teams is currently using a mesh colorization system for our game, which we developed in-house. E.g. we have a game level where each mesh is assigned some color ID, such as "primary accent" or "background1" (but not an actual RGB color); and there are color schemes which we can apply to the level according to these color IDs to get different looks.<p>The first version of the color scheme system was "clearly superior": it offered a lot more possibilities for assigning color schemes to meshes, but it needed about 80 color values for each color scheme.<p>The second version of the same system was "clearly inferior" -- just 28 color values, and much less creative control over the resulting look of the level.<p>Guess which system was adopted? Right, the second, simpler one. Despite offering fewer creative possibilities, the second system was smaller and thus could fit more easily into artists' heads, which produced better-looking results overall.<p>To sum up, "superior" is multidimensional. Dvorak can offer better typing speed, but it offers much less "habit compatibility" with existing keyboards in the real world, and thus is a worse time investment. Or, Haskell offers much better maintainability, refactorability and reliability, but dynamic languages offer faster time-to-market and easier-to-hire coders.
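For the curious, the indirection described above can be sketched in a few lines. This is a minimal, hypothetical sketch -- the names and RGB values are made up, not from the actual in-house tool:

```python
# Meshes reference abstract color IDs, never concrete RGB values.
level_meshes = {
    "wall_01": "background1",
    "door_01": "primary_accent",
    "trim_01": "background1",
}

# Each color scheme maps every color ID to a concrete RGB triple.
schemes = {
    "daylight": {"primary_accent": (220, 40, 40), "background1": (200, 200, 210)},
    "night":    {"primary_accent": (60, 60, 160), "background1": (30, 30, 45)},
}

def apply_scheme(meshes: dict, scheme: dict) -> dict:
    """Resolve each mesh's color ID to the scheme's concrete RGB value."""
    return {mesh: scheme[color_id] for mesh, color_id in meshes.items()}

print(apply_scheme(level_meshes, schemes["night"])["wall_01"])  # -> (30, 30, 45)
```

Swapping the look of the whole level is then just a matter of picking a different scheme dict; the "80 vs 28 color values" trade-off is the size of each inner dict.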
For programmers, QWERTY is actually superior to Dvorak, in my opinion -- the US variant more specifically, as most programming-language designs have been influenced by how reachable each punctuation mark is on that specific layout. And with auto-complete, the placement of the letters is not that important.<p>Actually, if we were to rethink the keyboard based on frequency of usage, it might make sense to place all the punctuation a bit closer to the home row on programmer keyboards.
When there's a new, far superior way of doing things, it <i>always</i> catches on.<p>But usually there are several different versions of this new way, with slightly different properties, trade-offs and non-product features (like marketing, support, compatibility, price etc) - which of these wins out is a crap-shoot. That's why some companies, investors and customers back different versions... and the pragmatic ones wait for a market leader to emerge.
To add a bit of insight to the conversation, beyond "it's a matter of more dimensions than one", which people have already commented on: this whole situation can also be seen from an economics perspective. Demand for a product, say a can of soda, is called "elastic" when customers will switch from one vendor to another purely based on price. If one vendor has a lower price than another, all customers, in this idealized model, will head directly to that vendor to buy their soda, leaving the more expensive one in the dust. In this situation, however, demand for the thing -- keyboard layouts -- is "inelastic". That means the factor we'd expect to be the only property of keyboards that matters to customers is not, in fact, all that matters. Another thing that affects whether they will switch is how long it will take to learn the new layout.<p>TL;DR: demand for quality in keyboard layouts is inelastic. There are other variables in play. One of them is the fact that there is a lot of friction (in terms of setup, having to learn a new layout, etc.) for customers to switch from one layout to another.
It's true that there isn't necessarily much difference in typing speed between QWERTY and Dvorak layouts for trained typists (even the same typist). However, we might say that we're throwing the baby out with the bathwater when we claim that Dvorak is technically not superior because weak studies showed little to no marginal improvement in a few particular metrics.<p>Modern work is a marathon of text entry, and in light of this I believe that what is technically important about a keyboard in the long run is not speed or even accuracy, but ergonomics. I won't make any general claims, because it seems this has never been properly studied, but as an anecdote I can offer my own experience. I used to type on QWERTY until I had to stop due to repetitive strain. In the fifteen years since I switched to Dvorak I have never had a single day when I finished work feeling any pain in my hands, whereas this was almost an everyday occurrence with QWERTY. I can type for hours without stopping, without pain. The design of the layout makes it pleasurable to type in English. Words roll off the fingers of both hands in relaxing patterns. My hands move very little. It's nice, and I would suggest it as an option to anyone who types or programs a lot and is having trouble with their hands -- not because I have any financial or personal stake in it, but because it really helped me and I care about the health of my fellow tech workers. I have seen a lot of people taken down by their hands, and I have also seen many people try some crazy gimmicks.<p>In the same space, not even considering the placement of the letters, there is an even more absurd technical anachronism embedded in almost every single keyboard on the market, including virtual ones on our phones: the keys are positioned not in clean vertical rows, but offset as if they have mechanical arms behind them.
This is pure path dependence; there is no conceivable reason why we are stuck with it thirty years after mechanical typewriters fell out of use, except that those who learned on a mechanical typewriter couldn't even imagine designing or testing a different positioning of the keys. And it's not just the weird zig-zag pattern of the keys that is anachronistic. Why should the backspace and delete keys (which are so essential when we are typing in the flexible medium of digital text) be relegated to the far corner of the keyboard? (TypeMatrix presents an example of a modern reconceptualization of the layout. I'm not affiliated, but I do enjoy their products.)<p>To summarize, I think that this article presents a rather limited (and even ad hominem) attack on the keyboard issue, with acknowledgment but little appreciation of the degree of path dependency in tech development. "How can Dvorak be better if the research was flawed?" is not a complete answer.<p>Of course we are going to end up in suboptimal equilibria, and together we should appreciate this if we ever want to get out of them.
While I agree with the sentiment that the supposedly superior product isn't actually that superior all the time, the given examples are a bit cherry-picked. For example, the Betamax vs VHS comparison is about image quality, not recording length (and FWIW, Betamax could record 2 hours, though it is true that VHS, thanks to its cassette size, could record more than that). Similarly, the mammals vs birds comparison feels like a bit of a stretch -- like calling a tomato salad a "fruit salad" because tomato is technically a fruit.
I remember OS/2 was allegedly superior to Windows 95. But it failed because there were far fewer applications for it; IBM had to pay Netscape to finally port their browser to it.<p>I used to say OS/2 was HALF an OS, as it was missing a very important half of the equation: all those annoying applications.<p>Plus, I could never get it to stay installed on an IBM PC. I did it once, but never could do it a second time. So that third half of the equation was pretty weak too.
I recommend that people take a look at “Diffusion of Innovations” (Rogers, 1962) and all of the subsequent work since. It doesn’t directly address usability and design, which I believe are also important, but it addresses them indirectly.<p>[1] <a href="https://en.m.wikipedia.org/wiki/Diffusion_of_innovations" rel="nofollow">https://en.m.wikipedia.org/wiki/Diffusion_of_innovations</a>
Why? Because context matters.<p>That is, for example, just because BetaMax had a superior Feature X doesn't mean that Feature X matters to enough people. That feature exists in a broader expectations + wants + needs "eco-system." That momentum can be very difficult to redirect once it gets rolling.<p>Yes, a unique / superior feature _might_ be what wins you the market but it's not always that simple.
I do think that superior alternatives will catch on eventually if they stay in the game long enough... The problem is that the window of opportunity can sometimes be suppressed for very long periods of time; sometimes several generations... maybe even several centuries.<p>Great one-time success can create a protective buffer which allows inferior traits to persist through long periods of time.
> It is often said that birds have far superior lungs than mammals. So mammals are failures compared to birds…<p>I've never heard this argument once in my life, and Google doesn't come up with too much discussion on it beyond a few articles discussing the differences. This point struck me as a bit of a stretch, and the blog post itself is probably stronger without it.
Microsoft became Microsoft because they were better at marketing/business in the technology industry, not because they were better at building and shipping technology.<p>Amiga workstations and Macs in the late 80s were way ahead of DOS's UX, with its 640K RAM limit and poor CGA graphics capabilities. But DOS/Windows became the standard, and only caught up with the graphics and multimedia capabilities of the other two platforms 20 years later, once Microsoft could invest in removing their technical debt.<p>In contrast, Amiga died a slow and painful death because of mismanagement and owner squabbles, even though Amigas were used for much more than home gaming (e.g. real-time TV station visuals) even into the early noughties. They were just much better than the alternatives, and ahead of their time as a technical platform and home PC.
I agree.<p>"Superior" has to be followed by "superior to whom?" and "superior in what aspect?"<p>If "X is better than Y" but X does not do what Y does, then X is not superior, because it won't solve the problem.
UI has the big problem of power users vs casual/noob users, and everyone starts as a noob. There are good compromises, but usually the power-user tool is harder to get into and more efficient.<p>Similar for switching away from QWERTZ.
Someone once told me that the porn industry played a part in the VHS/Betamax and Blu-Ray/HD DVD battles. While Betamax and HD DVD were technologically superior, utility drove adoption.
Important topic, but the examples are strawmen in many ways.<p>In most of the important cases, you will never have heard of what could have been.<p>Also, often the problem is that better ideas weren't developed as much as they could have been. That is, we look back and compare the inferior but dominant product, which was revised and revised because of its problems and dominance, with the superior but abandoned product in its unimproved state -- which is a sort of survivorship bias.<p>The OP has a point, but it's exaggerated.
I believe that the 90-90 rule [1] implies that many superior alternatives will fail to catch on. If something is not revolutionarily better, and has the potential to compete with a product but has not been developed to the point of actual competitiveness, it will lose out.<p>Combine this with network effects, and it seems almost impossible for such superior techs not to fail.<p>Take email headers as an example. They are objectively inefficient and hard to parse. Everyone agrees on that. But we still use them despite the obvious possibility of a better format. Why? The obvious network effect.<p>The 90-90 examples are less obvious. You cannot easily tell which tech is better until you have invested the same amount of effort developing all of them. Superior products will be more likely to eventually succeed in markets where such parallel development is justified. Currently, most speakers are made by gluing a voice coil (an electromagnet) to a cone of plastic or paper and suspending it in the field of a fixed permanent magnet. Through parallel development, we now know that we can make much better speakers by driving the membrane itself -- by gluing conductive traces onto it (planar magnetic) or statically charging it (electrostatic). These lighter membranes, which don't have a heavy coil glued to them, can produce objectively better sound. Slowly, expensive speakers and headphones are adopting the new tech, but most speakers sold still use the inferior tech. The superior tech was developed in parallel with the inferior one only because speakers are such a large market that the parallel development could be justified financially.<p>Now take braille displays. A much smaller market. We think that the current tech used in braille displays is worse than a set of other techs.
There isn't enough money for the parallel development, so despite the fact that alternative, probably superior, technologies exist, we haven't developed them to the point of competitiveness.<p>[1] <a href="https://en.wikipedia.org/wiki/Ninety-ninety_rule" rel="nofollow">https://en.wikipedia.org/wiki/Ninety-ninety_rule</a>
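The "hard to parse" point about email headers above comes largely from RFC 5322 folding: a single header may continue across multiple physical lines, so a naive line-by-line split mis-reads it and a real parser must unfold first. A small illustration using Python's standard library (the message itself is made up):

```python
from email import message_from_string
from email.policy import default

# A folded header: the second physical line starts with whitespace,
# which per RFC 5322 means it continues the "Subject" header.
raw = (
    "Subject: a folded\n"
    " header value\n"
    "\n"
    "body text\n"
)

# A naive split sees two unrelated lines for one header...
naive = raw.split("\n")[:2]

# ...while the real parser unfolds the continuation back into one value.
msg = message_from_string(raw, policy=default)
print(str(msg["Subject"]))  # the unfolded, single-line value
```

That extra unfolding (plus quoting, comments, and encoded words, not shown here) is the kind of accumulated complexity the comment is pointing at -- yet network effects keep the format in place.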
Most of the time. Take a look at developer tools, for instance:<p>jenkins, nagios, graphite, jmeter, mysql, postgresql, puppet, vagrant, virtualbox, mongodb, npm, docker, openvpn.<p>What does all this software have in common? It is the most popular in its niche despite being a poor product and often having notably superior competitors.