Would there have been a way to avoid this mess?<p><pre><code> - a browser must reject any invalid HTML in order to force developers to fix it
- a browser must try hard to make sense of messed up HTML, otherwise users will switch to a competing browser that renders the mess for them
</code></pre>
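As a rough illustration of the second option: the sketch below uses Python's standard-library html.parser, which, much like a browser, never rejects malformed markup and simply reports whatever structure it can recover (the broken snippet is invented for the example).<p><pre><code>from html.parser import HTMLParser

class TagLogger(HTMLParser):
    # Print every event the parser manages to recover from the input.
    def handle_starttag(self, tag, attrs):
        print("start:", tag)
    def handle_endtag(self, tag):
        print("end:  ", tag)
    def handle_data(self, data):
        if data.strip():
            print("data: ", data.strip())

# Unclosed <b>, a stray </i> and an unquoted attribute value -- no error is raised.
broken = '<p>Hello <b>world</p> bye</i> <a href=example.com>link'
TagLogger().feed(broken)
</code></pre>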
Theoretically, all browser vendors could have coordinated so that everyone rejects invalid HTML, but there is probably no good way to prevent defectors. Why did this not happen for other technologies? My first thought was that there is no compilation step, which would allow forcing the developer to fix things without giving the end user any power through their choice of browser. But that seems not quite right: why do Bash or Python or your C++ compiler not make a best guess at what your code is supposed to do? Because there is or was only one dominant implementation and therefore no competition? Because document markup is much more robust against small errors and probably remains readable, while code likely just crashes? That last point is probably one of the most important, I think. What role did browser-specific features, evolving standards and incomplete implementations play?<p>What is the end result? Nothing for the end user; they do not care whether the browser has to deal with nice HTML or a mess. Developers writing HTML get to be sloppier, at the price of a lot of additional complexity and pain wherever code has to deal with HTML. This might actually have some negative impact on end users because of bugs or security issues stemming from the additional complexity. Maybe it made HTML somewhat more accessible to casual users, since they could get away with some mistakes. But was it worth it? Could better tooling not have achieved the same, with good error messages helping to fix errors?
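<p>For contrast, here is a minimal sketch of what strict, fail-fast parsing with a useful error message could look like, using Python's standard-library XML parser as a stand-in for a hypothetical strict HTML mode (the snippet is again made up for illustration).<p><pre><code>import xml.etree.ElementTree as ET

broken = "<p>Hello <b>world</p>"  # the <b> element is never closed

try:
    ET.fromstring(broken)
except ET.ParseError as err:
    # ParseError carries a (line, column) position pointing at the problem.
    line, col = err.position
    print(f"Rejected: {err} (line {line}, column {col})")
</code></pre>A validator or editor plugin emitting this kind of message would push the fix back onto the author instead of onto every downstream consumer of the HTML.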