The same dynamic shows up in a number of other systems. For example:<p>The Peter Principle. People who are performing well in their current role get promoted out of that role, until they reach their level of incompetence. At that point, they get stuck. Eventually the whole organization consists of nothing but incompetent people.<p>Gerrymandering. A political incumbent who redraws his district to include more supporters will last longer than one who doesn't. Eventually, <i>all</i> districts are gerrymandered, and all incumbents are virtually unassailable.<p>Vendor lock-in. A company that promotes consumer choice is easy to switch away from; one that promotes lock-in, by definition, is hard to switch away from. Eventually everyone will be buying from vendors they're locked into. (Unless the former company's products are <i>much</i> better - this was the strategy Google pursued until ~2011. Note, though, that they achieved this through some measure of <i>employee</i> lock-in.)<p>Basically the only requirement is that "good" solutions are more liquid than "bad" ones. Many, many systems exhibit this property.<p>You could look at most of modern society as a way to generate feedback loops <i>on top</i> of this dynamic to mitigate it. For example, an organization full of incompetent people is likely to go out of business and be replaced by a new one, often one founded by the very people who were driven out of the original. (See: Disney => Pixar, Apple => NeXT, Shockley => Fairchild => Intel, Netscape => Firefox => Chrome.) Similarly, a company full of bad, hard-to-replace code either embarks on a complete rewrite or is vulnerable to a startup without that baggage.
I look at code as biology: it competes in its environment, and as it becomes more complex it can fight off all competitors.<p>Code grows toward irreplaceability. That's why we are surrounded by code that is hard or impossible to replace.<p>We shouldn't be surprised if code feels like kudzu after it has been around for 10+ years.
Exactly this happened with a lot of Smalltalk projects. The ones structured such that the Smalltalk Refactoring Browser parser could be used to accelerate porting -- usually the better-architected and better-factored codebases -- could leave Smalltalk for other programming environments. Even when syntax-driven code translation wasn't used, the better-factored projects were still easier to port.<p>(And to head off the usual criticisms of automated code translation: it tends to work well when the project has well-adhered-to coding standards and patterns, so that idiomatic code in language A can be matched and translated to idiomatic code in language B. In other words, if there is consistent use of project-level idioms, it's easy to do good idiomatic translation at the language level. The other necessary ingredient is a powerful parser+meta-language which can fully express the capabilities of the source and target languages.)
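To make the idea concrete, here's a minimal sketch in Python rather than Smalltalk -- nothing like the Refactoring Browser's actual machinery, and the class, idiom, and Java-ish output are all invented for illustration. The point is only that if a codebase consistently uses one accessor idiom, a parser can match that idiom and emit the equivalent idiom in another language.

  # Toy illustration of idiom-driven translation: match one project-level
  # idiom in the source AST and emit the equivalent idiom in another language.
  import ast

  # Hypothetical source written in a consistent project idiom:
  # plain attribute-backed getters.
  SOURCE = """
  class Point:
      def get_x(self):
          return self._x
  """

  def translate_getter(func):
      """Match the idiom `def get_foo(self): return self._foo` and emit an
      equivalent (hypothetical) Java-style accessor as text."""
      if not func.name.startswith("get_") or len(func.body) != 1:
          return None
      stmt = func.body[0]
      if not (isinstance(stmt, ast.Return)
              and isinstance(stmt.value, ast.Attribute)
              and isinstance(stmt.value.value, ast.Name)
              and stmt.value.value.id == "self"):
          return None
      field = stmt.value.attr.lstrip("_")
      return f"public Object get{field.capitalize()}() {{ return this.{field}; }}"

  tree = ast.parse(SOURCE)
  for cls in (n for n in tree.body if isinstance(n, ast.ClassDef)):
      for node in cls.body:
          if isinstance(node, ast.FunctionDef):
              emitted = translate_getter(node)
              if emitted:
                  print(f"// from {cls.name}.{node.name}")
                  print(emitted)

The matching is against project-level idioms, not arbitrary code; the less consistent the codebase, the fewer patterns like translate_getter ever fire, which is exactly why the well-factored projects were the ones that could leave.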
And this is why I hate systemd: its primary design criterion, it seems, is to be as difficult to replace as practically possible -- in stark contrast with sysvinit, OpenRC, etc. Once it's suitably entrenched it can simply be declared "the standard" and then Linux systems without systemd will fall out of compliance and hence out of support by the greater ecosystem.
I disagree with Sustrik's assumption that software drifts toward becoming a collection of non-reusable components. His observation is an interesting theory, but I think it breaks down: it could only really be a "law" if reusable (does he mean replaceable?) components were always switched out for irreplaceable ones.<p>Most projects I've worked on start out as hairy messes, and if they are cursed with success, new requirements will eventually justify the cost of replacing the irreplaceable. Well-designed components aren't quickly switched out for poorly designed ones, because developers don't want to let that happen. It's not safe to assume that poorly written components are better suited to survival... to the contrary, they are the most likely targets for removal in the first place.
You could see this in action with GCC, if I remember correctly: they purposely made it a monolith so that it was harder for proprietary plugins to be added, and therefore only GPL'd and LGPL'd components would be worked on.
The Gresham's law analogy is a crock. Gresham's law happens because the government compels merchants to accept the bad money as being equivalent to the good, but it can't effectively compel customers to spend the good money.
This makes sense at the level of programmers as well. A programmer who does their job well is easily replaceable, because the code they write is easy to maintain -- and so they will be replaced by someone who is not as good a programmer and who writes less maintainable code.
I think this is not so much a "law" as a failure mode. It's up to the programmers, objectives, time constraints, and economic considerations whether a concerted effort is made to increase code quality or to go for the "quick fix" at the expense of long-term maintainability.
I'm currently tending a 16-year-old Java codebase that has seen very little in the way of maintenance in the years since it was written.<p>"Software over time tends towards monsterism" is apt, in my mind.
With the amount of code developed in open git repositories, surely it is possible now to quantify these observations? Bring hard facts to the discussion.
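One rough way to start, as a sketch: treat file lifetime in a repository as a (crude) proxy for replaceability and pull it straight out of git history. The repo path below is a placeholder, and "age of files still alive at HEAD" is just one of many possible metrics.

  # Rough sketch: how long do files in one repo survive? File age is only a
  # crude proxy for "replaceability"; REPO is a placeholder path.
  import datetime
  import subprocess

  REPO = "/path/to/some/repo"  # hypothetical local clone

  def git(*args):
      return subprocess.run(["git", "-C", REPO, *args],
                            capture_output=True, text=True, check=True).stdout

  # Walk the history oldest-first with file names. Each commit is emitted as a
  # NUL-prefixed date line (%x00%ad) followed by the paths it touched.
  log = git("log", "--reverse", "--name-only",
            "--format=%x00%ad", "--date=format:%Y-%m-%d")

  first_seen = {}
  current_date = None
  for line in log.splitlines():
      if line.startswith("\x00"):
          current_date = line[1:]
      elif line.strip():
          first_seen.setdefault(line.strip(), current_date)

  # Age (in days) of every path still present at HEAD.
  alive = set(git("ls-files").splitlines())
  today = datetime.date.today()
  ages = sorted((today - datetime.date.fromisoformat(first_seen[p])).days
                for p in alive if p in first_seen)

  if ages:
      print(f"files at HEAD: {len(ages)}")
      print(f"median age in days: {ages[len(ages) // 2]}")
      print(f"oldest file age in days: {ages[-1]}")

Churn-based variants (lines rewritten per file per year, or survival curves of functions rather than files) would get closer to "replaceability", but even file lifetimes across a few thousand public repos would be a start.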