I really like this framing. It reminds me of how early steam engines needed governors, a simple mechanical feedback loop, to keep them from spinning out of control. The engine itself wasn't "safe" or "stable" by design; stability was imposed externally through a control mechanism.

In a way, LLMs feel similar. Their internal workings may be probabilistic and unpredictable, but that doesn't mean we can't build external feedback loops (tests, validation layers, human oversight) to steer them toward reliable, useful outputs. The unpredictability isn't a flaw; it's a raw, unmanaged state that invites control systems around it.

Maybe what unsettles people is that the "chaos" now lives at the language layer, where it feels more personal and less abstract than when it's buried in hardware or OS internals. But we've always tamed unpredictable systems with good design; LLMs are just the next place to apply that thinking.
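To make the feedback-loop idea concrete, here's a minimal sketch of such a "governor" in Python. Everything in it is hypothetical: call_llm is a stand-in for whatever model API you use, and the JSON check is just one example of a validation layer. The point is only that acceptance, rejection, and retry live outside the model.

    import json

    MAX_ATTEMPTS = 3

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for whatever model API you use."""
        raise NotImplementedError

    def validate(raw: str) -> dict:
        """Reject anything that isn't JSON with the fields we expect."""
        data = json.loads(raw)  # raises ValueError on malformed output
        if not isinstance(data.get("summary"), str):
            raise ValueError("missing or malformed 'summary' field")
        return data

    def governed_call(prompt: str) -> dict:
        """Retry until the output passes validation, rejecting
        out-of-band states instead of trusting the engine."""
        last_error = None
        for _ in range(MAX_ATTEMPTS):
            try:
                return validate(call_llm(prompt))
            except ValueError as err:
                # The feedback part: fold the rejection reason back
                # into the next attempt rather than failing silently.
                last_error = err
                prompt += f"\n(Previous output rejected: {err}. Return valid JSON.)"
        raise RuntimeError(f"no valid output after {MAX_ATTEMPTS} attempts: {last_error}")

Feeding the rejection reason back into the next prompt is what makes it a feedback loop rather than just a filter, much like a governor that corrects the engine instead of merely shutting it off.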