As a counterpoint, Postel's Law as implemented in other domains has been spectacularly successful.<p>One classic example is in transistor networks: each node in a network (think interconnected logic gates, but at the analog level) accepts a wider range of voltages as "high" and "low" (i.e., 1- and 0-valued) than it is specified to output. In 5V logic, for example, gates might output 5V and 0V to within 5%, but accept anything above 1.2V as "high" and anything below that as "low". (This is sometimes called the "static discipline" and is a standard example of the "robustness principle", the other name for Postel's Law.)<p>This tolerance is <i>critical</i> in these networks, but not because transistor manufacturers don't read or fully implement the spec: it's because there is invariably <i>unavoidable noise</i> introduced into the system, and one way to handle that is for every node to "clean up" its input to the degree that it can.<p>It's one thing to rely on this kind of clean-up to make your systems work in the face of external noise. But when you start rearchitecting your systems to operate close to that boundary (that is, you're no longer trying to meet spec, because you know some other node will clean up your mess for you), you're cooked, because the unavoidable noise will now push you outside the range that your spec's liberal input regime can tolerate, and you'll get errors.<p>The problem isn't Postel's Law. It's adverse selection / moral hazard / whatever you want to call the incentive to exploit a system's tolerance for error: it improves short-term outcomes for the exploiter at long-term cost to the system's stability.
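<p>To make the trade-off concrete, here's a toy sketch (Python, with purely illustrative numbers, not taken from any datasheet) of a chain of buffers under roughly the numbers above: each stage reads its input liberally against a 1.2V threshold but drives its output conservatively near the rails, and a hypothetical "slack" parameter models a node that leans on downstream clean-up instead of meeting its own output spec.<p><pre><code>import random

# Toy model of a buffer chain. Values are illustrative only,
# not taken from any datasheet or logic-family spec.
V_RAIL = 5.0
V_OUT_LOW = 0.05 * V_RAIL    # conservative output spec: within 5% of 0V...
V_OUT_HIGH = 0.95 * V_RAIL   # ...or within 5% of 5V
V_IN_THRESHOLD = 1.2         # liberal input spec: one decision threshold

def read(v_in):
    """Liberal in what it accepts: anything above the threshold is a 1."""
    return 1 if v_in > V_IN_THRESHOLD else 0

def drive(bit, slack=0.0):
    """Conservative in what it emits, unless 'slack' eats into the margin,
    modeling a node that relies on downstream clean-up instead of meeting spec."""
    return (V_OUT_HIGH - slack) if bit else (V_OUT_LOW + slack)

def run_chain(stages, noise, slack, bit=0):
    """Send a bit through a chain of buffers with uniform noise on each wire."""
    v = drive(bit, slack)
    for _ in range(stages):
        v += random.uniform(-noise, noise)  # unavoidable noise on the wire
        v = drive(read(v), slack)           # each node cleans up its input
    return read(v)

random.seed(0)
trials = 10_000
in_spec = sum(run_chain(8, noise=0.8, slack=0.0) == 0 for _ in range(trials))
sloppy = sum(run_chain(8, noise=0.8, slack=0.65) == 0 for _ in range(trials))
print(f"meets output spec:        {in_spec / trials:.1%} of bits arrive intact")
print(f"exploits input tolerance: {sloppy / trials:.1%} of bits arrive intact")</code></pre><p>With in-spec outputs, ±0.8V of per-wire noise never crosses the 1.2V threshold, so the bit survives every stage; with outputs pushed up toward that threshold, the same noise eventually flips it, which is the "rearchitect to the boundary" failure mode described above.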