> <i>In the 1960s, the hardware that comprised the byzantine switching systems of the first electronic phone networks would occasionally cause a misdial. Instead of revealing the mistake by disconnecting or playing an error message, engineers decided the least obtrusive way to handle these glitches was to allow the system to go ahead and patch the call through to the wrong number. Adar says most people just assumed the error was theirs, hung up, and redialed. “The illusion of an infallible phone system was preserved,” he writes in the paper.</i>

Then later:

> <i>One relatively benign class of examples occurs when an operating system fails in some way and a piece of software is programmed to cover up the glitch. The misdials of the early phone switching system fall into this category.</i>

Really, that depends on how you cover it up. The case of the telephone network is clearly <i>malicious</i>. It didn't do much harm, but it <i>was</i> harmful, and the benefit went entirely to the owner of the phone network: the network's own glitches caused mis-dials that weren't the users' fault, wasted people's time in the process… and then the users were left to believe the error was theirs! (The lie was by omission, but no less effective.)

Even in cases where the short-term benefits are clear, like placebo buttons or progress bars, I fear the erosion of trust this could cause in the long run. People already see their computers as independent thinking agents. Some of them are <i>afraid</i> of their computer, but use it anyway because they've become so dependent on it. Deceitful interfaces will likely reinforce this learned helplessness.

This kind of insidious, diffuse damage may well outweigh any observable benefit. The end doesn't justify the means when the remedy is worse than the disease.