This reminds me of the diagram of WWII bombers with bullet holes mapped out, showing that the critical systems were wherever the returning planes had no holes: not because the hull was stronger there, but because a plane hit there never made it back to be recorded.

https://en.wikipedia.org/wiki/Survivorship_bias
There's an episode of Seinfeld about this. George realizes that he always makes the wrong decision, so he decides to do the opposite of whatever he would normally do, and it ends up working out pretty well for him.
Perhaps Charlie Munger's enthusiasm for this sort of inverse reasoning explains his (now cancelled) plan for the windowless dorm building he designed for UC-Santa Barbara -- he was creating the worst possible dorm building in order to get insights into what a good dorm building would be.
This is basically the idea behind Chaos Engineering. Think of all the ways a system can break, make those failures happen deliberately, and see whether the system survives. If it doesn't, fix it so it does next time.
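A minimal sketch of that loop in Python (the function and service names are my own hypothetical stand-ins, not any particular chaos framework): wrap a dependency, inject failures at random, and check that the caller degrades gracefully instead of crashing.

    import random

    class DependencyDown(Exception):
        pass

    def flaky(call, failure_rate=0.3):
        """Wrap a dependency call so it raises DependencyDown some fraction of the time."""
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                raise DependencyDown("injected failure")
            return call(*args, **kwargs)
        return wrapper

    def get_recommendations(user_id):
        # Stand-in for a real downstream service call.
        return ["a", "b", "c"]

    def homepage(user_id, recommend=get_recommendations):
        try:
            return {"items": recommend(user_id)}
        except DependencyDown:
            # Survive the failure: fall back to an empty list instead of erroring out.
            return {"items": []}

    if __name__ == "__main__":
        chaotic = flaky(get_recommendations, failure_rate=0.5)
        for _ in range(20):
            assert "items" in homepage(1, recommend=chaotic)
        print("homepage survived injected failures")

The point is the inversion: instead of only testing the happy path, you enumerate failure modes, force them to occur, and verify the system still does something acceptable.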
Is this not just the general approach to system design?
When I’m writing software, my first thought is to list everything that could go wrong.

Or am I just a hopelessly anxious person lol
Another "think a bit differently" trick, which sometimes helps me, is to ask myself what I would advise <i>someone else</i> to do, in a similar situation.<p>This can help get past some individual biases.
I’m not really fond of this concept.

I think the simple idea of “risk analysis” is much more intuitive and better captures the idea that is being conveyed.
I don't think the value of inverse reasoning is that it lets you think of new ideas. If you're trying to list all the ways something could go wrong, you're pretty much just listing the ways it could go right and flipping them. What I mean is, I think your list of bullet points for "how to have a great career" would be about as long as your list of bullet points for "how to have a terrible career". Both lists would usually contain the same essential information, just framed differently.

Where I think such a practice can be useful is in forcing you to confront unpleasant possibilities you would otherwise try to ignore, and thus at least briefly plan for them.
An interesting concept. If you want to succeed but cannot see the solution, spend a lot of time thinking about how you might fail, then just don't do that.
I saw something like this for suicide. Dark, and fairly graphic. All the possible ways you might want to kill yourself, complete with descriptions of how each usually goes wrong, leaving you maimed for life instead. The intent was clearly to convince people not to do it.
One of my favorite versions: when Steve Jobs dropped an early iPod prototype into water and air bubbles came out, proving there was still empty space inside and it could therefore be made smaller/tighter.

https://www.google.com/search?q=steve+jobs+ipod+water+air+bubbles

Another is when Feynman dunked a piece of O-ring material in a glass of ice water, showing how it failed at low temperature.

https://www.google.com/search?q=feynman+challenger+o+ring
See also: https://en.m.wikipedia.org/wiki/Duality_(optimization)

*In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem.*
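To make the quoted principle concrete, here is a toy linear-programming pair (my own illustration, not taken from the linked article):

    \[
    \textbf{Primal:}\quad \min_{x \ge 0} \; c^\top x \ \text{ s.t. } Ax \ge b
    \qquad
    \textbf{Dual:}\quad \max_{y \ge 0} \; b^\top y \ \text{ s.t. } A^\top y \le c
    \]
    \[
    \text{Weak duality: for any feasible } x \text{ and } y,\quad b^\top y \le c^\top x .
    \]

The "at least as large" statement in the quote is exactly this weak duality inequality: every feasible value of the minimization problem sits above every feasible value of the maximization problem.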
Same thing applies to figuring out what you want. First identify all the stuff that you don't want, particularly things you already have and are willing to get rid of. Getting rid of things really reinforces the notion that you don't want them. This also makes space and frees up time to explore things you do want, even if you're not sure what those are yet.
I feel like there is/should be some kind of programming language design equivalent of this. If you wanted to make programmers basically incapable of writing correct programs, how would you do it?

Here's where I'd share examples, but it wouldn't be funny and would instead start a flamewar.