I was a big fan of cognitive psychology and biases and tried to read as many books as possible on the subject. I even had a mind-map of finished and TODO books (<a href="https://ic.pics.livejournal.com/buybackoff/8746464/10862/10862_original.gif" rel="nofollow">https://ic.pics.livejournal.com/buybackoff/8746464/10862/108...</a> upper-right corner).<p>I think the best practical material on the subject is Charlie Munger's talks, particularly "On the Psychology of Human Misjudgement" (<a href="https://buffettmungerwisdom.files.wordpress.com/2013/01/mungerspeech_june_95.pdf" rel="nofollow">https://buffettmungerwisdom.files.wordpress.com/2013/01/mung...</a>) and "Poor Charlie's Almanack: The Wit and Wisdom of Charles T. Munger", an edited collection of many of his talks.<p>My main conclusion from reading all the books/talks is that you can only be aware that biases exist; you cannot tell which ones are at play in your brain at any given moment, and you cannot "fix" a bias through any cognitive effort. So "sleeping on" or delaying an important decision is the best practical way I have found to mitigate these ever-present, pervasive biases.
I am wondering which of these are "solid" enough to be considered? I am asking because of the "replication crisis", which also affected Kahneman et al.<p>EDIT: I am <i>genuinely</i> interested in knowing, since it would be helpful to know which of these are reliable - in order to change my behavior accordingly.<p><a href="https://www.theatlantic.com/science/archive/2018/11/psychologys-replication-crisis-real/576223/" rel="nofollow">https://www.theatlantic.com/science/archive/2018/11/psycholo...</a><p><a href="https://replicationindex.com/category/kahneman/" rel="nofollow">https://replicationindex.com/category/kahneman/</a>
How many of these will appear naturally in powerful AI systems?<p>Perhaps many! Maybe by trying to emulate a human brain we will end up recreating its flaws.<p>I am very excited about the progress of deep learning applied to symbolic, logical reasoning, like theorem proving. Theorem verification is easy and tractable; proving is not.<p>We can have heuristic algorithms come up with provably correct algorithms! That is vaguely analogous to a human writing a program and then proving it correct. Now that will be useful.
Some of which may be replicated.<p>I'd be tempted to down-vote myself for snarky trolling, except that I work in the field of psychological research, and perhaps it is my bias, but many of the cognitive biases that came out of social-psychology research do not stand up to scrutiny, too often the result of bad statistical practice... at least as of two decades ago.
This is purely my opinion.<p>All theories within Psychology and Economics are based on people being 'rational'. Anything contrary to the theory is branded 'irrational' and given a name. The name usually sounds like a 'disease/ailment'.
I once tried memorizing this list of cognitive biases but eventually came to the conclusion that they were ill-defined and, in some cases, not biases at all, but heuristics that keep me alive and functioning well.
It's an alphabetically sorted list; sure, one can read the whole list top to bottom, but it just doesn't <i>flow</i> very well.<p>If you're interested in rationality and cognitive biases, I'd highly recommend reading Eliezer Yudkowsky's "Rationality: A-Z" sequences: <a href="https://www.lesswrong.com/rationality" rel="nofollow">https://www.lesswrong.com/rationality</a>
Thinking Fast and Slow explains many cognitive biases.<p><a href="https://en.wikipedia.org/wiki/Thinking%2C_Fast_and_Slow" rel="nofollow">https://en.wikipedia.org/wiki/Thinking%2C_Fast_and_Slow</a>
I had a class at Babson about “Decisions”. Best class ever.
My favorite case was about the decision-making process at NASA that led to the Challenger disaster.
Along with the case (you can find multiple versions online, and it is an awesome read) there was this HBR article about flaws in the decision-making process.
“The Hidden Traps in Decision Making” by John S. Hammond, Ralph L. Keeney, and Howard Raiffa.
<a href="https://www.researchgate.net/publication/12948100_The_Hidden_Traps_in_Decision_Making" rel="nofollow">https://www.researchgate.net/publication/12948100_The_Hidden...</a>
I think human rational thinking is completely f*cked. We are simply not capable of thinking very logically/rationally.<p>All the more reason, I think, to meditate, be mindful, and adopt philosophies that are not always rational, but good instead.<p>Also, the truth is often very complex or very dark, so thinking will only bring incorrect, oversimplified (black/white) conclusions or negativity/resentment.
Being reminded of cognitive biases on a regular basis does wonders for staying grounded! I currently use a browser plugin for that but this poster seems like a better alternative — <a href="https://designhacks.co/products/cognitive-bias-codex-poster" rel="nofollow">https://designhacks.co/products/cognitive-bias-codex-poster</a>
According to Daniel Kahneman the research on whether biases can be overcome is "not encouraging". <a href="https://getpocket.com/explore/item/the-cognitive-biases-tricking-your-brain" rel="nofollow">https://getpocket.com/explore/item/the-cognitive-biases-tric...</a>
The list is missing "The Bias Bias in Behavioral Economics" (<a href="https://www.nowpublishers.com/article/Details/RBE-0092" rel="nofollow">https://www.nowpublishers.com/article/Details/RBE-0092</a>)
A more digestible format for this: <a href="https://busterbenson.com/piles/cognitive-biases/" rel="nofollow">https://busterbenson.com/piles/cognitive-biases/</a>
While I think it's good for pedagogical purposes to have a catalogue of many examples of where our thinking goes wrong, I worry that these lists can give off the wrong idea that our thinking is broken in so many "different" ways.<p>In some sense, many of these biases seem like specific instances of a more general phenomenon. For example, illusion of control and pareidolia both seem like they'd arise if you buy into the brain as doing predictive processing (<a href="https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/" rel="nofollow">https://slatestarcodex.com/2017/09/05/book-review-surfing-un...</a>). So it's not exactly that we have over 100 ways that our thinking goes wrong, but that the same types of mistakes occur in different ways.<p>In which case, for preventative reasons, knowing the core mechanism at play seems much more important. Similarly, I feel that lists of mental models might also be missing the point; no one can really go through a list of 100+ items to figure out which one is at play. You're going to need a smaller, more general toolkit.
Wow, there are now 194 listed Cognitive Biases. The number keeps growing.<p>It is an awful sign for a scientific community when it is working on a theory that requires 194 different exceptions and adjustments in order to make the model fit the data. It means that your underlying model probably isn't right.<p>This reminds me of when Astronomers thought the universe revolves around the Earth, rather than the Sun. The earth-centered theory made sense until we got better data, and then sometimes planets appeared to go backwards. Sometimes they appeared to swirl around a line. Sometimes there were swirls within the swirls, and sometimes swirls within those: <a href="https://invisible.college/attention/dissertation/retrogrades.png" rel="nofollow">https://invisible.college/attention/dissertation/retrogrades...</a><p>Astronomers had to account for this data with a complex set of retrograde motions and epicycles layered upon epicycles. These complexities only increased as telescopes and charting techniques improved, uncovering more distortions from the idealized orbital lens. Take, for instance, the numerous parameterized gears required for an early Galilean planetary model: <a href="https://invisible.college/attention/dissertation/galileo2.jpg" rel="nofollow">https://invisible.college/attention/dissertation/galileo2.jp...</a><p>Only when Copernicus and Kepler put the <i>sun</i> in the center of the universe could the models be simplified. Suddenly, each planet's orbit fit a perfect ellipse -- no epicycles, no retrograde motions.<p>We can do the same thing for Economic theory, by moving the center of the utility function from the <i>future</i> to the <i>present</i>. Right now, Economics models humans as optimizing future outcomes. The modeled humans are focused on the future: they allocate infinite attention to computing the optimal action for the future. But real humans have scarce attention for computing the future.
When they run out of attention, these 194 heuristics and biases display themselves in full effect.<p>We solve this dilemma by evaluating the utility function in the present, rather than the future. Instead of assuming humans have infinite attention, the utility function itself predicts <i>how</i> humans allocate their scarce attention. The new utility function evaluates the utility of attention itself.<p>And it turns out that we can empirically measure the value of this utility function, by running controlled experiments online with thousands of participants, and paying them different amounts of money to attend to different tasks. This lets us measure how much utility people ascribe to paying attention to television shows, sexy pictures, video games, advertisements, iPhone screens, or reddit posts. We can measure it in pennies per second.<p>This new model is a measurable <i>Attention Economics</i>: <a href="https://invisible.college/attention/dissertation.html" rel="nofollow">https://invisible.college/attention/dissertation.html</a>
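To make the "pennies per second" framing concrete, here is a minimal illustrative sketch (not the dissertation's actual methodology; all numbers are hypothetical): offer participants varying payments to attend to a task, find the offer at which a majority accepts, and divide by the task's duration to get an implied attention value.

```python
# Hypothetical illustration of estimating attention value in pennies/second.
# Each tuple: (payment offered in pennies, fraction of participants who
# accepted the paid task over a competing attention sink, e.g. a video game).
offers = [(10, 0.12), (30, 0.35), (60, 0.58), (120, 0.85)]
task_seconds = 60  # duration of the task each offer paid for

# Crude indifference point: the smallest offer a majority accepted.
indifference = next(p for p, accepted in offers if accepted >= 0.5)
value_per_second = indifference / task_seconds
print(f"~{value_per_second:.1f} pennies/second")  # → ~1.0 pennies/second
```

A real experiment would fit a choice model (e.g. logistic regression) to the acceptance curve rather than taking the first majority threshold, but the unit of measurement comes out the same: pennies per second of attention.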