This website is merely okay. Summary: 5/5 in science communication; educators and scientists should learn from its brilliant innovations. 2/5 in psychology: deep knowledge, but severe errors and misunderstandings.

Here's a full list of every page I saw (no selection bias!) and my evaluations.

Anchoring heuristic: this category is not accurate. Anchoring takes two forms: as a priming effect, or as insufficient adjustment from a starting point. Instead, this category lists "Confirmation bias, Echo chamber, Effort justification bias, Escalation of commitment bias, Hindsight bias, Illusion of transparency, Self-fulfilling prophecy". None of these relate to anchoring, either directly or through shared mechanisms. The omission of anchoring itself, a major effect, from its own category is glaring.

Automation bias: the effect as described in the article is not even correct. There is an automation bias, but there is also an opposite anti-automation bias, where humans unfairly disregard the opinions of machines in some contexts, such as algorithmic recipe recommendations. Their clinical decision support (CDS) example is poorly chosen and barely illustrates the subject. There is a better example from the literature, where humans accept the results of a blatantly wrong calculator over their own estimates. In addition, automation bias is frequently justified even when compared against human "rational" thinking, as summarized in Thinking, Fast and Slow, Ch. 21.

I recall this quote from Gelman: "Duncan notes that many common sayings contradict each other. For example, The early bird catches the worm, but The early worm is eaten by the bird." This page on automation bias is no better than one of two contradictory sayings.

I like the three meters: literature, impact, and replication. Their existence is a well-thought-out and marvelous insight into science communication. It's a giant improvement over resources from 10 years ago, when such meters were barely considered by psychologists, much less by communicators.

I like that they cite references to the research papers. I like that they describe how the experiments measure each effect. These are major advantages over comparable websites.

Representativeness heuristic: this category is not accurate. Base rate neglect and the conjunction fallacy fit. The rest do not.

Base rate neglect: the explanation is quite bad. Kahneman's theory is much more careful, and it requires huge contortions, just-so stories about how statistical base rates are not always treated statistically, to fit the experiments. (I believe his contortions are correct.) But this page doesn't even attempt to describe what qualifies as a "base rate".

The Trump example is a loose fit, and the test example is so vague as to be meaningless. The examples in the literature, with criminal identification and test positivity rates, are better written; see the sketch at the end of this review.

Conjunction fallacy: don't pick the famous Linda example if you then have to explain all its linguistic caveats. The explanations given are not convincing, just assertions whose truth the reader cannot judge. (And there are better explanations, like replication under single vs. joint evaluation, or replications with clarifying language, which are not given.) Nevertheless, this page is broadly correct. It is over-specific, focusing on the direct violation of the conjunction rule (also sketched below) rather than on the representativeness process that underlies the probability estimation, but perhaps the specificity is justified if one wants to hew to the literature.
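To make concrete what a base rate is and why neglecting it matters, here is a minimal sketch of the Bayesian arithmetic, using the numbers from Tversky and Kahneman's classic cab problem. The numbers and variable names are my own illustration, not the website's:

    # Bayes' rule on the classic cab problem: 85% of cabs are Green
    # and 15% are Blue (the base rates); a witness identifies cab
    # colors correctly 80% of the time. How likely is the cab to be
    # Blue, given that the witness says "Blue"?
    p_blue = 0.15               # base rate: P(Blue)
    p_green = 0.85              # base rate: P(Green)
    p_say_blue_if_blue = 0.80   # witness accuracy: P("Blue" | Blue)
    p_say_blue_if_green = 0.20  # witness error:    P("Blue" | Green)

    posterior = (p_say_blue_if_blue * p_blue) / (
        p_say_blue_if_blue * p_blue + p_say_blue_if_green * p_green
    )
    print(f"P(Blue | witness says Blue) = {posterior:.2f}")  # ~0.41
    # Subjects who neglect the base rate answer ~0.80, i.e., the
    # witness accuracy alone.

A page on base rate neglect should leave the reader able to do at least this much.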
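And for completeness, the conjunction rule whose violation the Linda page targets, with made-up illustrative numbers (again mine, not the website's):

    # The conjunction rule: for any events A and B, P(A and B) <= P(A).
    # Illustrative numbers for the Linda problem:
    p_teller = 0.05                 # P(Linda is a bank teller)
    p_feminist_given_teller = 0.40  # P(feminist | bank teller)

    p_teller_and_feminist = p_teller * p_feminist_given_teller
    assert p_teller_and_feminist <= p_teller  # holds for any valid probabilities
    print(p_teller_and_feminist, "<=", p_teller)  # 0.02 <= 0.05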
About->Our team: oh my god, the authors are psychology PhD students. I think psychology needs more academic interest from students, so that graduate programs can filter more strictly. This website does not give me hope that psychology will cast off its bad reputation anytime soon.