Interestingly, I find my favourite nitpick: Occam's razor. The article quotes it as "The simplest solution is usually the correct one". This is a common misinterpretation, and it's interesting that the quote links to the Wikipedia page that has a better statement: "Among competing hypotheses, the one with the fewest assumptions should be selected."<p>The key problem is equating simplicity with correctness. This is usually disastrous. Once you feel that something is "correct" you stop looking for ways to falsify it. That's the exact opposite of what Occam's razor is used for.<p>Instead, if you have two competing hypotheses (hypotheses that the evidence supports equally well), you use the one with fewer assumptions. Partly because the one with fewer assumptions will be easier to work with and will lead to models that are easier to understand. But mostly because fewer assumptions make it easier to falsify.<p>Abusing this principle outside of the scientific method leads to all sorts of incredibly bad logic.
Two adjustments I would make.<p>Remove Metcalfe's law. It is a massive overestimate. See <a href="http://www.dtc.umn.edu/~odlyzko/doc/metcalfe.pdf" rel="nofollow">http://www.dtc.umn.edu/~odlyzko/doc/metcalfe.pdf</a> for the better n log(n) rule for valuing a network.<p>And I find Le Châtelier's principle generally applicable, not just to chemistry. It says that if you observe a system at equilibrium and try to induce a change, forces will arise that push it back towards the original equilibrium. It is one thing to recognize this at work in a chemical reaction. It is quite another to be blindsided by it in an organization.<p>See <a href="http://bentilly.blogspot.com/2010/05/le-chateliers-principle-not-just-for.html" rel="nofollow">http://bentilly.blogspot.com/2010/05/le-chateliers-principle...</a> for my explanation of why this holds in general outside of chemistry.
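To see how far apart the two valuation rules get, here is a minimal sketch in Python (the network sizes are arbitrary examples):<p><pre><code>import math

def metcalfe(n):
    # Metcalfe's law: value grows with the number of possible pairs, ~n^2
    return n * (n - 1) / 2

def odlyzko(n):
    # Odlyzko's correction: most connections are low-value, ~n log n
    return n * math.log(n)

# Arbitrary network sizes, just to show the divergence
for n in (10**3, 10**6, 10**9):
    print(f"n={n:>10}: metcalfe/odlyzko ratio = {metcalfe(n) / odlyzko(n):,.0f}")
</code></pre>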
Ugh, maybe I'm the only one, but I don't find this list useful. Not because it isn't interesting, but because of the implication that it will actually make you smarter. The problem today isn't information, it's knowledge. Even if you could correctly and fully understand all these models, something that could take years, you still most likely wouldn't be able to apply them, especially when they are in conflict with each other.<p>I think it's a much better idea to study things like critical thinking, practical reasoning and operational leadership. Back in the day, hacker values stated that you could ask for directions, but not for the answer, because the process itself was as important as the answer. Not just for amusement, but because there might not be a right answer, and the next time you're confronted with a similar problem you now have some experience of making those decisions.<p>A great deal of "stupidity" in technology these days seems to stem from schools that promote checkbox answers to complex problems, and from the popularity of these "laws", which make people so sure of themselves that it prevents them from reasoning properly.
This is super useful. I have a similar list, but it also includes techniques and ideas (a sketch of the first one follows the list):<p><pre><code> * Dimensionality Reducing Transforms
* Hysteresis, Feedback
* Transform, Op, Transform
* Orthogonalization for things that are actually dependent
* Ratios, remove units, make things dimensionless
</code></pre>
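For instance, a minimal sketch of the first item, a dimensionality-reducing transform (plain PCA via numpy's SVD; the data is made up):<p><pre><code>import numpy as np

rng = np.random.default_rng(0)
# Made-up 3-D data that actually lives near a 2-D plane
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 3))
X += 0.01 * rng.normal(size=X.shape)   # small noise

Xc = X - X.mean(axis=0)                # center
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T                     # project onto the top 2 components

explained = (S[:2]**2).sum() / (S**2).sum()
print(f"2 components keep {explained:.1%} of the variance")
</code></pre>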
A big one, that helps me immensely: when I need to do a big/risky/complex task, I imagine myself doing it with sped-up time. It instantly creates an outline and a list of the tools I will need.
Good list! A few suggested tweaks:<p>Veblen goods clearly exist, but the evidence for the existence of Giffen goods is much more suspect. (Did the poor really eat more bread because the price of bread rose, or because there was an across-the-board increase in the price of all kinds of food?)<p>The Precautionary Principle is not just dangerous or harmful, but guaranteed suicide; as things stand right now, we are all under a death sentence. It needs to be replaced by the Proactionary Principle, which recognizes that we need to keep making progress and putting on the brakes is something that needs to be justified by evidence.<p>Any list that has sections for both business and programming needs some entry for the very common fallacy that you can get more done by working more hours; in reality, you get less done in a sixty-hour week than a forty-hour one. (Maybe more in the first such week, but the balance goes negative after that.)<p>The distinction between fixed and growth mindset is well and good as far as it goes, but when we encourage the latter, we need to beware of the fallacious version that assumes we can conjure a market into existence by our own efforts. You can't become a movie star or an astronaut no matter how hard you try, not because you lack innate talent, but because the market for those jobs is much smaller than the number of people who want to do them.
A technique I often use to test a theory is to change the inputs to be the maximum and minimum possible values and see if the model still holds true. I've found it to be incredibly useful in a few specific situations.
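A toy sketch of what I mean (the "model" and its invariants here are hypothetical):<p><pre><code>def discounted_price(price, discount):
    # Hypothetical model under test: discount is a fraction in [0, 1]
    return price * (1 - discount)

# Push the inputs to their extremes and check the model's invariants
for discount in (0.0, 1.0):            # minimum and maximum possible values
    p = discounted_price(100.0, discount)
    assert 0.0 <= p <= 100.0, f"model breaks at discount={discount}"
print("invariants hold at the extremes")
</code></pre>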
I think pg also wrote an essay about a mental model that I find interesting: when in doubt, it's probably not about you.<p>There are many events that we usually think are related to us but actually aren't; your boss or customer being angry is in most cases not about you but about something else.<p>I have looked through a lot of pg's essays but didn't find it. He probably removed it; or, more likely, I just can't find it (/example).<p>If someone else finds it, please link.
I'm surprised he rates cost-benefit analyses as a 2 ("occasionally" used) rather than a 1 ("frequently" used). Making good decisions almost always requires taking a hard look at both the costs and the benefits. It cannot be overstated how often bad decisions are made because the parties involved simply neglected to factor in the costs (including opportunity costs).<p>I personally use cost-benefit analyses for every non-trivial decision in my life.
Some commenters here are saying, "I already know this stuff." Indeed. I'd be curious whether people could put out a list of "advanced" mental models. For example, Bayes' theorem is more advanced than Occam's razor.<p>What's clearly more advanced than Bayes' theorem, and as useful? E. T. Jaynes's flavor of probability theory? I'd posit the more advanced version of active listening as "being able to perform a bunch of kinds of therapy: Freudian, Rogerian, family and systems, etc." Of course I don't mean you go get a license for these things. I'm positing them as difficult, generally applicable life skills. I'm not claiming these are good examples; I think HN can come up with better ones.
I'm just getting exposed to this line of thinking and find it fascinating. Another resource I found recently was <a href="https://www.farnamstreetblog.com/mental-models/" rel="nofollow">https://www.farnamstreetblog.com/mental-models/</a><p><i>Disclaimer: I'm not sure if it's derivative blogspam or legitimately insightful / original</i>
A nice metacognitive cheat sheet.<p>Missing a couple interrelated mental models I find very important:<p>- emergence: a process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties<p>- decentralized system: a system in which lower level components operate on local information to accomplish global goals<p>- spontaneous order: the spontaneous emergence of order out of seeming chaos. The evolution of life on Earth, language, crystal structure, the Internet and a free market economy have all been proposed as examples of systems which evolved through spontaneous order.
Could someone give me a real example of somebody using mental models in a real world application? I just find the idea of learning and studying mental models to be distracting and confusing. Pardon my ignorance.
I still think Social Psychology was one of the most useful classes I ever took in college. Sure, some of it is probably dated by now, but the cognitive bias theories really helped me further in life.<p>I remember telling some classmates to take the class, and they assumed it was for an easy A, not for how useful the class would be (I went to GaTech a long time ago, and the social sciences were just not respected like the engineering disciplines at the time).
To the development section, I would add the concept of computational context/state, caching, and queue/event loop.<p>This HN comment summarizes it pretty nicely "everything in an OS is either a cache or a queue" <a href="https://news.ycombinator.com/item?id=11655472" rel="nofollow">https://news.ycombinator.com/item?id=11655472</a><p>Also Overton window
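For the caching half, a minimal sketch (a tiny LRU cache in Python; the capacity is an arbitrary choice):<p><pre><code>from collections import OrderedDict

class LRUCache:
    """Tiny least-recently-used cache: evicts the oldest entry when full."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used

cache = LRUCache()
for k in "abcd":
    cache.put(k, k.upper())
print(cache.get("a"))  # None: "a" was evicted when "d" arrived
</code></pre>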
I have a similar list of useful concepts. My goal so far this year was to expose myself to those concepts as often as possible. I made an app for my phone that displays the concept of the day on my home screen (right now it's the rhetorical concept of periodic sentences). I also made images for each of the concepts that I use as my chromecast backdrop. I've seen each of them dozens of times by now, mostly unconsciously.<p>So far, mixed results. I would like to say that I think of "Bayes Theorem" at the perfect time because I wrote it on a list, but that never happens. I guess I've benefitted from thinking about these concepts more, but that's almost impossible to measure. A list of 100 useful mental models has limited value if you can't hold all of them in memory at once and retrieve them at the right time. I'm still trying to come up with a solution for this. Unfortunately I think this might be a fundamental limitation of human learning.
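The rotation logic itself is trivial; roughly this (the concept list is a placeholder):<p><pre><code>import datetime

CONCEPTS = ["Bayes' theorem", "Hick's law", "periodic sentences"]  # placeholders

def concept_of_the_day(date=None):
    # Deterministic rotation: same concept all day, advances at midnight
    date = date or datetime.date.today()
    return CONCEPTS[date.toordinal() % len(CONCEPTS)]

print(concept_of_the_day())
</code></pre>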
> <i>What am I missing?</i><p>In planning a strategy, I've found it helpful to consider <i>Win Conditions</i>. It forces me to think backwards from the goal, construct a dependency tree, and consider resource allocation. I first heard about it from videogames, but I've also seen it in math, engineering, logistics, recipes, etc. I also pattern-match it to the insight that solved the <i>Problem of Points</i> [0], which motivated probability theory. If it were on the curated list, I'd expect to find it under "models", next to cost-benefit analysis.<p>[0] <a href="https://en.wikipedia.org/wiki/Problem_of_points#Pascal_and_Fermat" rel="nofollow">https://en.wikipedia.org/wiki/Problem_of_points#Pascal_and_F...</a>
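A toy sketch of the think-backwards-from-the-goal step (the tasks and dependencies are invented for illustration):<p><pre><code># Hypothetical goal with prerequisites; values are what each step depends on
deps = {
    "ship": ["test", "docs"],
    "test": ["build"],
    "docs": ["build"],
    "build": [],
}

def plan(goal, deps, done=None):
    """Walk backwards from the goal; return steps in execution order."""
    done = done if done is not None else []
    for d in deps[goal]:
        plan(d, deps, done)
    if goal not in done:
        done.append(goal)
    return done

print(plan("ship", deps))  # ['build', 'test', 'docs', 'ship']
</code></pre>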
Ongoing series on mental models at <a href="http://www.safalniveshak.com/category/mental-models/" rel="nofollow">http://www.safalniveshak.com/category/mental-models/</a>
His definition of a "strawman" is incomplete. It's not simply misrepresenting someone's argument; it's misrepresenting it specifically by falsely analogizing it to something similar that is easier to attack. The example he links to is a rather exaggerated strawman. I think most people would favor the strawman explanation in Wikipedia [1].<p>[1] <a href="https://en.wikipedia.org/wiki/Straw_man" rel="nofollow">https://en.wikipedia.org/wiki/Straw_man</a>
The wrong assumption about Occam's razor is probably the cause of so many people re-inventing the wheel.<p>"I don't need this big framework, I can do with much less!"
This book is a very handy pocket reference that overlaps with many of the ideas mentioned here:<p><a href="https://www.amazon.com/Decision-Book-Models-Strategic-Thinking/dp/0393079619" rel="nofollow">https://www.amazon.com/Decision-Book-Models-Strategic-Thinki...</a>
This is the core of the book Peak: Secrets from the New Science of Expertise. It is a book about how to create a mental representation of what successful mental representations look like.<p>The most successful people, the peak performers, are those who have the best mental representations.
I would add to the list 'revealed preference'<p>'... an economic theory of consumption behavior which asserts that the best way to measure consumer preferences is to observe their purchasing behavior. Revealed preference theory works on the assumption that consumers have considered a set of alternatives before making a purchasing decision. Thus, given that a consumer chooses one option out of the set, this option must be the preferred option'
<a href="http://www.investopedia.com/terms/r/revealed-preference.asp" rel="nofollow">http://www.investopedia.com/terms/r/revealed-preference.asp</a><p>In other words "observe their actions, not their words"
Are these "mental models" or just a bunch of clichés / pithy aphorisms? To me, a mental model would be something more like "visualizing possible state transitions as a directed graph" or something like that.
Nice. They got Hick's law... that's one of my favorites, not so much in development, but in sports. I train Brazilian jiu-jitsu, and I find substantial improvement in my reaction time by having only 2-3 well-worn options at my disposal (even 3 starts to feel crowded) in any given position, rather than a multitude of counters/attacks. When someone is trying to strangle you, go-left-or-right is often a better choice than let's-check-the-mental-database-for-the-ultimate-move.
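For the curious, a back-of-the-envelope sketch of Hick's law, T = b * log2(n + 1) (the coefficient b here is an arbitrary made-up constant; real values come from fitting experiments):<p><pre><code>import math

def hick_reaction_time(n_choices, b=0.2):
    # Hick's law: decision time grows logarithmically with the number of options
    return b * math.log2(n_choices + 1)

for n in (2, 3, 20):
    print(f"{n:>2} options -> relative reaction time {hick_reaction_time(n):.2f}")
</code></pre>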
The mental model from economics that is widely misinterpreted is comparative advantage. Most think it means you (or a country, etc.) should specialize in whatever you are best at, and then free trade will work to your advantage. But it actually means that even if you are worse at producing products A and B than another country, if your ratio of A to B is better than the other country's, it is good for you to produce A and trade it to the other country for B.
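A worked toy example (all numbers made up) makes the ratio point concrete:<p><pre><code># Made-up output per unit of labor
output = {
    "us":   {"A": 10, "B": 5},    # worse at both goods in absolute terms
    "them": {"A": 20, "B": 30},
}

for who, o in output.items():
    # Opportunity cost of one unit of A, measured in units of B forgone
    print(f"{who}: 1 A costs {o['B'] / o['A']:.2f} B")

# us: 1 A costs 0.50 B; them: 1 A costs 1.50 B.
# Even though "us" is worse at producing both goods, A is cheaper for us
# in forgone B, so we should produce A and trade it for their B.
</code></pre>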
A couple more:<p>Evolution<p>> Frequency-dependent selection: fitness of a phenotype depends on its frequency relative to other phenotypes.<p>> Evolutionarily stable strategy (ESS): a strategy which, if adopted by a population in a given environment, cannot be invaded by any alternative strategy that is initially rare. It is relevant in game theory, behavioural ecology, and evolutionary psychology. Related to the Nash equilibrium and the Prisoner's Dilemma (a toy sketch follows below).<p>Economics<p>> Debasement (gold coins): lowering the intrinsic value by diluting it with an inferior metal.
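The toy sketch: a hawk-dove game where fitness depends on how common each strategy is, settling at the mixed ESS (the payoff parameters V and C are arbitrary textbook choices):<p><pre><code>V, C = 2.0, 4.0        # value of the resource, cost of fighting (toy numbers)
BASE = 2.0             # baseline fitness, keeps all fitnesses positive

p = 0.9                # start with 90% hawks
for _ in range(200):
    # Expected payoffs depend on the current frequency of hawks, p
    f_hawk = BASE + p * (V - C) / 2 + (1 - p) * V
    f_dove = BASE + (1 - p) * V / 2
    mean = p * f_hawk + (1 - p) * f_dove
    p = p * f_hawk / mean          # replicator-style update

print(f"hawk fraction converges to {p:.2f} (ESS predicts V/C = {V/C:.2f})")
</code></pre>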
Quite a few of these "mental models" are just definitions of terminology, like "botnet". Come to think of it, the complete list is just definitions.
I would say "Divide and Conquer" should be a 0... it is that useful and it can be applied to many many different categories.<p>So many things seem intractable and formidable in complexity yet once these things are broken down into pieces things become clear. The Asana CEO once talked about this. Breaking things out provides clarity and once you have clarity productivity is massively increased.
If you enjoy these sort of summaries, I encourage you to check out the book "Seeking Wisdom" by Peter Bevelin <a href="https://www.amazon.com/Seeking-Wisdom-Darwin-Munger-3rd/dp/1578644283" rel="nofollow">https://www.amazon.com/Seeking-Wisdom-Darwin-Munger-3rd/dp/1...</a>
There is also a book on the Apple iBooks store that covers most of these mental models: <a href="https://itunes.apple.com/us/book/think-mental-models/id612365806?mt=11" rel="nofollow">https://itunes.apple.com/us/book/think-mental-models/id61236...</a>
I thoroughly enjoyed the book Inside the Box, which presents four mental models for creative problem solving. The core idea, that creating rules can help creativity, is a pattern most technical people (including myself) feel averse to, but it can actually be beneficial when studied with an open mind.
I recurrently use: <i>Everything is a</i>... [1]<p>Even when this model doesn't explain 100% of occurrences, it is a great starting point of view for understanding the main pattern of a complex system.<p>[1] - <a href="http://c2.com/cgi/wiki?EverythingIsa" rel="nofollow">http://c2.com/cgi/wiki?EverythingIsa</a>
Perfect, but how do you use these models?<p>Are you supposed to know all hundreds of them by heart and then, in the middle of a conversation, go: "Ah, but X principle says Y, therefore we will go with Z option"? Is that it?
Am I missing something?<p>I mean, I'd love to use this but I don't have enough brain cells for all of those :)
I'd add Amdahl's Law [1], which is about the relationship between adding resources for executing a task, and the speed-up that delivers.<p>[1] <a href="https://en.wikipedia.org/wiki/Amdahl%27s_law" rel="nofollow">https://en.wikipedia.org/wiki/Amdahl%27s_law</a>
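A quick sketch of the formula, speedup = 1 / ((1 - p) + p/n), where p is the parallelizable fraction (the 95% here is an arbitrary example):<p><pre><code>def amdahl_speedup(p, n):
    # p: parallelizable fraction of the task, n: number of processors
    return 1 / ((1 - p) + p / n)

# Even with 95% of the work parallelizable, speedup is capped at 20x
for n in (2, 16, 1024):
    print(f"{n:>5} processors -> {amdahl_speedup(0.95, n):.1f}x speedup")
</code></pre>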
It's an interesting list. Though I'm a bit baffled at why he has Power-law as a "1" (comes up frequently) and Heavy-tailed distribution as a "3" (rarely comes up). A power law <i>is</i> a heavy-tailed distribution!
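A quick numeric illustration of the point (a Pareto tail P(X > x) = x^-alpha vs an exponential tail e^-x; the parameters are arbitrary):<p><pre><code>import math

alpha = 2.0   # arbitrary Pareto exponent
for x in (2, 10, 50):
    pareto_tail = x ** -alpha     # P(X > x) for a Pareto(1, alpha)
    exp_tail = math.exp(-x)       # P(X > x) for an Exponential(1)
    print(f"x={x:>2}: pareto {pareto_tail:.2e}  vs exponential {exp_tail:.2e}")
# The power-law tail dwarfs the exponential tail as x grows: heavy-tailed.
</code></pre>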
Very underwhelming, I'm actually quite surprised that most people seem to find this useful and interesting. I mean, normal distribution, Moore's law, minimum viable product, paradox of choice... that's pretty basic stuff.
Along with the reference to Arrow's Impossibility Theorem, I'd want a reference to the fact that voting can be done in ways other than ranking, e.g. approval or score voting.<p>Overall, a superb list.
Nice list! I really miss this one:<p><a href="https://en.wikipedia.org/wiki/Reductio_ad_absurdum" rel="nofollow">https://en.wikipedia.org/wiki/Reductio_ad_absurdum</a>
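For concreteness, the textbook instance: the classic proof that the square root of 2 is irrational.<p><pre><code>\textbf{Claim.} $\sqrt{2}$ is irrational.

\textbf{Proof (reductio).} Suppose not: $\sqrt{2} = p/q$ with $p, q$
integers sharing no common factor. Then $p^2 = 2q^2$, so $p^2$ is even,
so $p$ is even; write $p = 2k$. Then $4k^2 = 2q^2$, i.e. $q^2 = 2k^2$,
so $q$ is also even. Both $p$ and $q$ are even, contradicting the
no-common-factor assumption. Hence no such $p/q$ exists. $\qed$
</code></pre>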
Nothing groundbreaking here - I imagine most readers here already use most of the author's models - but this is a nice comprehensive list, which I have not seen before.
Interestingly, that's about 75% of my 2-year MBA.<p>Sure, it's far different doing daily training to get those concepts ingrained in your mind so you don't have to actively think about them, but it's nice to see them listed like this.<p>Here are a couple more:<p>- Overconfidence bias: we usually think we're better than average at something we know how to do (driving) and worse than average at something we don't (juggling), even if almost nobody knows juggling and everyone knows how to drive<p>- No alpha (aka you can't beat the market): you can only consistently beat the market if you're far better at financial analysis than a lot of people who do it every day, all day. So don't bother trying.<p>- Value chain vs. profits: you'll find that most of the excess profits in the value chain of a product are concentrated in the link that has the least competition<p>- Non-linearity of utility functions: the utility of item n of something is smaller than that of item n-1. Also, the disutility of losing $1 is smaller than 1/1000 of the disutility of losing $1,000. This explains insurance and lotteries: under a linear utility function both have a negative expected payout, but they make sense when the utility function isn't linear<p>- Bullwhip effect in supply chains: a small variation in one link of the supply chain can cause massive impacts further up or down, as those responsible for each link overreact to the variation (this also explains a lot of traffic jams)<p>- Little's law (in supply chains and a lot of other fields): number of units in a system = arrival rate * time in the system<p>I'll add more as I think about them.
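Little's law in a two-liner, for concreteness (the arrival rate and wait time are made-up numbers):<p><pre><code>arrival_rate = 30        # hypothetical: 30 orders arrive per hour
time_in_system = 0.5     # hypothetical: each order spends half an hour inside

# Little's law: L = lambda * W
units_in_system = arrival_rate * time_in_system
print(f"{units_in_system:.0f} orders in the system on average")  # 15
</code></pre>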
If the 29 minute read time is intimidating, consider this link: <a href="https://www.farnamstreetblog.com/mental-models/" rel="nofollow">https://www.farnamstreetblog.com/mental-models/</a><p>All the information, easier to read quickly.
Such cynical words, besides depriving the world of a much-needed listicle, will also get us downvoted. Please don't offer such awkward comments, which might cause people to pause and think. Now back to my Facebook feed...