This is such a strange article. The tl;dr is "strong opinions, weakly held doesn't work because people don't actually do it." Then the author states some pretty strong opinions of his own about how that's not how the brain works. I also believe he's conflating two different ideas.

The first idea: having strong opinions that aren't held strongly.

I have never read Saffo's work or even heard of any of these people, but the tactic described here is something I use frequently; it has been very successful for me as a scientist, and it is what many other scientists I know do. The core of the author's complaint is that people do not hold their ideas weakly enough.

Here's how I see it. Every opinion I have is wrong. You cannot have all the data and all the relevant facts, so whatever conclusion you reach is incomplete. The question is just "how wrong?" If it is a little wrong, no worries; if it is a lot wrong, big worries. The strength of your opinion should be proportional to the evidence behind it. Essentially we're all making Fermi estimates [0], and as time goes on they get better and better. But that doesn't mean you aren't missing some key piece of information, so you should always be open to changing your opinion. You can hold an opinion strongly and still change it as the evidence updates. If you don't update your opinion to account for new evidence, you are just a bad and stubborn scientist.

Second idea: a method for developing good ideas.

The second idea is about building up, tearing down, and repeating the process. Teams I've been on have used this method successfully to develop new theories and new products. It doesn't have to account for everything, but it provides a good baseline. This kind of model development really only belongs in the initial stages: you use it to figure out what to test and probe. Before you start a million-dollar experiment, you'd better have good ideas and explanations for why you're doing what you're doing. This is essentially creating a red team and a blue team. You can do it as a group or individually (harder, because you have to tolerate the cognitive dissonance). This adversarial process can be highly effective at producing good conclusions (hell, it's analogous to what a GAN does). The trouble starts when someone is truly stubborn about their conclusions. But the big reason this works is that by the time you submit a proposal, you've already answered basically any question anyone can ask of you, because a reviewer SHOULD be trying to find reasons to reject your proposal; nobody wants to waste money.

So here is how this works in the real world: I develop opinions based on the evidence I have. I stick to my guns, because I didn't form these opinions willy-nilly, even while carrying a lot of self-doubt (focus on the adversarial benefit; it is okay to be wrong). And, the key part: when someone presents new and compelling evidence, you update your model. It is also perfectly acceptable to determine that the new evidence is irrelevant or an outlier. I know this is hard for many people, but it isn't that hard if you accept the relativity of wrong [1] as a fundamental principle. In my undergrad experimental physics courses it was hounded into us to account for the error of our measuring tools. The next logical step is to account for the error of your most important measuring tool: you. If you accept that you aren't perfect and can't have perfect knowledge (i.e. "the map is not the territory"), this is not that hard.
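To make "the strength of your opinion should be proportional to the evidence" concrete, here is a minimal Bayesian-updating sketch in Python. The prior of 0.5, the likelihood values, and the sequence of observations are all invented for illustration; the only point is the mechanics of holding a belief strongly while still updating it:

    # Bayes' rule: posterior is proportional to prior times likelihood.
    def update(prior, p_obs_if_true, p_obs_if_false):
        """Return P(hypothesis | observation). All inputs are probabilities."""
        num = p_obs_if_true * prior
        return num / (num + p_obs_if_false * (1.0 - prior))

    belief = 0.5  # start agnostic: a weakly held opinion
    # Hypothetical observations: True = supports the hypothesis.
    for supports in [True, True, False, True]:
        if supports:
            belief = update(belief, 0.8, 0.3)  # evidence favors the hypothesis
        else:
            belief = update(belief, 0.3, 0.8)  # evidence cuts against it
        print(f"belief: {belief:.2f}")

Notice that a strongly held belief isn't abandoned at the first contrary data point; it is discounted in proportion to how diagnostic that data point is, which is exactly the "strong but updateable" stance described above.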
But then again, I'm considered weird, so I'm completely open to being wrong.

[0] https://en.wikipedia.org/wiki/Fermi_problem

[1] https://chem.tufts.edu/AnswersInScience/RelativityofWrong.htm