I only half-way agree.<p>Science has a self-correcting mechanism which, when it works correctly, makes it a strong-link problem. The problem is that the self-correcting mechanism is fragile. Because it is fragile, keeping the self-correcting mechanism going is a weak-link problem.<p>Thus, for example, the fact that psychology never tried to replicate studies and trusted other studies meant that the self-correcting mechanism was broken. Making a bunch of psychologists try to replicate, and then discover that a bunch of what they thought was science really wasn't, REALLY MATTERED.<p>But peer review to stop bad papers from being published is not part of the self-correcting mechanism. That intervention doesn't matter.<p>Fraud exists on a boundary. In the real world, fraudulent research helped keep billions flowing into investigating a bad theory about Alzheimer's. And helped the fraudulent researcher get promoted - all the way to President of Stanford University! At Stanford he has accelerated the destruction of both campus culture and free speech standards, thanks to the same attitudes that set him along the path to being a fraudster in the first place. (Sadly, he's still there. And the cultural changes seem likely to ruin Stanford as a key factor strengthening startup culture in Silicon Valley.) That's pretty strong evidence that fraud should be taken seriously, and isn't just an issue around the edges.<p>But most fraud doesn't come with such consequences. And if the self-correcting mechanisms of science are working well, fraud gets found and corrected without long-term damage.<p>So it is important to make sure that there are feedback loops to catch and eliminate fraud and fraudsters. We should actively protect the strength and value of those feedback loops. Because when they work, they police science and help make it into a strong-link problem.
I couldn't help noticing the parallels between weak- and strong-link problems and sexual evolution. Sexual dimorphism has tended, at least in tournament species [1], to evolve towards weak-link females, who make up the risk-averse and "gatekeeping" (rejecting unfit males) reproductive basis of the population, and strong-link males with much higher variance in phenotype and mating success.<p>We humans don't have that much sexual dimorphism compared to many other species, but it has been suggested that human males exhibit higher variability, most prominently in cognitive abilities [2], i.e. more males with either high or low IQ. As always, as soon as something is related to human cognition, take it with a grain of salt, as our minds are very malleable. And, as always, there is a lot of controversy attached to the topic.<p>[1] <a href="https://en.wikipedia.org/wiki/Display_(zoology)#Tournament_species" rel="nofollow">https://en.wikipedia.org/wiki/Display_(zoology)#Tournament_s...</a>
[2] <a href="https://en.wikipedia.org/wiki/Variability_hypothesis" rel="nofollow">https://en.wikipedia.org/wiki/Variability_hypothesis</a>
> <i>These policies, like all forms of gatekeeping, are potentially terrific solutions for weak-link problems because they can stamp out the worst research. But they’re terrible solutions for strong-link problems because they can stamp out the best research, too. Reviewers are less likely to greenlight papers and grants if they’re novel, risky, or interdisciplinary. When you’re trying to solve a strong-link problem, this is like swallowing a big lump of kryptonite.</i><p>Scientists can publish whatever we like. Good research, bad research, whatever. Just open a blog like Tao did: <a href="https://terrytao.wordpress.com/" rel="nofollow">https://terrytao.wordpress.com/</a> There are no "gatekeepers".<p>One problem is how a committee can evaluate that. The bad solution we reached is to just count the number of papers in serious peer-reviewed journals. But now there are a lot of predatory journals where you can publish whatever you like if you pay, and other kinds of bad journals. So there are more complicated rules to define what is "serious" and what is a "journal".<p>I don't understand what the author proposes. If all journals publish whatever people submit, they just become a clone of WordPress, but we already have WordPress and a few alternatives.<p>Another problem is how non-scientists can consume that. Which groundbreaking results should be copied to newspapers? The problem is that journalists in the science section have little scientific training, so it's difficult for them to evaluate every single WordPress post and decide which are good. So they use papers in serious peer-reviewed journals as a good approximation, but many times they just copy the bad press release of the university.<p>Another problem is which results should get applied to public policies. We expect that the person in charge is an expert and has expert advisors in each area. But it would be too much work for them to be reading WordPress all day to find promising results, and to evaluate all of them.
The same applies when people in other areas can just copy-and-paste a result to use it instead of measuring it again.<p>My guess is that in that case a network of reliable curators will develop, something like "awesome-volcanoes" or "awesome-cetaceans". Each one is too much work for a single person, so they may ask for help from trusted friends. They may even give advice about style and clarity before the blog post is included in the awesome list. And now we have reinvented the peer-reviewed journal system, and the only difference is that the papers are never printed on dead trees.
I like the characterization and the premise, but I completely disagree with the conclusion. To an academic, science may feel like a strong-link problem, but from a layperson's POV it's the opposite. When bad science is translated into the real world, it turns into faulty engineering, bad medical interventions, terrible socioeconomic policies, etc. The difficulty is that once bad science is out, it can take years or decades before it gets corrected. This is because science is thought to be our best way to get to an objective truth. This is why bad science, when it goes out into the real world, turns into "objective truth". Think about the "vaccines are bad for you" crap and the damage that one weak link that got out caused. Several of our modern problems are due to weak science making it out into the world and getting recognized as "objective truth" because it came via the scientific method.<p>The OP is right that in the long run science is self-correcting by its very nature. Over those timescales, half a generation or so, only the good stuff stands the test of time. But over short timescales, bad science making its way out into the world can wreak havoc.
An interesting idea, very well argued. I think the distinction between weak-link and strong-link problems is an important one. But there are two aspects where I’m not convinced:<p>1. Some parts of science actually are weak-link problems. For example, if you base your study on the conclusions of five published results and just one of them turns out to be fraudulent, then you’re screwed.<p>2. The reason that we often end up with weak-link culture and policy when strong-link ones would be better is not a misdiagnosis of the problem; it’s that weak-link culture and policies benefit the great majority. Most people are (by definition) mediocre, and they benefit greatly from policies that obstruct the positive outliers, because they can’t compete with them without such policies. It’s sad, but it’s part of human nature to try and set rules that benefit oneself.
Time, money, and attention are finite, yet output keeps growing. Hence the need for a lot of filtering. I wish there were an alternative, but I don't see any. Do journals reject quality work? Are geniuses passed over for promotion? Sure, but just giving some examples like Peter Higgs is hindsight bias.<p><i>We’ve got loads more scientists, and they publish way more papers. And yet science is less disruptive than ever, scientific productivity has been falling for decades, and scientists rate the discoveries of decades ago as worthier than the discoveries of today. (Reminder, if you want to blame this on ideas getting harder to find, I will fight you.)</i><p>This can probably also be explained by the low-hanging fruit having been picked.
Aren’t scientific papers both a weak- and strong-link problem? If you have enough weak links, you can no longer trust papers at all, so you need to scrutinise every paper. Which is like taking a swab and a vial to your burger chain, running it in your lab, then ordering the wagyu special.<p>I think in science people are a strong-link problem, while papers are more likely a weak-link problem. The gatekeeping of review, if the reviews are done correctly, should make the papers better. A forcing function for no BS. Like a code review?
I think it is ridiculous to make such a sweeping statement about all of science when there is so much involved in each sub-field of science that brings its own problems to productivity and output.<p>Take, for instance, the fact that nobody in Europe seems to want to do a PhD anymore and nobody in America wants to do a post-doc (based on my experience speaking to colleagues looking for both things in the respective places). That means very often you have to settle for what you get, and what you get is not necessarily the best people.<p>With regards to things being less disruptive, there are two issues here. One, the big papers that get published in, e.g., Nature usually involve extremely difficult techniques; something like low-temperature magnetic force microscopy that only a few research groups will have. This gatekeeps replication and further progression behind these few groups with these expensive pieces of kit, so most people won't even care that it's happened. The other point I'd bring up is that a lot of research at the moment is not directly relevant to industry (so patents aren't quite as useful). Researchers are making absolutely insane devices to chase higher-impact papers and patents that are so far ahead of where current industrial research is and wants to be that industry is not going to adopt them easily at all.<p>All this is to say that more funding is great (always), but I don't buy what the author is saying about a strong-link problem at all. To me this just puts the idea of Nature = good science into people's heads and encourages bad research. The process of good science is incremental steps towards a shared goal, with lots of reports along the way that tip off other people and help build a bigger picture.
I think this is true internally; bad papers more often than not get ignored within a scientific community. Even if fake findings cause other researchers to go down a rabbit hole pursuing a bad direction (as has apparently happened in biology and psychology multiple times, and probably in every field), this doesn't matter all that much for society.<p>But when scientific research artifacts are used by people outside a scientific community, it is unfortunately a weak-link problem. If there's one bad study that tells people what they want to hear, they will seek it out and find it.
Research funding and paper acceptance are not the same thing. We can do more risky research without accepting more low-quality papers. They are very different kinds of "bad".
This article is very insightful and applicable in many areas. For example, this paragraph explains why hiring is broken:<p>> Whether we realize it or not, we’re always making calls like this. Whenever we demand certificates, credentials, inspections, professionalism, standards, and regulations, we are saying: “this is a weak-link problem; we must prevent the bad!”<p>Small companies have an advantage when it comes to hiring, since they often don't have a very formal process and can go about it strong-link style. Especially if the company is a startup pursuing product-market fit, going after people with outstanding skills can be very rewarding.<p>On the other hand, once the company grows there will be an HR department with a structured process. It operates in the weak-link style, which is aligned with the incentive structure, given that the HR personnel will be remembered for bad hires, but probably won't get extra points for finding rough diamonds.
Looks like the observation is that "science is less disruptive than ever", and the proposed solution is to fund more random/weird research projects. But the former happens because all the low-hanging, high-impact/low-cost research has been done already. You can't make a major discovery anymore using just your pulse and a couple of balls. So the historical analogies do not apply. The proposals to "ignore the worst" and "don't gatekeep" will not work because the set of all possible research proposals is infinite, but the money is not.
Science is a method, nothing more; it's not truth, only agreed-upon consensus.<p>The problem is that consensus can often be purchased. History is littered with so-called scientific consensus bought for a pretty penny.
I don't get the strong/weak link analogy.<p>Supposedly, you have a chain with links. The weakest link breaks first.<p>Now what kind of chain would have its strongest link break first?<p>Anyway, we might as well call these min/max problems.
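The min/max framing can be made concrete with a toy sketch (the quality scores below are hypothetical, purely for illustration): in a weak-link problem the overall outcome tracks the minimum element, in a strong-link problem the maximum.

```python
# Toy illustration of weak-link vs strong-link aggregation.
# "scores" are made-up quality values for items in some field.
scores = [0.2, 0.5, 0.9, 0.95]

weak_link_value = min(scores)    # outcome set by the worst item (e.g. food safety)
strong_link_value = max(scores)  # outcome set by the best item (e.g. music, research)

# Removing the worst item improves a weak-link outcome a lot...
print(min(scores[1:]))  # 0.5 instead of 0.2
# ...but barely changes a strong-link one.
print(max(scores[1:]))  # still 0.95
```

Under this framing, gatekeeping raises the minimum, while funding more bets raises the expected maximum, which is why the two kinds of problems call for opposite policies.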
Maybe a tangent, but somehow typical of this article:<p>>Imagine if you could only upload a song to Spotify after you got a degree in musicology, or memorized all the sharps in the key of A-sharp minor, or demonstrated competence with the oboe.<p>Considering the state of music IPR, we are possibly nearing that point.
By ignoring the economics of science, the article overlooks the point that the amount of good stuff and the amount of bad stuff are correlated. If we neglect quality control, we end up spending more money on worse (or average) stuff and less on good stuff.
Interesting to frame problems as strong/weak link; it's kinda useful for thinking about certain things.<p>- - -<p>Citing a Nobel laureate's take on "if I'm working today I won't..." is not too convincing:<p><pre><code> >In my day you could get a faculty job with 0 postdoc papers
</code></pre>
Can't imagine how that would work today. The world changes; why would science, as a social activity that has costs, be immune to that change?<p>Too bad that while bad music and bad novels only take up some shelf space and make searching a little harder, science projects cost $$$. So the real question is how to distribute the money. The current approach is not that good and gatekeeps some good stuff out, but what's the alternative?
Science is also a weak-link problem. Wakefield's paper on MMR vaccine and autism is a demonstration of the damage low-end outliers in science can do.
Good read, though the food bits are trigger-warning worthy.<p>Science is about describing reality accurately, often an aspect of reality not easily seen with the naked eye because it plays out over a long time, or on a scale too large or too tiny to view directly. It's about testing mental models to see what fits the facts, and people want to be cautious about it because betting on an inaccurate model can have serious real-world consequences.<p>Those consequences often cannot be undone.<p>I don't think it works to view it as one of these two models. I think science needs to both protect against quackery and also allow for innovation, even though those goals somewhat conflict.