There is at least one thing wrong with this. It is an essay about a paper built on simulation-based scenarios from medical research, which then tries to generalize to "research" as a whole while glossing over how narrow the support for that claim is. I think the core point is true, and it should make us more cautious about deciding things based on single studies, but things are different in other fields.<p>Also, this is called research for a reason. You don't know the answer beforehand. You have limitations in the technology and tools you use. You might miss something, or not have access to more information that could change the outcome. That is why research is a process. Unfortunately, popular science books talk only about discoveries and results treated as fact, and usually say little about the history of how we got there. I would suggest a great book called "How Experiments End"[1]; enjoy going into the details of how scientific consensus is built for many experiments in different fields (mostly physics).<p>[1] <a href="https://press.uchicago.edu/ucp/books/book/chicago/H/bo5969426.html" rel="nofollow">https://press.uchicago.edu/ucp/books/book/chicago/H/bo596942...</a>
This paper, almost 20 years old, has plenty of follow-up work showing that the claims in the original paper aren’t true.<p>One simple angle: Ioannidis simply assumes some parameter values to show that things could be bad. Later empirical work measuring those parameters found Ioannidis off by orders of magnitude.<p>One example: <a href="https://arxiv.org/abs/1301.3718" rel="nofollow">https://arxiv.org/abs/1301.3718</a><p>There are plenty of other published papers showing other holes in the claims.<p>Google Scholar papers citing this: <a href="https://scholar.google.com/scholar?cites=15681017780418799273&as_sdt=800005&sciodt=0,15&hl=en" rel="nofollow">https://scholar.google.com/scholar?cites=1568101778041879927...</a>
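For anyone who hasn't read the paper, the parameter dispute above is easy to make concrete. Ioannidis's core formula gives the post-study probability that a claimed finding is true (the PPV) in terms of the prior odds R that the tested relationship is real, the significance threshold alpha, and the power 1 - beta; the argument hinges entirely on which values of R and power you plug in. A minimal sketch in Python (the parameter rows below are illustrative assumptions, not measured values):<p><pre><code># Positive predictive value from Ioannidis (2005), no-bias case:
#   PPV = (1 - beta) * R / (R - beta * R + alpha)
# R     : prior odds that the tested relationship is real
# alpha : type I error rate (significance threshold)
# power : 1 - beta, the chance of detecting a real effect

def ppv(R: float, alpha: float = 0.05, power: float = 0.8) -> float:
    beta = 1.0 - power
    return power * R / (R - beta * R + alpha)

# Illustrative parameter choices only; the whole debate is about which
# of these rows actually describes typical research.
for R, power in [(1.0, 0.8), (0.25, 0.8), (0.02, 0.8), (0.02, 0.2)]:
    print(f"R={R:<5} power={power:<4} -> PPV={ppv(R, power=power):.2f}")
</code></pre><p>With generous priors and decent power the PPV stays well above 0.5; with long-shot hypotheses and underpowered studies it drops well below it, which is exactly why the empirical estimates of those parameters matter.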
> In this framework, a research finding is less likely to be true [...] where there is greater flexibility in designs, definitions, outcomes, and analytical modes<p>It's worth noting, though, that in many research fields, teasing out the correct hypotheses and all of the affecting factors is difficult. Sometimes it takes quite a few studies before the right definitions are even found, and those definitions are a prerequisite for a useful hypothesis. So one cannot ignore the usefulness of approximation in scientific experiments, not only toward the truth, but toward the right questions to ask.<p>I'm not saying that all biases are inherent in the study of the sciences, but the cited paper seems to overlook that a lot of science is still groping around in the dark, and expecting well-defined studies every time is simply unreasonable.
Related. Others?<p><i>Why most published research findings are false (2005)</i> - <a href="https://news.ycombinator.com/item?id=37520930">https://news.ycombinator.com/item?id=37520930</a> - Sept 2023 (2 comments)<p><i>Why most published research findings are false (2005)</i> - <a href="https://news.ycombinator.com/item?id=33265439">https://news.ycombinator.com/item?id=33265439</a> - Oct 2022 (80 comments)<p><i>Why Most Published Research Findings Are False (2005)</i> - <a href="https://news.ycombinator.com/item?id=18106679">https://news.ycombinator.com/item?id=18106679</a> - Sept 2018 (40 comments)<p><i>Why Most Published Research Findings Are False</i> - <a href="https://news.ycombinator.com/item?id=8340405">https://news.ycombinator.com/item?id=8340405</a> - Sept 2014 (2 comments)<p><i>Why Most Published Research Findings Are False</i> - <a href="https://news.ycombinator.com/item?id=1825007">https://news.ycombinator.com/item?id=1825007</a> - Oct 2010 (40 comments)<p><i>Why Most Published Research Findings Are False (2005)</i> - <a href="https://news.ycombinator.com/item?id=833879">https://news.ycombinator.com/item?id=833879</a> - Sept 2009 (2 comments)
As I’ve transitioned into more exploratory, research-oriented roles in my career, I have started to understand science fraudsters like Jan Hendrik Schön.<p>When you’ve spent an entire week working on a test or experiment that you <i>know</i> should work, at least if you give it enough time, but it isn’t working for whatever reason, it can be extremely tempting to invent the numbers you think it should produce, especially if your employer is pressuring you for a result. Now, obviously, the reason we run these tests is precisely because we <i>don’t</i> actually know what the results will be, but that’s sometimes more obvious in hindsight.<p>Obviously it’s wrong, and I haven’t done it, but I would be lying if I said the thought hadn’t crossed my mind.
Something that continues to puzzle me: how do molecular biologists manage to come up with such mind-bogglingly complex diagrams of metabolic pathways in the midst of a replication crisis? Is our understanding of biology just a giant house of cards, or is there something about the topic that allows for more robust investigation?
This kind of report always raises the question for me of what the existing system's goals are. I think people assume that "new, reliable knowledge" is among the goals, but I don't see that the incentives align toward that goal, so I don't know that that's actually among them.<p>Does the world really want/need such a system? (The answer seems obvious to me, but not above question.) If so, how could it be designed? What incentives would it need? What conflicting interests would need to be disincentivized?<p>I think it's been pretty evident for a long time that the "peer-reviewed publications system" doesn't produce the results people think it should. I just don't hear anybody really thinking through the systems involved to try to invent one that would.
One project tried to replicate 100 psychology studies, and only 36% of the replications attained statistical significance.<p><a href="https://osf.io/ezcuj/wiki/home/" rel="nofollow">https://osf.io/ezcuj/wiki/home/</a>
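As a rough way to think about what a number like 36% implies, here is a back-of-envelope sketch (every input below is an assumption for illustration, not an estimate from that project): if some fraction of the original findings reflect real effects, replications run at a given power will confirm roughly that fraction times the power, plus a small contribution from false positives.<p><pre><code># Back-of-envelope: expected replication rate when a fraction `p_true`
# of published findings reflect real effects, replications have power
# `rep_power` against those effects, and false findings "replicate"
# only at the false-positive rate `alpha`.
# All inputs are illustrative assumptions, not estimates from the
# Reproducibility Project.

def expected_replication_rate(p_true: float, rep_power: float = 0.8,
                              alpha: float = 0.05) -> float:
    return p_true * rep_power + (1.0 - p_true) * alpha

for p_true in (0.3, 0.4, 0.5, 0.7):
    rate = expected_replication_rate(p_true)
    print(f"fraction of findings that are real = {p_true:.1f} "
          f"-> expected replication rate ~ {rate:.2f}")
</code></pre><p>In practice the replication power against the true (often smaller) effect sizes is itself unknown, so this frames the question rather than answering it.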
Please note the PubPeer comments discussing follow-up research which appears to show that about 15% of findings are wrong, not the 5% anticipated.<p><a href="https://pubpeer.com/publications/14B6D332F814462D2673B6E9EF9AF9" rel="nofollow">https://pubpeer.com/publications/14B6D332F814462D2673B6E9EF9...</a>
It’s a matter of incentives. Everyone who wants a PhD has to publish, and before that they need to produce findings that align with the values of their professors. These bad incentives, combined with rampant statistical errors, lead to bad findings. We need to stop putting “studies” on a pedestal.
I wonder if science could benefit from publishing under pseudonyms the way software has. If the work is any good, people will use it; reputations would be made by the quality of contributions alone; it would make fraud expensive and mostly not worth it; etc.
How broad a range is this result supposed to cover? It seems mostly applicable to areas where the data are too close to the noise threshold. Some phenomena are like that, and some are not.<p><i>"If your experiment needs statistics, you ought to have done a better experiment"</i> - Rutherford
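To make "too close to the noise threshold" concrete: for a simple two-group comparison, the sample size needed per group grows roughly with the square of the noise-to-effect ratio. A quick sketch (the ratios below are made up for illustration):<p><pre><code># Approximate per-group sample size for a two-sample comparison:
#   n ~ 2 * (z_{1-alpha/2} + z_{1-beta})^2 * (sigma / delta)^2
# where delta is the effect size and sigma the noise standard deviation.
from scipy.stats import norm

def n_per_group(effect_over_noise: float, alpha: float = 0.05,
                power: float = 0.8) -> float:
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z / effect_over_noise) ** 2

# Illustrative ratios: a "Rutherford-sized" effect vs. one buried in noise.
for ratio in (2.0, 1.0, 0.5, 0.2, 0.1):
    print(f"effect/noise = {ratio:>4} -> ~{n_per_group(ratio):,.0f} per group")
</code></pre><p>A handful of subjects suffices when the effect dwarfs the noise; once the effect is a tenth of the noise you need thousands, and that is the regime where the paper's concerns bite hardest.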
This published research is false.<p>All published research will turn out to be false.<p>The problem is ill-posed: can we establish once and for all that something is true? Almost all of history has had this ambition, yet every day we find that something we believed to be true wasn't. The data aren't encouraging.
I've implemented several things from computer science papers in my career now, mostly related to database stuff. They are mostly terribly wrong or show the exact OPPOSITE of what they claim in the paper. It's so frustrating. Occasionally they even offer the code used to write the paper, and it is missing entire features they claim are integral to it functioning properly, to the point that I wonder how they even came up with the results they report.<p>My favorite example was a huge paper that was almost entirely mathematics. It wasn't until you implemented everything that you realized it just didn't make any sense. Then, reading between the lines, you could even see their acknowledgement of that fact in the conclusion. Clever dude.<p>Anyway, I have very little faith in academic papers, at least when it comes to computer science. Of all the fields out there, this one is just code. It isn't hard to write and verify what you purport (it usually takes less than a week to write the code), so I have no idea what the peer reviewers actually do. As a peer in the industry, I would have rejected so many papers by this point.<p>And don't even get me started on sending the (now professor) authors questions via email to see if I just implemented it wrong, or whatever, and never getting a fucking reply.
It has been said that "Publish or Perish" would make a good tombstone epitaph for a lot of modern science.<p>I speak to a lot of people in various science fields, and generally they are some of the heaviest drinkers I know, simply because of the system they have been forced into. They want to do good but are railroaded into this nonsense for fear of losing their livelihood.<p>Like those who are trying to advance our treatment of mental health but have ended up almost exclusively in the biochemical space, because that is where the money is, even though it is not the only path. It is a real shame.<p>Other heavy drinkers are the ecologists and climatologists, for good reason. They can see the road ahead and it is bleak. They hope they are wrong.
I only read the abstract: “Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.”<p>True vs. false seems like a very crude metric, no?<p>Perhaps this paper’s research claim is also false.
From my experience, my main criticism of research in the field of computer vision is that most of it is 'meh'. In a university that focused on security research, I saw mountains of research into detection/recognition, yet most of it offered no more than slightly different ways of doing the same old thing.<p>I also saw: the head of a design school insisting that they and their spouse be credited on all student and staff movies; the same person insisting that massive amounts of school cash be spent promoting their solo exhibition, which no one other than students attended; a chair of research who insisted on an authorship role on all published output in the school; labs being instituted and teaching hires brought in to support a senior admin's research interests (despite them having no published output in that area); research ideas stolen from undergrad students and given to PhD students... I could go on all day.<p>If anyone is interested in how things got like this, you might start with Margaret Thatcher. She was the first to insist that funding of universities be tied to research. Given the state of British research in those days it was a reasonable decision, but it produced a climate where quantity is valued over quality and true 'impact'.
I think it's unpopular to mention here, but John Ioannidis took a really weird turn in his career and published some atrociously non-rigorous Covid research that falls squarely in the crosshairs of "why...research findings are false".
Imagine if tech billionaires, instead of building dickships and buying single-family homes, decided to truly invest in humanity by realigning incentives in science.
On a livestream the other day, Stephen Wolfram said he stopped publishing through academic journals in the 1980s because he found it far more efficient to just put stuff online. (And his blog is incredible: <a href="https://writings.stephenwolfram.com/all-by-date/" rel="nofollow">https://writings.stephenwolfram.com/all-by-date/</a>)<p>A genius who figured out that academic publishing had gone to shit decades ahead of everyone else.<p>P.S. We built the future of academic publishing, and it's an order of magnitude better than anything else out there.
This must be a satire piece.<p>It talks about things like power, reproducibility, etc., which is fine. There is a minority of papers with mathematical errors. What it fails to examine is what "false" means. A study's results may be valid for what was actually studied. Future studies may produce new and different findings. You may have studies that seem to conflict with each other due to differences in definitions (e.g. what constitutes a "child": 12 or 24 years old?) or the nuance in perspective applied to the policies they are investigating (e.g. the aggregate vs. adjusted gender wage gap).<p>It's about how you use them: "Research <i>suggests</i>..." or "We recommend further studies of larger size", etc. It's a tautology that if you misapply them, they will be false a majority of the time.
This is a classic and important paper in the field of metascience. There are other great papers predating this one, but this one is widely known.<p>Unfortunately, the author, John Ioannidis, turned out to be a Covid conspiracy theorist, which has significantly affected his reputation as an impartial seeker of truth in publication.