
Why most published scientific research is probably false [video]

62 points by xijuan over 11 years ago

16 comments

001sky over 11 years ago
Two game-theoretic strategies need to be mitigated and bred out of academia:

(1) The "security through obscurity" problem, where nobody can be bothered to verify your results because they are likely meaningless, lack broad applicability, or are not intellectually cost-effective for anyone to understand.

(2) The "lick the cookie" problem, where nobody will verify your results because it is considered professionally degrading not to be first at the table as the author of origin. [a]

In combination, these lead to something of a "tragedy of the commons," where the basic core of the discipline erodes in prestige and utility as individual contributors seek to maximize their personal productivity from the public good (the reputation of groundbreaking science).

[a] This is the childhood strategy of making anything you touch first unattractive to all those who follow.
leot over 11 years ago
The conclusions of this video depend on an idealized view (and thus a poor model) of research and science. In fact, there are many different kinds of results (associated with different levels of confidence, and almost all requiring nuanced interpretation to be properly understood) and many different kinds of researchers. The best results, across many fields, are rarely if ever single papers with a single experiment at p < 0.05. The good ones have multiple mutually confirming experiments with *much* smaller p-values. And often, for the very best results, p-value-style analyses are redundant: what would be the p-value associated with the line that Hubel and Wiesel claim triggered the firing of their cat's retinal ganglion cell [https://www.youtube.com/watch?v=IOHayh06LJ4]? Does it even matter?
jamesaguilar over 11 years ago
Probably false? As in, you would have a better chance claiming the negation of a scientific paper's conclusion than the actual conclusion? I doubt it.
snowwrestler over 11 years ago
It is a mistake of reasoning to take a meta-analysis of medical research and extend its conclusions to the rest of science.

Medical research has a number of peculiarities among the sciences, including the complexity of its subject (perhaps the highest of any discipline), the emotional reaction to the subject, the speed at which people try to turn scientific findings into products or advice, and the concomitant eagerness to trust epidemiological results without a known physical mechanism.

It's also a huge mistake to get your science news and opinion from The Economist, a magazine whose great reputation has nothing to do with its coverage of science.
gabriel34 over 11 years ago
Here is the link to the source, much friendlier to the HN folk who would rather read than watch: http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble

EDIT: To add to this, the source of the video does make an interesting observation about statistical methods being employed by scientists who don't know their pitfalls.

Another thing I take from a more careful reading (as opposed to viewing a two-minute, highly superficial video) is that the article assumes every hypothesis will be subject to only one study. If we have three studies denying a certain hypothesis and one confirming it, it's pretty easy to catch the false positive in a literature review article (routinely done by people entering academia).
anigbrowl over 11 years ago
This is just an adjunct to this: https://news.ycombinator.com/item?id=6566915 (article and HN discussion).
bnegreve over 11 years ago
The claim is:

    "most published scientific research is probably false"

and the evidence for that claim is:

    the number of false negatives "might easily be 4 in 10, or in some fields 8 in 10"

(quoted from the video)

This is rather weak.

And maybe more importantly: it assumes that researchers test random hypotheses drawn uniformly from the space of all possible hypotheses, which clearly isn't the case.

Anyway, this can't really be serious.
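The base-rate arithmetic behind claims like this can be made explicit. A minimal sketch in Python, with an illustrative prior, power, and significance level (these figures are assumptions for the example, not numbers taken from the video):

```python
# Positive predictive value of a "significant" result: the share of
# positive findings that are true, given the prior probability that
# a tested hypothesis is true, the power of the test, and alpha.
def ppv(prior, power=0.8, alpha=0.05):
    true_pos = prior * power
    false_pos = (1 - prior) * alpha
    return true_pos / (true_pos + false_pos)

# If only 10% of tested hypotheses are true, the false share of
# significant results is 1 - 0.64 = 0.36: a large minority.
print(round(1 - ppv(0.10), 2))  # 0.36
```

Note that the conclusion is driven almost entirely by the assumed prior: raising it to 50% pushes the true share above 94%.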
epistasis over 11 years ago
The vast majority of scientific papers are not single experiments with one p-value, but rather a handful of experiments to a dozen or more, only some of which may be reduced to a p-value. And in most biological research, at least two lines of evidence are required before a reviewer will accept a claim (e.g. "OK, you may have found something, now verify it with a PCR.").

So this entire setup is just kind of crap, and not representative of scientific research.

In addition, this simple point, which is quite interesting and necessary to keep in mind when interpreting multiple p-values, is widely acknowledged in the field, which is why False Discovery Rate methods started to be used as far back as the 90s. The initial point was first published as a "The sky is falling, what are all you idiot medical researchers doing?!" type of paper by Ioannidis, which is a great way to make a name for oneself. However, even his own interpretation did not hold up well, and he has stopped pushing the point. Summarizing an extensive comment on Metafilter [1]:

> Why Most Published Research Findings Are False: Problems in the Analysis
> The article published in PLoS Medicine by Ioannidis makes the dramatic claim in the title that "most published research claims are false," and has received extensive attention as a result. The article does provide a useful reminder that the probability of hypotheses depends on much more than just the p-value, a point that has been made in the medical literature for at least four decades, and in the statistical literature for decades previous. This topic has renewed importance with the advent of the massive multiple testing often seen in genomics studies. Unfortunately, while we agree that there are more false claims than many would suspect—based both on poor study design, misinterpretation of p-values, and perhaps analytic manipulation—the mathematical argument in the PLoS Medicine paper underlying the "proof" of the title's claim has a degree of circularity. As we show in detail in a separately published paper, Dr. Ioannidis utilizes a mathematical model that severely diminishes the evidential value of studies—even meta-analyses—such that none can produce more than modest evidence against the null hypothesis, and most are far weaker. This is why, in the offered "proof," the only study types that achieve a posterior probability of 50% or more (large RCTs [randomized controlled trials] and meta-analyses of RCTs) are those to which a prior probability of 50% or more is assigned. So the model employed cannot be considered a proof that most published claims are untrue, but is rather a claim that no study or combination of studies can ever provide convincing evidence.

> Assessing the Unreliability of the Medical Literature: A Response to "Why Most Published Research Findings Are False"
> A recent article in this journal (Ioannidis JP (2005) Why most published research findings are false. PLoS Med 2: e124) argued that more than half of published research findings in the medical literature are false. In this commentary, we examine the structure of that argument, and show that it has three basic components:
> 1) An assumption that the prior probability of most hypotheses explored in medical research is below 50%.
> 2) Dichotomization of p-values at the 0.05 level and introduction of a "bias" factor (produced by significance-seeking), the combination of which severely weakens the evidence provided by every design.
> 3) Use of Bayes' theorem to show that, in the face of weak evidence, hypotheses with low prior probabilities cannot have posterior probabilities over 50%.
> Thus, the claim is based on a priori assumptions that most tested hypotheses are likely to be false, and then the inferential model used makes it impossible for evidence from any study to overcome this handicap. We focus largely on step (2), explaining how the combination of dichotomization and "bias" dilutes experimental evidence, and showing how this dilution leads inevitably to the stated conclusion. We also demonstrate a fallacy in another important component of the argument: that papers in "hot" fields are more likely to produce false findings. We agree with the paper's conclusions and recommendations that many medical research findings are less definitive than readers suspect, that p-values are widely misinterpreted, that bias of various forms is widespread, that multiple approaches are needed to prevent the literature from being systematically biased, and with the need for more data on the prevalence of false claims. But calculating the unreliability of the medical research literature, in whole or in part, requires more empirical evidence and different inferential models than were used. The claim that "most research findings are false for most research designs and for most fields" must be considered as yet unproven.

[1] http://www.metafilter.com/133102/There-is-no-cost-to-getting-things-wrong#5256675
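The False Discovery Rate methods mentioned above can be made concrete. Below is a minimal sketch of the Benjamini-Hochberg procedure, the standard FDR-controlling method from the 90s; the p-values are invented for illustration:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Return indices of hypotheses rejected at FDR level q."""
    m = len(pvalues)
    # Sort p-values ascending, remembering original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * q ...
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * q:
            k = rank
    # ... and reject the k smallest p-values.
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals))  # [0, 1]
```

Note that several p-values below 0.05 survive a naive per-test cutoff but are not rejected here: that is exactly the correction for multiple testing.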
pallandt over 11 years ago
A good opportunity to mention Benford's Law: http://en.wikipedia.org/wiki/Benford%27s_law#Scientific_fraud_detection
yetanotherphd over 11 years ago
The problem with their reasoning is that it relies on a very high prior that the hypothesis is false.

In fact, an explicit analysis of the prior over the hypothesis and the power of the test would be roughly equivalent to the informal discussion that goes along with the statistical results.

The main issues, in my opinion, are that the number and nature of studies that produce null results is unknown, and that there is a bias in the literature towards positive results. While this bias incentivizes researchers to use powerful tests, it comes at a big cost.
gabriel34 over 11 years ago
IMO what is damaging to society is that there is no PR for science, so the press takes things that are not yet fully understood or verified by the academic community and publishes them as confirmed ("studies say", "studies confirm", etc.).

Groundbreaking results get much more press than their rebuttals: see, for example, the faster-than-light neutrinos or the arsenic-consuming bacteria, both of which were later dismissed in academic circles but did not enjoy the same treatment from the media.
officialjunk over 11 years ago
So... non-scientific articles about scientific research are true? Probably not.
timr over 11 years ago
For me, the really remarkable thing about this graphic is that it doesn't even support the headline: the number of false positives is a minority of total positives in the given example: 45 / 125 = 36%.
Gravityloss over 11 years ago
It's just basic Bayesian reasoning. Most positive HIV test results are false positives, even when the p-value is less than 0.01 or so.
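The HIV example above is a one-line application of Bayes' theorem. A sketch with illustrative numbers (the prevalence and error rates are assumptions for the example, not real figures for any actual test):

```python
def posterior_positive(prevalence, sensitivity, false_pos_rate):
    """P(condition | positive test), by Bayes' theorem."""
    p_positive = (prevalence * sensitivity
                  + (1 - prevalence) * false_pos_rate)
    return prevalence * sensitivity / p_positive

# 0.1% prevalence, 99% sensitivity, 1% false-positive rate:
# a positive result still means only about a 9% chance of infection,
# because false positives from the healthy 99.9% swamp true positives.
print(round(posterior_positive(0.001, 0.99, 0.01), 2))  # 0.09
```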
chasing over 11 years ago
Look. Clearly science makes progress that's "true" in the sense that it becomes useful and can be used as functional models of the way things work.

This video uses a very reductionist kind of statistics to point out that, yes, an individual piece of research making a claim might have a good chance of being wrong. Which is why science doesn't say, "Oh, well, Larry just proved that Saturn orbits Uranus, so let's never think about that again and instead move on to proving that the Sun is fully powered by excess heat radiating off of Elon Musk's brain." Science is a process that works in aggregate, using a large volume of research and scientists checking one another to smooth over this very imperfect process. Over time. Science checks itself. That's the whole point. That's why it reaches some pretty damned good conclusions about the way things work.

So.

I don't know what the point of this video is. Science is wrong? Scientists are stupid? The Economist is smart? I should believe the Republicans when they say the Earth couldn't possibly be warming because that one time it snowed in a part of Texas where it never really snows all that often?
lvs over 11 years ago
This is extremely misleading and feeds an anti-intellectual notion that scientists are just lying to everybody.

First, it perpetuates a common claim of those who don't practice any sort of science: that the output of scientific studies is an enumeration of true/false claims determined with statistical inference logic. (Science media and blogs really aren't helping on this one.)

Second, the math is just wrong: the space of hypotheses is infinite, so it's impossible to say what fraction of them are true.