科技回声 (Tech Echo)

A technology-news platform built with Next.js, providing global tech news and discussion.


P-hacked hypotheses are deceivingly robust (2016)

41 points by soundsop about 5 years ago

5 comments

eganist about 5 years ago

The words "deceivingly" and "deceptively" have the same problem: there's a roughly 50/50 split in polar-opposite interpretations. https://grammarist.com/usage/deceptively/

In this case, does "deceivingly robust" mean they look robust but are fragile? Or does it instead mean they look fragile but are robust?

This isn't a criticism of you, soundsop. Rather, it's intended to keep pointing at how difficult it can be to concisely deliver a message.

---

Edit: it sounds like the correct interpretation of the title is *"P-hacked hypotheses appear more robust than they are."*
Comment #22389793 not loaded
Comment #22387905 not loaded
Comment #22387938 not loaded
Comment #22393611 not loaded
bsder about 5 years ago

Basically, if you take a p-hacked hypothesis and attempt to use it *predictively*, it falls apart.

That's kinda ... useful, actually.

It feels like this is sort of the same issue as overfitting in ML. Attempts to use ML results predictively often fail in hilarious ways.

Comment #22388646 not loaded
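bsder's analogy can be illustrated with a quick simulation (my own sketch, not from the thread; all names and sample sizes are arbitrary): select whichever of many pure-noise "predictors" best fits one sample, then try to use it predictively on fresh data. Like an overfit model, its apparent effect does not carry over.

```python
# Sketch (not from the thread): "p-hack" by keeping the best of many
# noise predictors in-sample, then test it against fresh noise data.
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

n = 100
train_y = [random.gauss(0, 1) for _ in range(n)]  # outcome used for "hacking"
test_y = [random.gauss(0, 1) for _ in range(n)]   # fresh outcome for prediction
candidates = [[random.gauss(0, 1) for _ in range(n)] for _ in range(200)]

# "P-hack": keep whichever noise column correlates best in-sample.
best = max(candidates, key=lambda c: abs(corr(train_y, c)))

print(abs(corr(train_y, best)))  # in-sample: clears the p < .05 bar (|r| > ~0.197 at n = 100)
print(abs(corr(test_y, best)))   # used predictively: typically back down to noise level
```

With 200 candidates, the in-sample winner is all but guaranteed to look "significant", while its correlation with an independent outcome is just another draw from the null distribution.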
ncmncm about 5 years ago

P-hacking is a fine way to winnow through ideas to see what might be interesting to follow up on. There will certainly be false positives, but the real positives will usually be in there too, if there are any. Determining which is which takes more work, but you need guidance on where to apply that work.

To insist that p-hacking, by itself, implies pseudo-science is fetishism. There is no substitute for understanding what you are doing and why.
bjterry about 5 years ago

> Direct replications, testing the same prediction in new studies, are often not feasible with observational data. In experimental psychology it is common to instead run conceptual replications, examining new hypotheses based on the same underlying theory. We should do more of this in non-experimental work. One big advantage is that with rich data sets we can often run conceptual replications on the same data.

I think actually relying on "conceptual replications" in practice is impossible. If the theory is only coincidentally supported by the data, that makes the replication more likely to reach p < .05 coincidentally, in a way that is very difficult to analyze.

The author mentions that problem, but doesn't mention a bigger issue: if you think people are unlikely to publish replications using novel data sets, just imagine how impossibly unlikely it is for people to publish failed replications with the original data set! If you read a "replicated" finding of the same theory using the same data set, you can safely ignore it, because 19 other people probably tried other related "replications" and didn't get them to work.
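bjterry's "19 other people" intuition is easy to make concrete (my own back-of-the-envelope, not from the comment): with 20 independent null analyses each run at α = .05, a chance "replication" somewhere in the group is more likely than not.

```python
# If 20 researchers each run one null test at alpha = .05 on the same
# data set, the chance that at least one gets a spurious "replication":
alpha, k = 0.05, 20
p_at_least_one = 1 - (1 - alpha) ** k
print(round(p_at_least_one, 3))  # ~0.642
```

So even before publication bias, a same-dataset "replication" of a null theory succeeds by luck alone about two times in three.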
lisper about 5 years ago

This problem is going to get more severe as available datasets get bigger and bigger. The more data you have to mine, the more likely you are to find something that looks like a signal but isn't.

Comment #22389942 not loaded
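lisper's point can be sketched with a simulation (my illustration; the column counts are arbitrary): screen 1,000 pure-noise variables against a random outcome and roughly 5% of them will "pass" p < .05 anyway, so the wider you mine, the more plausible-looking false signals you are guaranteed to find.

```python
# Sketch: every "predictor" below is pure noise, yet dozens clear the
# conventional significance bar just by chance.
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

n_rows, n_cols = 100, 1000
outcome = [random.gauss(0, 1) for _ in range(n_rows)]

# |r| > ~0.197 corresponds to p < .05 for n = 100 (two-sided t test).
hits = sum(
    1
    for _ in range(n_cols)
    if abs(corr(outcome, [random.gauss(0, 1) for _ in range(n_rows)])) > 0.197
)
print(hits)  # roughly 50 of the 1000 noise columns look "significant"
```

The false-positive count scales linearly with the number of variables screened, which is why ever-larger datasets make this failure mode worse rather than better.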