How do we clean up the scientific record?

42 points by wjb3, over 1 year ago

7 comments

omgJustTest, over 1 year ago
As with many problems, this is a misalignment of incentives.

Quantity of papers is #1 and citations is a distant #2. People will outright tell you that quality is rarely the goal and that unique results face tough publication odds.

Peter Higgs recently mentioned that his department looked to fire him before his Nobel prize, given his lack of interest in hitting its metrics.

This is just an optimization problem that has favored "exploit" for many years, even when "explore" is the direct mission statement of many of these institutions.
lusus_naturae, over 1 year ago
A reproducibility index instead of an h-index would help, I think, along with an independent body that holds research institutions accountable for having a low collective reproducibility index (per field). Funding should be tied to the reproducibility index.

Edit to add: some research is so niche that only one or two groups may be able to work on it due to equipment, resources, or training/talent; in this case the independent body should be allowed to audit the results of such labs. Scientists might game the index by claiming "my competitor's results cannot be reproduced", in which case the lab the claim is made against could petition the independent body to resolve the matter.
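A minimal sketch of how such a per-field reproducibility index might be computed from a log of replication attempts. The data model, IDs, and the rule of counting only independent-lab attempts are assumptions for illustration, not part of the comment's proposal.

```python
# Hypothetical sketch: a per-field reproducibility index computed from
# recorded replication attempts. All names and the "independent labs only"
# rule are assumptions, not something specified in the comment.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ReplicationAttempt:
    paper_id: str
    field: str
    independent_lab: bool  # True if a lab other than the original ran the attempt
    reproduced: bool       # True if the attempt confirmed the original result

def reproducibility_index(attempts: list[ReplicationAttempt]) -> dict[str, float]:
    """Fraction of independent replication attempts that succeeded, per field."""
    successes: defaultdict[str, int] = defaultdict(int)
    totals: defaultdict[str, int] = defaultdict(int)
    for a in attempts:
        if not a.independent_lab:
            continue  # self-replications don't count toward the index
        totals[a.field] += 1
        successes[a.field] += int(a.reproduced)
    return {field: successes[field] / totals[field] for field in totals}

attempts = [
    ReplicationAttempt("paper-1", "psychology", True, False),
    ReplicationAttempt("paper-2", "psychology", True, True),
    ReplicationAttempt("paper-3", "physics", True, True),
]
print(reproducibility_index(attempts))  # {'psychology': 0.5, 'physics': 1.0}
```

Funding weights, audit triggers for single-lab fields, and dispute handling would all sit on top of a score like this.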
adolph, over 1 year ago
A proposal of Balaji Srinivasan's:

"...able to interrogate the information supply chain . . . . You see it in the media, and that comes from a government study, that's based on an academic study, that's based on a data set. . . . have an actual academic supply chain where you can trace the etiology, the origin of a fact or an assertion, all the way through the literature. And there's a famous paper that actually did track something like this all the way through the literature and found it was just something that got repeated, some medical nostrum that actually didn't really have that base of evidentiary support, where when you track it all the way back, you couldn't find the brought table."

https://podclips.com/ct/zgkzdp
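A minimal sketch of what tracing such an information supply chain could look like: walk a claim's citation links backwards until you reach items that cite no further support, which is where the chain bottoms out. The graph shape and IDs are invented for illustration; this is not from the podcast or the comment.

```python
# Hypothetical sketch: trace a claim back through a citation graph to the
# places where the chain of cited support ends. IDs and structure are invented.

def trace_claim_origins(claim_sources: dict[str, list[str]], start: str) -> set[str]:
    """Return the items at which the chain of cited support for a claim stops.

    claim_sources maps an item to the sources it cites for the claim; an item
    with no entry (or an empty list) asserts the claim without citing support.
    """
    origins: set[str] = set()
    seen: set[str] = set()
    stack = [start]
    while stack:
        item = stack.pop()
        if item in seen:
            continue
        seen.add(item)
        cited = claim_sources.get(item, [])
        if not cited:
            origins.add(item)  # nothing further is cited: a root of the chain
        else:
            stack.extend(cited)
    return origins

# A media article cites a government report, which cites two papers; one paper
# rests on a data set, the other repeats the claim without citing anything.
claim_sources = {
    "media_article": ["gov_report"],
    "gov_report": ["paper_A", "paper_B"],
    "paper_A": ["dataset_1"],
    "paper_B": [],
}
print(trace_claim_origins(claim_sources, "media_article"))
# {'dataset_1', 'paper_B'}
```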
kurthr, over 1 year ago
It's interesting that this came out the same week that Derek Lowe was discussing the problem with some of the most easily discovered scientific frauds (faked crystal structures).

https://www.science.org/content/blog-post/faked-crystals-and-faked-data

Since many of the journals (much less the authors) don't seem to have an incentive to correct the record, perhaps journals need to be banned if they can't police obvious frauds.
refurb, over 1 year ago
We don't need to clean up the scientific record; it's self-cleaning, at least to the people who rely on it.

When I was working in R&D and a paper was published with an unexpected result, the automatic response was to look at the lab that published it. Reputation matters: some labs were rock solid, and others, shall we say, played fast and loose with the data.

And even if it was published by a reputable lab, it was assumed to be an anomaly, or a measurement error, or some other problem until other reputable labs reproduced it.

So I guess the solution is for scientists to stand fast in their skepticism. My PI told his students, "Your job is to punch holes in other people's work." That was always encouraged. Question everything, and if the scientist hasn't shown they already checked that, assume the results are bullshit.

Only after rigorous examination and repeated validation should we believe any science. Not because scientists are making things up or lying (though some are), but because even the most honest scientist sometimes misses something or makes mistakes.
beefman, over 1 year ago
It's almost like these folks haven't heard of a wiki. The term doesn't appear in the primary reference on possible solutions. [9]

> classical model tested over millennia

Peer review as standard practice is less than a century old. Scientific journals are less than 400 years old.

[9] https://link.springer.com/article/10.1007/s10838-022-09607-4
alexfromapex, over 1 year ago
At this point we need AI to help identify flawed research, and flagged research should carry a notice that can only be removed once the result has been reproduced.