Replace peer review with “peer replication” (2021)

583 points, by dongping, almost 2 years ago

61 comments

fabian2k, almost 2 years ago

I don't see how this could ever work, and non-scientists seem to often dramatically underestimate the amount of work it would be to replicate every published paper.

This of course depends a lot on the specific field, but it can easily be months of effort to replicate a paper. You save some time compared to the original as you don't have to repeat the dead ends, and you might receive some samples and can skip parts of the preparation that way. But properly replicating a paper will still be a lot of effort, especially when there are any issues and it doesn't work on the first try. Then you have to troubleshoot your experiments and make sure that no mistakes were made. That can add a lot of time to the process.

This is also all work that doesn't benefit the scientists replicating the paper. It only costs them money and time.

If someone cares enough about the work to build on it, they will replicate it anyway. And in that case they have a good incentive to spend the effort. If that works, this will indirectly support the original paper even if the following papers don't specifically replicate the original results. This part is much more problematic if the follow-up experiments fail, though, since those results will likely remain entirely unpublished. But the solution here unfortunately isn't as simple as just publishing negative results; it takes far more work to create a solid negative result than to just try the experiments and abandon them if they're not promising.

matthewdgreen, almost 2 years ago

The purpose of science publications is to share new results with other scientists, so others can build on or verify the correctness of the work. There has always been an element of "receiving credit" to this, but the communication aspect is what actually matters *from the perspective of maximizing scientific progress.*

In the distant past, publication was an informal process that mostly involved mailing around letters, or for a major result, self-publishing a book. Eventually publishers began to devise formal journals for this purpose, and some of those journals began to receive more submissions than it was feasible to publish or verify just by reputation. Some of the more popular journals hit upon the idea of applying basic editorial standards to reject badly-written papers and obvious spam. Since the journal editors weren't experts in all fields of science, they asked for volunteers to help with this process. That's what peer review is.

Eventually bureaucrats (inside and largely outside of the scientific community) demanded a technique for measuring the productivity of a scientist, so they could allocate budgets or promotions. They hit on the idea of using publications in a few prestigious journals as a metric, which turned a useful process (sharing results with other scientists) into [from an outsider perspective] a process of receiving "academic points", where the publication of a result appears to be the end-goal and not just an intermediate point in the validation of a result.

Still other outsiders, who misunderstand the entire process, are upset that intermediate results are sometimes incorrect. This confuses them, and they're angry that the process sometimes assigns "points" to people who they perceive as undeserving. So instead of simply accepting that *sharing results widely to maximize the chance of verification* is the whole point of the publication process, or coming up with a better set of promotion metrics, they want to gum up the essential sharing process to make it much less efficient and reduce the fan-out degree and rate of publication. This whole mess seems like it could be handled a lot more intelligently.

miga, almost 2 years ago

Peer review does not serve to assure replication, but to assure readability and comprehensibility of the paper.

Given that some experiments cost billions to conduct, it is impossible to implement "Peer Replication" for all papers.

What could be done is to add metadata about papers that were replicated.

janalsncm, almost 2 years ago

For a while Reddit had the mantra "pics or it didn't happen".

At least in CS/ML there needs to be a "code or it didn't happen". Why? Papers are ambiguous. Even if they have mathematical formulas, not all components are defined.

Peer replication in these fields is an easy, low-hanging fruit that could set an example for other fields of science.

infogulch, almost 2 years ago

I like the idea of splitting "peer review" into two, and then having a citation threshold standard where a field agrees that a paper should be replicated after a certain number of citations. And journals should have a dedicated section for attempted replications.

1. Rebrand peer review as a "readability review", which is what reviewers tend to focus on today.

2. A "replicability statement": a separately published document where reviewers push authors to go into detail about the methodology and strategy used to perform the experiments, including specifics that someone outside of their specialty may not know. Credit NalNezumi ITT.

NalNezumi, almost 2 years ago

Imo, a more realistic thing to do is a "replicability review" and/or a requirement to submit a "methodology map" with each paper.

The former would be a back and forth with a reviewer who inquires and asks questions (based on the paper) with the goal to *reproduce the result*, but who doesn't have to actually reproduce it. This is usually good for finding missing details in the paper that the writer just took for granted everyone in the field knows (I've met bio PhDs who have wasted months of their lives tracking down experimental details not mentioned in a paper).

The latter would be the result of the former. Instead of having pages-long "appendix" sections in the main paper, you produce another document with meticulous details of the experiment/methodology, with every stone turned, together with a peer reviewer. Stamp it with the peer reviewer's name so they can't get away with a hand-wavy review.

I've read too many papers where important information needed to reproduce the result is omitted. (For ML/RL) if the code is included, I've countless times found implementation details that are not mentioned in the paper. As a matter of fact, there are even results suggesting that those details are the make-or-break of certain algorithms. [1] I've also seen breaking details only mentioned in code comments...

Another atrocious thing I've witnessed is a paper claiming they evaluated their method on a benchmark, and if you check the benchmark, the task they evaluated on doesn't exist! They forked the benchmark and made their own task without being clear about it! [2]

Shit like this makes me lose faith in certain science directions. And I've seen a couple of junior researchers give it all up because they concluded it's all just a house of cards.

[1] https://arxiv.org/abs/2005.12729

[2] https://arxiv.org/abs/2202.02465

Edit: also, if you think that's too tedious/costly, a reminder that publishers rake in record profits, so the resources are already there: https://youtu.be/ukAkG6c_N4M

eesmith, almost 2 years ago

> the real test of a paper should be the ability to reproduce its findings in the real world. ...

> What if all the experiments in the paper are too complicated to replicate? Then you can submit to [the Journal of Irreproducible Results].

Observational science is still a branch of science even if it's difficult or impossible to replicate.

Consider the first photographs of a live giant squid in its natural habitat, published in 2005 at https://royalsocietypublishing.org/doi/10.1098/rspb.2005.3158 .

Who seriously thinks this shouldn't have been published until someone else had been able to replicate the result?

Who thinks the results of a drug trial can't be published until they are replicated?

How does one replicate "A stellar occultation by (486958) 2014 MU69: results from the 2017 July 17 portable telescope campaign" at https://ui.adsabs.harvard.edu/abs/2017DPS....4950403Z/abstract which required the precise alignment of a star, the trans-Neptunian object 486958 Arrokoth, and a region in Argentina?

Or replicate the results of the flyby of Pluto, or flying a helicopter on Mars?

Here's a paper I learned about from "In The Pipeline": "Insights from a laboratory fire" at https://www.nature.com/articles/s41557-023-01254-6 .

"Fires are relatively common yet underreported occurrences in chemical laboratories, but their consequences can be devastating. Here we describe our first-hand experience of a savage laboratory fire, highlighting the detrimental effects that it had on the research group and the lessons learned."

How would peer replication be relevant?

waynecochran, almost 2 years ago

I spent a lot of my graduate years in CS implementing the details of papers only to learn, time and time again, that the paper failed to mention all the shortcomings and failure cases of the techniques. There are great exceptions to this.

Due to the pressure of "publish or die" there is very little honesty in research. Fortunately there are some who are transparent with their work. But for the most part, science is drowning in a sea of research that lacks transparency and suffers replication shortfalls.

titzer, almost 2 years ago

In the PL field, conferences have started to allow authors to submit packaged artifacts (typically source code, input data, training data, etc.) that are evaluated separately, typically post-review. The artifacts are evaluated by a separate committee, usually graduate students. As usual, everything is volunteer. Even with explicit instructions, it is hard enough to even get the same *code* to run in a different environment and give the same results. Would "replication" of a software technique require another team to reimplement something from scratch? That seems unworkable.

I can't even *imagine* how hard it would be to write instructions for another lab to successfully replicate an experiment at the forefront of physics or chemistry, or biology. Not just the specialized equipment, but we're talking about the frontiers of Science with people doing cutting-edge research.

I get the impression that suggestions like these are written by non-scientists who do not have experience with the peer review process of *any* discipline. Things just don't work like that.

leedrake5, almost 2 years ago

Peer review is the right solution to the wrong problem: https://open.substack.com/pub/experimentalhistory/p/science-is-a-strong-link-problem

On replication, it is a worthwhile goal, but the career incentives need to be there. I think replicating studies should be a part of the curriculum in most programs - a step toward getting a PhD in lieu of one of the papers.

hedora, almost 2 years ago

The website dies if I try to figure out who the author ("sam") is, but it sounds like they are used to some awful backwater of academia.

They have this idea that a single editor screens papers to decide if they are uninteresting or fundamentally flawed, and then they want a bunch of professors to do grunt work litigating the correctness of the experiments.

In modern (post-industrial-revolution) branches of science, the work of determining what is worthy of publication is distributed amongst a program committee, which is composed of reviewers. The editor / conference organizers pick the program committee. There are typically dozens of program committee members, and authors and reviewers both disclose conflicts. Also, papers are anonymized, so the people that see the author list are not involved in accept/reject decisions.

This mostly eliminates the problem where work is suppressed for political reasons, etc.

It is increasingly common for paper PDFs to be annotated with badges showing the level of reproducibility of the work, and papers can win awards for being highly reproducible. The people that check reproducibility simply execute directions from a separate reproducibility submission that is produced after the paper is accepted.

I argue the above approach is about 100 years ahead of what the blog post is suggesting.

Ideally, we would tie federal funding to double-blind review and venues with program committees, and papers selected by editors would not count toward tenure at universities that receive public funding.

fastneutron, almost 2 years ago

As much as I agree with the sentiment, we have to admit it isn't always practical. There's only one LIGO, LHC or JWST, for example. Similarly, not every lab has the resources or know-how to host multi-TB datasets for the general public to pick through, even if they wanted to. I sure didn't when I was a grad student.

That said, it infuriates me to no end when I read a Phys. Rev. paper that consists of a computational study of a particular physical system, and the only replicability information provided is the governing equation and a vague description of the numerical technique. No discretized example, no algorithm, and sure as hell no code repository. I'm sure other fields have this too. The only motivation I see for this behavior is the desire for a monopoly on the research topic on the part of the authors, or embarrassment over poor code quality (real or perceived).

JR1427, almost 2 years ago

One thing I think people are missing is that labs replicate other experiments all the time as part of doing their own research. It's just that the results are not always published, or not published in a like-for-like way.

But the information gets around. In my former field, everyone knew which were the dodgy papers, with results no one could replicate.

Nevermark, almost 2 years ago

Reproducibility would become a much higher priority if electronic versions of papers were required (by their distributors, archives, institutions, ...) to have reproduction sections, which the authors are encouraged to update over time.

UPDATABLE COVER PAGE:

Title, Authors

Abstract: Blah, blah, ...

State of reproduction:
- Not reproduced.
- Successful reproductions: ...citations...
- Reproduction attempts: ...citations...
- Countering reproductions: ...citations...

UPDATABLE REPRODUCTION SECTION ATTACHED AT END:

Reproduction resources:
- Data, algorithms, processes, materials, ...

Reproduction challenges:
- Cost, time, one-off events, ...

Making this stuff more visible would help reproducers validate the value of reproduction to their home and funding institutions.

Having a standard section for this, with an initial state of "Not reproduced", provides more incentive for the original workers to provide better reproduction info.

For algorithm and math work, the reproduction could be served best with a downloadable executable bundle.
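
To make the shape of such an updatable record concrete, here is a minimal Python sketch of the kind of machine-readable cover-page metadata the comment describes. The class and field names are illustrative assumptions for this example, not an existing standard.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class ReproductionStatus(Enum):
    NOT_REPRODUCED = "not reproduced"
    REPRODUCED = "reproduced"
    DISPUTED = "disputed"  # countering reproductions exist


@dataclass
class ReproductionRecord:
    """Updatable cover-page metadata attached to a published paper."""
    status: ReproductionStatus = ReproductionStatus.NOT_REPRODUCED
    successful_reproductions: List[str] = field(default_factory=list)  # citations / DOIs
    reproduction_attempts: List[str] = field(default_factory=list)
    countering_reproductions: List[str] = field(default_factory=list)
    resources: List[str] = field(default_factory=list)   # data, code, materials, ...
    challenges: List[str] = field(default_factory=list)  # cost, time, one-off events, ...

    def register_success(self, citation: str) -> None:
        """Record a successful replication and update the headline status."""
        self.successful_reproductions.append(citation)
        self.reproduction_attempts.append(citation)
        if self.countering_reproductions:
            self.status = ReproductionStatus.DISPUTED
        else:
            self.status = ReproductionStatus.REPRODUCED


# A paper starts out "not reproduced" and later gains one successful replication.
record = ReproductionRecord(challenges=["specialized equipment", "cost"])
record.register_success("doi:10.0000/hypothetical-replication")  # hypothetical citation
print(record.status)  # ReproductionStatus.REPRODUCED
```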

nomilk, almost 2 years ago

https://web.archive.org/web/20230130143126/https://blog.everydayscientist.com/replace-peer-review-with-peer-replication/

jxramos, almost 2 years ago

You know what I would love to see is metadata attributes surrounding a paper, such as [retracted], [reproduced], [rejected], etc. We already have the preprint thing down. Some of these would be implied by being published, i.e. not a preprint. Maybe even a quick symbol for the method of proof relied upon: video evidence, randomized control trial, observational study, sample count of n>1000 (predefined inequality brackets), etc. I think having this quick digest of information would help an individual wade through a lot of studies quickly.

jonnycomputer, almost 2 years ago

If they are, in fact, implying that another lab should produce a matching data-set to try to replicate results, well, I'm sorry, but that won't work, at least in a whole lot of fields. Data collection can be very expensive, and take a lot of time. It certainly is in my field.

If, on the other hand, they just want the raw data, and let others go to town on it in their own way, that's fine, probably. Results that don't depend on very particular details of the processing pipeline are probably more robust anyway.

jimmar, almost 2 years ago

How do you replicate a literature review? Theoretical physics? A neuro case? Research that relies upon natural experiments? There are many types of research. Not all of them lend themselves to replication, but they can still contribute to our body of knowledge. Peer review is helpful in each of these instances.

Science is a process. Peer review isn't perfect. Replication is important. But it doesn't seem like the author understands what it would take to simply replace peer review with replication.

whatever1, almost 2 years ago

We can have tiers. Tier 1: peer reviewed. Tier 2: peer replicated. We can have it as a stamp on the papers.

All PhD programs have a requirement for a minimum number of novel publications. We could add to the requirements a minimum number of replications.

But truth be told, a PhD student in science/engineering will probably spend their first two years trying to replicate the SOTA anyway. It's just that today you cannot publish this effort; nobody cares, except yourself and your advisor.

hgsgm, almost 2 years ago

The problem is equating publication with truth.

Publication is a *starting point*, not a *conclusion*.

Publication is submitting your code. It still needs to be tested, rolled out, evaluated, and time-tested.

amai, almost 2 years ago

It would already be a step in the right direction if papers also published a VM with all their code, data and dependencies. It is nice to have the code (https://blog.arxiv.org/2020/10/08/new-arxivlabs-feature-provides-instant-access-to-code/), but without the necessary dependencies, the correct OS, compiler version, etc., replication is often impossible even with code.

Having running demos is another step in the right direction (see https://blog.arxiv.org/2022/11/17/discover-state-of-the-art-machine-learning-demos-on-arxiv/).

But outside of computer science, replication is even more difficult. Maybe if people used standardized laboratories and robots, one could replicate findings by rerunning the robot's code on another standard robot lab (basically the idea here is to virtualize laboratory work).

But even then, for the biggest, most complex experiments this will not work: replicate CERN, anyone?
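
The software half of that suggestion can at least be approximated today. Below is a minimal, stdlib-only Python sketch (an assumption about how one might do it, not a reference to any existing tool) that dumps interpreter, OS, and installed-package versions into a manifest shipped next to a paper's code; a full VM image would additionally freeze the operating system itself.

```python
# Capture the current Python environment into a manifest that could be
# published alongside a paper's code to aid replication.
import json
import platform
import sys
from importlib import metadata


def environment_manifest() -> dict:
    """Collect interpreter, OS, and installed-package versions."""
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "machine": platform.machine(),
        "packages": {
            dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()
        },
    }


if __name__ == "__main__":
    # Hypothetical output file name; replicators read it to rebuild the environment.
    with open("environment_manifest.json", "w") as f:
        json.dump(environment_manifest(), f, indent=2, sort_keys=True)
```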

10g1k, almost 2 years ago

Peer review is also not part of the scientific method. It's nice, but it's not strictly part of the method.

It may be more accurate to suggest that repeatability is part of the scientific method. But even that is not strictly true.

Consider: the single longest-running scientific work was not repeatable, and was not shared with anyone outside the cadre of people doing it. Around 3000 years ago, a secretive caste of astrologers/scribes watched the heavens and recorded their observations for several centuries. They did not publish their findings, thus making them anecdotal (yes, that's what anecdotal means, just that it wasn't published). The exact circumstances and variables were never repeatable, due to the movements of the celestial bodies, precession, etc.

Similarly, the UQ pitch drop experiment, having not yet completed, has not been repeated. But it's still an entirely valid scientific experiment.

okaleniuk, almost 2 years ago

Back when I was active in academia, our publishers were reluctant to print source code or even repository links (that was largely before GitHub), but they could still share a paper's source on demand. If you reference someone else's paper and want to quote some formula, it is easier and less error-prone to copy rather than retype.

At that point I thought about making a TeX interpreter so one could easily "run a paper" on their own data to see if the paper's claims hold. As it turned out, people often write the same formula in multiple ways, and to make a TeX interpreter you'd have to specify a "runnable" subset and convince everyone to use that subset instead of what they were used to. So the idea stalled.

In a few years, publishing a GitHub link along with the paper became the norm, and the problem disappeared. At least in applied geometry, people do replicate each other's results all the time.

hooby, almost 2 years ago

Currently BOTH are being used - peer review is the first pass, reproduction the second.

Peer review might (or might not) weed out a few papers before they ever get to being reproduced - and that a paper "passed" peer review often means very little. (In some journals more, in some less.)

You can't replace peer review with peer replication. Reviewers often do volunteer work - supporting their field and the journal by checking submissions just for any grave errors/mistakes. They often spend just 10 to 15 minutes per submission - for hundreds of submissions. It's not realistic to ask those reviewers to do a full replication attempt for hundreds of submissions.

So any attempt to "replace" review with replication would end up basically removing review altogether, without increasing the amount of replication attempts being made.

geysersam, almost 2 years ago

Both review and replication have their place. The mistake is treating researchers and the scientific community as a machine: "pull here, fill these forms, comment on this research, have a gold star".

Let people review what they want, where they want, how they want. Let people replicate what they find interesting and motivating to work on.

bartwr, almost 2 years ago

I review 10-20 papers a year.

It's a ton of unpaid, volunteer work; if I want to be a high-quality reviewer, it's at least a day per paper (at least 3 thorough reads, taking notes, writing the review, reviewer discussions, post-rebuttal, back-and-forth for journals). I am lucky and privileged that my employer counts this towards work time. Only 20% of papers get accepted in my domain.

Now if I had to spend a week replicating a paper - and this is CS/graphics, where it's easy and "free" - I'd never volunteer to be a reviewer.

You'd need professional "replicators", but who will pay for them? And who will they be - you need experts, and if you are an expert, you don't want to merely replicate other people's work full time instead of working on your own innovation.

jhart99, almost 2 years ago

Replication in many fields comes with substantial costs. We are unlikely to see this strategy employed on many/most papers. I agree with other commenters that materials and methodology should be provided in sufficient detail so that others could replicate if desired.

cycomanic, almost 2 years ago

While I agree with the general sentiment of the paper, and creating incentives for more replication is definitely a good idea, I do think the approach is flawed in several ways.

The main point is that the paper seriously underestimates the difficulty and time it requires to replicate experiments in many experimental fields. Who will decide which work needs to be replicated? Should capable labs somehow become bogged down with just doing replication work, even if they don't find the results interesting?

In reality, if labs find results interesting enough to replicate, they will try to do so. The current LK-99 hurrah is a perfect example of that, but it happens on a much smaller scale all the time. Researchers do replicate and build on other work all the time; they just use that replication to create new results (and acknowledge the previous work) instead of publishing a "we replicated" paper.

Where things usually fail is in publication of "failed replication" studies, and those are tricky. It is not always clear if the original research was flawed or the people trying to reproduce it made an error (again, just have a look at what's happening with LK-99 at the moment). Moreover, it can be politically difficult to try to publish a "failed to reproduce" result if you are a small unknown lab and the original result came from a big, well-known group. Most people will believe that you are the one who made the error (and unfortunately big egos might get in the way, and the small lab will have a hard time).

More generally, in my opinion the lack of replication of results is just one symptom of a bigger problem in science today. We (as in society) have made the scientific environment increasingly competitive, under the guise of "value for taxpayer money". Academic scientists now have to constantly compete for grant funding and publish to keep the funding going. It's incredibly competitive to even get in. At the same time they are supposed to constantly provide big headlines for university press releases, communicate their results to the general public, and investigate (and patent) the potential for commercial exploitation. No wonder we see less cooperation.

TrackerFF, almost 2 years ago

Seems to have been hugged to death.

But - a quick counterexample, as far as replication goes: what if the experiments were run on custom-made or exceedingly expensive equipment? How are the replicators supposed to access that equipment? Even in fields which are "easy" to replicate - like machine learning - we are seeing barriers to entry due to expensive computing power. Or data collection. Or both.

But then you move over to physics, and suddenly you're also dealing with these one-off custom setups, doing experiments which could be close to impossible to replicate (say you want to conduct experiments on some physical event that only occurs every xxxx years or whatever).

gordian-not, almost 2 years ago

The incentive should be to clear the way for the tenure track.

The junior faculty will clear out the rotten apples at the top by finding flaws in their research, and will in return win the tenure that was lost.

This will create a nice political atmosphere and improve science.

throwawaymaths, almost 2 years ago

How about we create a Nobel prize for replication? One impressive replication or refutation from the last decade (that holds up) gets the prize, split up to three ways among the most important authors.

staunton, almost 2 years ago

Let's get people to publish their data and code first, shall we? That's sooo much easier than demanding whole studies to be replicated... and people still don't do it!

JR1427, almost 2 years ago

I think this wouldn't work, because many experiments need such specific equipment and expertise that it would be hard to find labs that already have said equipment.

SubiculumCode, almost 2 years ago

Scientist publishes paper based on ABCD data.

Replicator: Do you know how much data I'll need to collect? 11,000 participants followed across multiple timepoints of MRI scanning. Show me the money.

pajushi, almost 2 years ago

Why shouldn't we hold science more accountable?

"Science needs accounting" is a search I had saved for months, which really resonates with the idea of "peer replication."

In accounting, you always have checks and balances; you are never counting money alone. In many cases, accountants duplicate their work to make sure that it is accurate.

Auditors are the counterpart to the peer review process. They're not there to redo your work, but to verify that your methods and processes are sound.

elashri, almost 2 years ago

Great, but who is going to fund the peer replication? The economics of research today doesn't even provide compensation for the time spent on the peer review process.

tonmoy, almost 2 years ago

We just need a second LHC with double the number of particle physicists in the world to replicate the observation of the Higgs boson, no big deal.

SonOfLilit, almost 2 years ago

My first thought was "this would never work, there is so much science being published and not enough resources to replicate it all".

Then I remembered that my main issue with modern academia is that everyone is incentivized to publish a huge amount of research that nobody cares about, and how I wish we would put much more work into each of much fewer research directions.

gxt, almost 2 years ago

So why haven't "science modules" been developed yet? I'm imagining a library-sized piece of equipment that physically performs the lab work and can be configured akin to CNC machining. Papers would then be submitted with the module program and be easily replicated by other labs.

ayakang31415, almost 2 years ago

One of the Nobel Prizes in Physics was for the discovery of the Higgs boson at the LHC. It cost billions of dollars just to build the facility, and it required hundreds of physicists just to conduct the experiment. You can't replicate this. Although I fully agree that replication must come first when it is reasonably doable.

dongping, almost 2 years ago

https://web.archive.org/web/20230130143126/https://blog.everydayscientist.com/replace-peer-review-with-peer-replication/

ahmadmijot, almost 2 years ago

Quite related: nowadays there is a movement within scientific research, i.e. Open Science, where the (raw) data from one's research is open. Even methods for in-house fabrication and development, together with their source code, are open source (open hardware and open software).

ugh123, almost 2 years ago

Why not just develop a standard "replication instructions" format that papers would need to adhere to? All methods, source code, ingredients, processes, etc. would be documented in a standard way. This could help tease out a lot of bullshit just by reading this section.
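
As a rough illustration of what such a format could require, here is a small Python sketch; the section names below are hypothetical choices for this example, not taken from any existing standard.

```python
# Hypothetical required sections for a standardized "replication instructions"
# document, plus a check that flags incomplete submissions.
REQUIRED_SECTIONS = [
    "methods",           # step-by-step protocol
    "materials",         # reagents, datasets, equipment
    "source_code",       # repository URL or archived snapshot
    "parameters",        # exact settings, seeds, software versions
    "expected_results",  # what a successful replication should observe
]


def missing_sections(submission: dict[str, str]) -> list[str]:
    """Return the required sections a submission omits or leaves empty."""
    return [name for name in REQUIRED_SECTIONS if not submission.get(name)]


# Example: a submission that documents its protocol but omits code and
# expected results would be flagged before review even starts.
submission = {
    "methods": "Anneal the sample at 900 C for 10 h, then quench ...",
    "materials": "Reagent A (lot #1234), furnace model XYZ",
    "parameters": "ramp rate 5 C/min",
}
print(missing_sections(submission))  # ['source_code', 'expected_results']
```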

user6723, almost 2 years ago

I remember showing someone raw video of a Safire plasma chamber keeping the ball of plasma lit for several minutes. They said they would need to see a peer-reviewed paper. The presumption brought about by the Enlightenment era that everyone should get a vote was a mistake.

freeopinion, almost 2 years ago

My mind automatically swapped out the word "peer" for "code". It took my brain to interesting places. When I came back to the actual topic, I had accidentally built a great way to contrast some of the discussion offered in this thread.

hinkley, almost 2 years ago

Is there space in the world for a few publications that only publish replicated work? Seems like that would be a reasonable compromise. Yes, you were published, but were you published in Really Real Magazine? Get back to us when you have been and we'll discuss.

User23, almost 2 years ago

One thing that everyone needs to remember about "peer review" is that it isn't part of the scientific method, but rather that it was imposed on the scientific enterprise by government funding authorities. It's basically JIRA for scientists.

6510, almost 2 years ago

Seems like a great way for "inferior" journals to gain reputation. Counting citations seems a pretty silly formula/hack. How often you say something doesn't affect how true it is.

husamia, almost 2 years ago

I review articles all the time. I look for things that tell me about their real work. There are nuances to some experiments that can't be known without replication.

GuB-42, almost 2 years ago

Peer review is not the end. When replication is particularly complex or expensive, peer review may just be a way to see if the study is worth replicating.

andsoitis, almost 2 years ago

Why would you bother replicating someone else's work (thereby validating it), when you could use that time and those resources to do something novel?

abnry, almost 2 years ago

If scientists are going to complain that it's too hard or too expensive to replicate their studies, then that just shows their work is BS.

seventytwo, almost 2 years ago

There would need to be an incentive structure where the first replications get (nearly) the same credit as the original publisher.

wcerfgba, almost 2 years ago

What do we recommend for qualitative research, where replicability is not a quality criterion?

fodkodrasz, almost 2 years ago

How would you peer-replicate observation of a rare or unique event, for example in astronomy?

moelf, almost 2 years ago

I wish we could replicate the LHC.

tines, almost 2 years ago

"Replace peer code review with 'peer code testing.'"

Probably not gonna catch on.

paulpauper, almost 2 years ago

This would not apply to math or to something subjective such as literature. Only experimental results need to be replicated.

j45, almost 2 years ago

Can everything be replicated in every field?

hospadar, almost 2 years ago

I assume that the goal here is to reduce the number of not-actually-valid results that get published. Not-actually-valid results happen for lots of reasons (whoops, did the experiment wrong; mystery impurity; cherry-picked data; not enough subjects; straight-up lie; full verification expensive and time-consuming but this looks promising), but often there's a common set of incentives: you must publish to get tenure/keep your job, and you often need to publish in journals with high impact factor [1].

High-impact journals [6] tend to prefer exciting, novel, and positive results (we tried a new thing and it worked so well!) vs negative results (we mixed up a bunch of crystals and absolutely none of them are room-temp superconductors! we're sure of it!).

The result is that cherry-picking data pays, leaning into confirmation bias pays, and publishing replication studies and rigorous but negative results is not a good use of your academic inertia.

I think that creating a new category of rigor (i.e. journals that only publish independently replicated results) is not a bad idea, but: who's gonna pay for that? If the incentive is you get your name on the paper, doesn't that incentivize coming up with a positive result? How do you incentivize negative replications? What if there is only one gigantic machine anywhere that can find those results (LHC, IceCube, etc., a very expensive spaceship)?

There might be easier and cheaper pathways to reducing bad papers - incentivizing the publishing of negative results and replication studies separately, paying reviewers for their time, coming up with new metrics for researchers that prioritize different kinds of activity (currently "how much you're cited" and "number of papers * journal impact" metrics are common; maybe a "how many results got replicated" score would be cool to roll into "do you get tenure"? See [3] for more details). PLoS publish.

I really like OP's other article about a hypothetical "Journal of One Try" (JOOT) [2] to enable publishing of not-very-rigorous-but-maybe-useful-to-somebody results. If you go back and read OLD OLD editions of Philosophical Transactions (which goes back to the 1600's!! great time, highly recommend [4], in many ways the archetype for all academic journals), there are a ton of wacky submissions that are just little observations and small experiments, and I think something like that (JOOT, let's say), tuned up for the modern era, would, if nothing else, make science more fun. Here's a great one about reports of "Shining Beef" (literally beef that is glowing, I guess?) - enjoy [5].

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6668985/
[2] https://web.archive.org/web/20220924222624/https://blog.everydayscientist.com/?p=2455
[3] https://www.altmetric.com/
[4] https://www.jstor.org/journal/philtran1665167
[5] https://www.jstor.org/stable/101710
[6] https://en.wikipedia.org/wiki/Impact_factor, see also https://clarivate.com/

Hiromy, almost 2 years ago

Hello, I love you.