I assume the goal here is to reduce the number of not-actually-valid results that get published. Not-actually-valid results happen for lots of reasons (whoops, did the experiment wrong; mystery impurity; cherry-picked data; not enough subjects; straight-up lie; full verification is expensive and time-consuming but this looks promising), but there's often a common set of incentives behind them: you must publish to get tenure/keep your job, and you often need to publish in journals with high impact factor [1].<p>High-impact journals [6] tend to prefer exciting, novel, positive results (we tried the new thing and it worked so well!) over negative results (we mixed up a bunch of crystals and absolutely none of them are room-temp superconductors! we're sure of it!).<p>The result is that cherry-picking data pays, leaning into confirmation bias pays, and publishing replication studies and rigorous-but-negative results is not a good use of your academic inertia.<p>I think creating a new category of rigor (i.e. journals that only publish independently replicated results) is not a bad idea, but: who's gonna pay for that? If the incentive is getting your name on the paper, doesn't that incentivize coming up with a positive result? How do you incentivize negative replications? What if there is only one gigantic machine anywhere that can produce those results (LHC, IceCube, a very expensive spaceship)?<p>There might be easier and cheaper pathways to reducing bad papers: incentivizing the publishing of negative results and replication studies separately, paying reviewers for their time, and coming up with new metrics for researchers that prioritize different kinds of activity. Currently "how much you're cited" and "number of papers × journal impact" metrics are common; maybe a "how many of your results got replicated" score would be cool to roll into "do you get tenure"? (See [3] for more on alternative metrics.)
(PLOS, for example, will publish negative results.)<p>I really like OP's other article about a hypothetical "Journal of One Try" (JOOT) [2] to enable publishing of not-very-rigorous-but-maybe-useful-to-somebody results. If you go back and read OLD OLD editions of Philosophical Transactions (which goes back to the 1600s!! great time, highly recommend [4]; in many ways the archetype for all academic journals), there are a ton of wacky submissions that are just little observations and small experiments, and I think something like that (JOOT, let's say) tuned up for the modern era would, if nothing else, make science more fun. Here's a great one about reports of "Shining Beef" (literally beef that is glowing, I guess?) — well, enjoy [5]<p>[1] <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6668985/" rel="nofollow noreferrer">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6668985/</a>
[2] <a href="https://web.archive.org/web/20220924222624/https://blog.everydayscientist.com/?p=2455" rel="nofollow noreferrer">https://web.archive.org/web/20220924222624/https://blog.ever...</a>
[3] <a href="https://www.altmetric.com/" rel="nofollow noreferrer">https://www.altmetric.com/</a>
[4] <a href="https://www.jstor.org/journal/philtran1665167" rel="nofollow noreferrer">https://www.jstor.org/journal/philtran1665167</a>
[5] <a href="https://www.jstor.org/stable/101710" rel="nofollow noreferrer">https://www.jstor.org/stable/101710</a>
[6] <a href="https://en.wikipedia.org/wiki/Impact_factor" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Impact_factor</a>, see also <a href="https://clarivate.com/" rel="nofollow noreferrer">https://clarivate.com/</a>