It's too bad research papers can't be organized like a git history. We'd see many forks that never become pull requests merged back into main, and probably forks of forks that stray too far from the founding paper's intent. It would be nice to more easily identify original versus derivative research. Maybe that solves a different problem. I like their suggestion though:<p>"I offer a pragmatic criterion: what makes a criticism important is how much it could change a result if corrected and how much that would then change our decisions or actions: to what extent it is a “difference which makes a difference”.
This is why issues of research fraud, causal inference, or biases yielding overestimates are universally important: because a ‘causal’ effect turning out to be zero effect or grossly overestimated will change almost all decisions based on such research; while on the other hand, other issues like measurement error or distributional assumptions, which are equally common, are often not important: because they typically yield much smaller changes in conclusions, and hence decisions."<p>So, two papers, both with data and claims.<p>The first is critiqued on its claim: the data, while collected with sound methodology, don't support the extent of the claim. This critique is more meaningful because it changes the outcome of the paper and any decisions that follow from its publication.<p>The second's claim is within the bounds of the data, but there is a discrepancy in the data collection, which is the source of its critique. Fixing that doesn't change the claim, though it may indicate more research is needed. This critique <i>could</i> change decisions made after publication, but if the claim still fits what the data support, it likely won't.<p>I had to think through that, and I think I like it.