It's too bad research papers can't be organized like a git history. We'd see many forks that never end up as pull requests merged back to main, and probably forks of forks that stray too far from the founding paper's intent. It would be nice to more easily identify original versus derivative research. Maybe that solves a different problem. I like their suggestion though:<p>"I offer a pragmatic criterion: what makes a criticism important is how much it could change a result if corrected and how much that would then change our decisions or actions: to what extent it is a “difference which makes a difference”.
This is why issues of research fraud, causal inference, or biases yielding overestimates are universally important: because a ‘causal’ effect turning out to be zero effect or grossly overestimated will change almost all decisions based on such research; while on the other hand, other issues like measurement error or distributional assumptions, which are equally common, are often not important: because they typically yield much smaller changes in conclusions, and hence decisions."<p>So, two papers, both with data and claims.<p>The first is critiqued on its claim because the data, while correct and gathered with quality methodology, doesn't support the full extent of the claim. This critique is more meaningful because it changes the outcome of the paper and any decisions following its publication.<p>The second's claim is within the bounds of the data, but there is a discrepancy in the data collection, which is the source of its critique. Fixing that doesn't change the claim, though it may indicate more research is needed. This critique <i>could</i> change decisions made from publishing, but if the claim still falls within what the data supports, then likely not.<p>I had to think through that, and I think I like it.
Related:<p><i>How Should We Critique Research?</i> - <a href="https://news.ycombinator.com/item?id=26834499">https://news.ycombinator.com/item?id=26834499</a> - April 2021 (51 comments)<p><i>How should we critique research?</i> - <a href="https://news.ycombinator.com/item?id=19981774">https://news.ycombinator.com/item?id=19981774</a> - May 2019 (20 comments)
It seems to me that the most important factor is not being mentioned here. That is money - who funds the research.<p>It's a simple enough problem - if you wanted papers to show anything - e.g. 'that koalas cause forest fires in Australia' (I know that's ridiculous!) - then you simply fund a bunch of papers. If you have 10 papers, and 2 are supportive, 2 are against, and the remaining are ambiguous - you have a start! You take the supportive ones and fund similar studies. Soon you have a lot of data that seems to say something in support of the thesis you like, but this has nothing to do with uncovering some underlying principle.<p>If you have deep enough pockets, you get the science you pay for.<p><a href="https://www.threads.net/@tmurrayhimself/post/C56uXtKM0y0" rel="nofollow">https://www.threads.net/@tmurrayhimself/post/C56uXtKM0y0</a><p>"studies show that all studies can be traced back to the guy with the most money"
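To make the selection mechanism concrete, here's a toy simulation - a sketch, not anyone's actual method; the names, cutoff, and parameters are all illustrative assumptions. Every study estimates a true effect of exactly zero, yet selective funding leaves behind a 'literature' reporting a consistently positive effect:<p><pre><code>import random

# Toy model of selective funding: every study is just a noisy estimate
# of a true effect of exactly zero. A funder keeps only "supportive"
# results and then pays for more studies like them.
# (All names, thresholds, and parameters here are illustrative.)

random.seed(0)

TRUE_EFFECT = 0.0     # koalas do not, in fact, cause forest fires
NOISE = 1.0           # study-to-study sampling noise
SUPPORT_CUTOFF = 0.5  # estimates above this count as "supportive"

def run_study():
    """One study: the true effect plus random noise."""
    return random.gauss(TRUE_EFFECT, NOISE)

def fund_selectively(rounds=3, studies_per_round=10):
    """Each round, fund a batch of studies and keep only supporters."""
    kept = []
    for _ in range(rounds):
        results = [run_study() for _ in range(studies_per_round)]
        kept.extend(r for r in results if r > SUPPORT_CUTOFF)
    return kept

literature = fund_selectively()
mean = sum(literature) / len(literature)
print(f"{len(literature)} 'supportive' studies survive selection")
print(f"mean reported effect: {mean:.2f}  (true effect: {TRUE_EFFECT})")</code></pre><p>No fraud is needed at any step - each individual study is honest noise. The bias comes entirely from which studies get funded next.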
With data quality standards, research ethics, and random audits.<p><a href="https://mchankins.wordpress.com/2013/04/21/still-not-significant-2/" rel="nofollow">https://mchankins.wordpress.com/2013/04/21/still-not-signifi...</a><p>The primary issues are circular citation chains and retractions that don't cascade to work built on known errata.<p>Thesis work tends to be reproducible most of the time, but around 12% to 17% of the hundreds of papers I read every month are nongeneralizable or, worse, outright BS.<p>It can be really disillusioning for many students...<p>Now I eat cheese goldfish crackers, and no longer care either way. Have a wonderful day =3
I regularly review papers for my Substack, and there's a review structure I believe works well: it focuses on potential impact, structure, reading comprehension, GitHub availability (where applicable), and problem relevance.