The paper was a meta-analysis of nine unpublished studies. While the issue of non-publication of negative results is extremely important, this should not be the test case for it.<p>I agree with other comments that the lack of null results in journals is a massive showstopper for epistemology and for confidence in scientific work. In my experience during my PhD, academics were unwilling to submit null results; they wanted to find the actual answer and publish that instead - which leads to delays of years, leaves well-known and obvious errors in existing papers publicly unchallenged, and potentially means that scientist <i>never even submits the null result.</i>
At this point, I think Google Scholar should step in and just put a replications section beside every scientific publication. People should be able to quickly and easily see how many replication attempts a study has had and, of those attempts, how many actually succeeded.<p>It's unfortunate that replications aren't taken more seriously these days, but it also doesn't help that, when replications do exist, you have to scour the internet for them rather than having them readily available to you.
The narrative would be more persuasive if it incorporated a story of how the paper evolved meaningfully in response to peer criticism. The question lingering in my mind after reading this is whether and how the paper was substantially revised (in light of reviewer feedback) between rejections. I'm sure it was (it has to have been, right?), but we don't get that sense from the blog post. The author(s?) should have received a large amount of very good feedback between rejections from well-meaning peers in their scientific community, yet I don't recall reading about any of that feedback being incorporated into subsequent revisions of the paper. The term "meta-analysis" should probably have been dropped after the first (pointed) rejection, for example, and the paper broken down into two or three smaller papers rather than submitted as a "meta-analysis" of unpublished work.<p>This is not to say that peer feedback wasn't taken seriously; I don't know that at all. But if the goal is to persuade a skeptical audience that academic publishing is broken, the author should articulate how they followed best practices in response to rejection letters from peer-reviewed journals. The alternative is to sound arrogant and self-defeating, which I'm sure was not the intent!
Forgive me if this question is moot - I'm not an academic.<p>Why isn't there a place that links to a given paper so that discussion about it can be centralized? It could also link to papers that cite that paper, among them the failures to replicate, adding to the discussion. And I don't really mean a topical "this is what's new" site; I mean a historical "this is the paper, and this is what people have said about it" sort of site.<p>This seems like a fairly elementary idea. The only difficult bits I see are:<p>a) Getting (legal?) access to these papers.<p>b) Dealing with a large number of papers (millions?).<p>c) Authenticating users to keep the discussion level high.<p>d) Moderating the discussion in a way that doesn't piss off academia (impossible?).<p>e) Keeping the number of such sites (the competition, if you will) low so that discussion isn't fractured between them.<p>One of the "information wants to be free" sites that already host the papers everyone shares with each other would seem like a great place to start something like this.
There's a psychology journal[1] dedicated to only publishing null-hypothesis results.<p>[1] <a href="http://www.jasnh.com/about.html" rel="nofollow">http://www.jasnh.com/about.html</a>
So broken. I'm not involved in academia, so the most I can contribute is an upvote here and there, and respect for those who push against the current.
1. There are many places this could have been published without an importance review, e.g. PLOS ONE.<p>2. I think anyone interested in the replication problem needs to read this piece [1] by Peter Walter. As he put it: "It is much harder to replicate than to declare failure."<p>[1] <a href="http://www.ascb.org/on-reproducibility-and-clocks/" rel="nofollow">http://www.ascb.org/on-reproducibility-and-clocks/</a>
Seems to me this issue is getting to the point where it could become an existential threat to the credibility of science in general. Note how climate-change deniers have recently used these sorts of arguments to challenge the consensus - is it really so far-fetched to argue that climate scientists might be as biased as researchers in areas such as medicine and linguistics?<p>The paywalled, blind peer-review process seems broken beyond repair. There needs to be a better, more robust method to publish every relevant study that isn't utter crankery, and to derive some sort of crowd-sourced consensus from researchers with credible reputations.
This seems to be the original work in question: <a href="https://www.researchgate.net/publication/8098564_Reading_Acquisition_Developmental_Dyslexia_and_Skilled_Reading_Across_Languages_A_Psycholinguistic_Grain_Size_Theory" rel="nofollow">https://www.researchgate.net/publication/8098564_Reading_Acq...</a>
There should be a failure-to-replicate journal. Its standards committee should be all about rigor, so that just getting published there would be a demonstration of technique and ability, if not a source of headlines.
Any journal that refuses, without proper reasoning, to publish a failure to replicate research it originally published should be closed down. That journal should have such a reputational black mark next to it that nobody would want to publish there, and anyone who already had should be at its door with pitchforks and torches for the tarnishing of their reputations.<p>If it was important enough to publish research saying "here's something," then it's important enough to publish properly done research showing "actually, it's probably nothing." By definition. Otherwise it's not science, it's f<i></i>king marketing, and the journal should be treated with the same scientific reverence we reserve for Pepsi-Cola advertisements from the 1990s.
Some of those reviews are good material for <a href="http://shitmyreviewerssay.tumblr.com/" rel="nofollow">http://shitmyreviewerssay.tumblr.com/</a>
Put it on arXiv or F1000 for fuck's sake. Who actually believes psychology papers anyway? The vast majority are fishing expeditions, as best I can tell.<p>When the field starts enforcing minimal standards (as expected for, say, clinical trials, or even genetics studies nowadays), maybe someone will give a shit. Until then, people like this guy who actually seek the truth will be ostracized.