> and whether online critiques of past work constitute “bullying” or shaming. The PubPeer comments are just a tiny part of that debate. But it can hit a nerve.

> Susan Fiske, a former president of the Association for Psychological Science, alluded to Statcheck in an interview with Business Insider, calling it a “gotcha algorithm.”

> The draft of the article said these online critics were engaging in “methodological terrorism.”

If these are attitudes typical of psychology, then I cannot say I consider psychology to be a proper social science. There is a fundamental misunderstanding of how knowledge is created through the scientific process if the verification step is considered offensive or taboo. That anyone in the field would feel comfortable publicly espousing such a non-scientific worldview suggests that psychologists are not being properly educated in the scientific method, and that they should not be in the business of producing research until they have a mature understanding of what “scientific” implies.
What a clever and, dare I say it, fantastically useful experiment!

So much less harm than even “door knob twisting” type explorations: this was just taking published works and running them through a process to verify, or not verify, their accuracy.

Unsolicited? So what! As a practiced writer I make unsolicited judgments on language usage all the time. Do these people write entirely from their own minds, without running a spell check or grammar check of any sort before sending their material for editorial review? I strongly doubt it, because those are tools that make communication more accurate. A similar procedural check for math and formulas sounds quite constructive to me.

It's not bullying to point out errors; it's bullying to use the existence of errors to belittle or insult a person. I don't see that happening here. Sure, it's a little sterile or “cold” in this fashion, but I think that's for the best if such a process/tool is to gain acceptance. It just spits out results, and I think that's all it should do. Neat to read about.
I find it very disconcerting that people are trying to fend off criticism of previously published studies by calling it "bullying" or sometimes worse. What do feelings have to do with science?
Here's the GitHub page:

https://github.com/MicheleNuijten/statcheck

And if you're curious how it works, as I was:

> Statcheck uses regular expressions to find statistical results in APA format. When a statistical result deviates from APA format, statcheck will not find it. The APA formats that statcheck uses are:
>
> - t(df) = value, p = value
> - F(df1, df2) = value, p = value
> - r(df) = value, p = value
> - χ2(df, N = value) = value, p = value (N is optional; delta G is also included)
> - Z = value, p = value
>
> All regular expressions take into account that test statistics and p values may be exactly (=) or inexactly (< or >) reported. Different spacing has also been taken into account.
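For the curious, here's a rough illustration of that approach in Python. To be clear, this is not statcheck's actual code (statcheck is an R package); the pattern, the t-test-only coverage, and the rounding tolerance below are my own simplifications:

```python
import re
from scipy import stats

# Minimal sketch: find APA-style t-tests with a regex, then recompute
# the two-tailed p value from the reported test statistic and df.
T_TEST = re.compile(
    r"t\s*\(\s*(\d+(?:\.\d+)?)\s*\)\s*"   # t(df)
    r"[=<>]\s*(-?\d+\.?\d*)\s*,\s*"       # test statistic, = / < / >
    r"p\s*([=<>])\s*(\d*\.\d+)"           # reported p value
)

def check_t_tests(text):
    for df, t, comp, p in T_TEST.findall(text):
        recomputed = 2 * stats.t.sf(abs(float(t)), float(df))
        # Only exactly reported p values (p = ...) can be compared
        # directly; inexact ones (< or >) need a direction check.
        if comp == "=" and abs(recomputed - float(p)) > 0.0005:
            yield df, t, float(p), round(recomputed, 4)

# "t(28) = 2.20, p = .30" gets flagged: the recomputed p is about .036.
print(list(check_t_tests("We found an effect, t(28) = 2.20, p = .30.")))
```

As the quoted docs note, anything that deviates from APA format silently falls through the regex, so a tool like this can only ever flag what it can parse.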
> There’s a big, uncomfortable question of how to criticize past work, and whether online critiques of past work constitute “bullying” or shaming.

Science is fundamentally reputation-driven. One of the primary incentives that encourages scientists to do science, if not the primary one, is the chance of raising their prestige. Citations are one very quantifiable yardstick for this.

If positive social sanctions are a driving force for science, then it's entirely reasonable that negative sanctions should come into play too. If a well-cited paper can attract fame, then a poor paper should likewise attract shame.

Otherwise you have a positive feedback loop where, once a scientist has attracted enough prestige, they are untouchable. We need negative feedback to balance that out.
This, coupled with the Automatic Statistician (https://www.automaticstatistician.com/index/), will help fix a lot of the biases and human errors that creep into scientific research.
The article sadly doesn't report statcheck's false positive rate. I assume the paper does?

I mean, it just uses basic regular expressions, so I can see it easily producing bad checks. I assume the authors take this into account.
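One concrete way a purely pattern-based check could misfire, assuming it recomputes p as two-tailed by default (values here are illustrative, not from the paper):

```python
from scipy import stats

# Hypothetical false positive: a checker that recomputes p assuming a
# two-tailed test will flag a correctly reported one-tailed result.
t, df = 1.80, 30
print(2 * stats.t.sf(t, df))  # ~.082: what a two-tailed checker expects
print(stats.t.sf(t, df))      # ~.041: what a one-tailed author reports
# So "t(30) = 1.80, p = .04" is fine one-tailed but looks inconsistent.
```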
A good distinction between “peer reviewed” and “computer verified”:

> “The literature is growing faster and faster, peer review is overstrained, and we need technology to help us out,”

This is a problem in every field, not just psychology.

I want someone to tell me the distribution (or average ratio) of papers read to papers written.

Every thesis written is supposed to add some delta to the state of the art. But there is no method for doing a diff between past and present versions of human knowledge. How do we make science less redundant and more efficient?

I dream of aggregators for everything.
I can certainly understand people being nervous about academic debate moving to social media. It would be a hassle for climate scientists if every paper got a brigade of climate change deniers criticising it and you had to respond to those criticisms.

But this example - someone notifying you there's a mistake in your paper, when there really is a mistake? That seems like a strong argument /for/ academic debate via social media, not /against/ it.
> some found the emails annoying from PubPeer [since PubPeer notifies authors of comments] if Statcheck found no potential errors

I would.

> There’s a big, uncomfortable question of how to criticize past work, and whether online critiques of past work constitute “bullying” or shaming.

These are facts about your work. Learn to handle them or quit pretending to be a scientist.

> The gist of her article was that she feared too much of the criticism of past work had been ceded to social media, and that the criticism there is taking on an uncivil tone

Valid enough point. Criticism and correction can be done in a civil manner, and in an accepted forum.
For more context, see http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/
Remember when writers could do spell-checking and grammar-checking by “running a program” on their text files?

Here we have numbers-checking working the same way.

I bet this sort of feature gets built into word processors eventually, and puts wavy red lines under the results it flags.

We've had this sort of real-time “syntax” checking in software engineering for half a generation. It seems wise for other disciplines to consider adopting it too.

It's obviously got to be discretionary, just like spell-check is discretionary in browsers.

We will get a new genre of humor, though: “statcheck fail.”
While it is definitely to the benefit of all that the bot emails authors when it finds mistakes, emailing when it doesn't find anything is a dark pattern. It reminds me of those bots that spam me after scraping my LinkedIn.
Why does the article focus even for a paragraph on whether egos would be bruised? If the result is a general improvement in readers' understanding, then as far as I'm concerned, case closed. Good on them!