Urgh, Botometer. I guessed it would be that as soon as I saw the headline.

I used to work on fighting bots. Botometer has a long and storied history of making totally false claims about Twitter accounts; in the past it identified something like 50% of US Congress as bots. It has unfortunate credibility because it's a machine learning model produced by academic "research", but that credibility is undeserved. The academics who created it are, in my view, guilty of gross intellectual misconduct.

Botometer has had an absurdly high false positive rate for years, and Twitter are right to call Musk out for using it, though presumably Musk was just as conned as everyone else who has used this tool. Really the Botometer papers should all be retracted, as should any papers that relied on it, and the researchers who created it should be fired. Unfortunately this would require retracting huge chunks of academic social bot research, because Botometer is just *that* prevalent.

A thorough debunking of the model by Gallwitz and Kreil can be found here:

https://arxiv.org/pdf/2207.11474.pdf

"In this paper, we point out a fundamental theoretical flaw in the widely-used study design for estimating the prevalence of social bots. Furthermore, we empirically investigate the validity of peer-reviewed Botometer-based studies by closely and systematically inspecting hundreds of accounts that had been counted as social bots. We were unable to find a single social bot. Instead, we found mostly accounts undoubtedly operated by human users, the vast majority of them using Twitter in an inconspicuous and unremarkable fashion without the slightest traces of automation. We conclude that studies claiming to investigate the prevalence, properties, or influence of social bots based on Botometer have, in reality, just investigated false positives and artifacts of this approach."

It took them years to get that paper published, and when they first announced their work the Botometer authors simply dismissed them as "academic trolls" and ignored the problems they reported (except for hard-coding their examples to be correct!).

If a full paper is too much, I've written a couple of essays about the problems of social bot research. This one summarizes an earlier, longer version of the Gallwitz and Kreil paper above:

https://blog.plan99.net/fake-science-part-ii-bots-that-are-not-c66129e5e3f5

and that earlier paper cites another essay I wrote back in 2017 about a non-Botometer-based Twitter bot paper:

https://blog.plan99.net/did-russian-bots-impact-brexit-ad66f08c014a

Given these issues it's not hugely surprising that Musk believes incorrect things about Twitter bots. The field of Twitter bot research is massive, with over 10,000 papers, and the original Botometer paper has been cited over 800 times. He is far from alone: many politicians and journalists have fallen for these claims too. Twitter should probably have pushed back far more strongly, far earlier, but the general convention of never criticizing academics, regardless of how dishonest they become, defanged them, and they never went further than a rather mildly worded blog post. Now the chickens have come home to roost. Misinformation spread by "misinformation researchers" is creating real-world legal consequences.
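
Coming back to the false positive problem: a rough way to see why such studies end up "just investigating false positives" is plain base-rate arithmetic. The numbers below are purely illustrative assumptions on my part (not Botometer's measured error rates): if genuine bots are a small fraction of accounts, even a modest false positive rate means most flagged accounts are humans.

    # Illustrative base-rate arithmetic: how much of a classifier's "bot"
    # output is actually bots, given an assumed true bot prevalence,
    # sensitivity, and false positive rate. All numbers are made up.

    def flagged_precision(prevalence: float, sensitivity: float, fp_rate: float) -> float:
        """Fraction of flagged accounts that really are bots (precision)."""
        true_positives = prevalence * sensitivity
        false_positives = (1.0 - prevalence) * fp_rate
        return true_positives / (true_positives + false_positives)

    # Assume bots are 1% of accounts and the classifier catches every bot.
    for fp_rate in (0.01, 0.05, 0.20):
        p = flagged_precision(prevalence=0.01, sensitivity=1.0, fp_rate=fp_rate)
        print(f"FP rate {fp_rate:.0%}: {p:.0%} of flagged accounts are real bots")

    # FP rate 1%: 50% of flagged accounts are real bots
    # FP rate 5%: 17% of flagged accounts are real bots
    # FP rate 20%: 5% of flagged accounts are real bots

With those toy numbers, even a 1% false positive rate means half the "bots" are humans, and it collapses from there as the false positive rate climbs. That's the mechanism behind the Gallwitz and Kreil finding quoted above, and the claim in this comment is that Botometer's actual false positive rate is far worse than these toy figures.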