"You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code." -Ken Thompson<p>"Given enough eyeballs, all bugs are shallow." -Eric S Raymond pretending to quote Linus Torvalds by mis-attributing his own wishful fallacy as "Linus's Law"<p>Then there's Theo de Raadt's salty quote about ESR's ridiculous "many eyes" argument that Raymond deceptively calls "Linus's Law":<p><a href="https://groups.google.com/g/fa.openbsd.tech/c/gypClO4qTgM/m/UzcgS_iYn1IJ" rel="nofollow">https://groups.google.com/g/fa.openbsd.tech/c/gypClO4qTgM/m/...</a><p>"Oh right, let's hear some of that "many eyes" crap again. My favorite part of the "many eyes" argument is how few bugs were found by the two eyes of Eric (the originator of the statement). All the many eyes are apparently attached to a lot of hands that type lots of words about many eyes, and never actually audit code." -Theo de Raadt on ESR's "Linus's Law"<p>Actually, that fallacious "many eyes" argument was "formulated" by Eric S Raymond (to whom Theo was referring as "the originator of the statement"), which ESR misleadingly named "Linux's Law" in "honor" of Linus Torvalds, who never even made that claim, which is ironic because it actually dishonors Linus by being an invalid fallacy.<p><a href="https://en.wikipedia.org/wiki/Linus%27s_law" rel="nofollow">https://en.wikipedia.org/wiki/Linus%27s_law</a><p>>Validity<p>>In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate. While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".<p>>The persistence of the Heartbleed security bug in a critical piece of code for two years has been considered as a refutation of Raymond's dictum. Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain. In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking". Large scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.<p>>Empirical support of the validity of Linus's law was obtained by comparing popular and unpopular projects of the same organization. Popular projects are projects with the top 5% of GitHub stars (7,481 stars or more). Bug identification was measured using the corrective commit probability, the ratio of commits determined to be related to fixing bugs. 
What little experience Raymond DOES have auditing code was a fiasco and an embarrassing failure: his reading of the code was incompetent and deeply tainted by his preconceived political ideology and conspiracy theories about climate change, which were his only motivation for auditing it in the first place. His sole quest was to deceptively discredit the scientists who warned about climate change. The code he found and highlighted as a smoking gun was actually COMMENTED OUT, and he never addressed the fact that the scientists were vindicated.

http://rationalwiki.org/wiki/Eric_S._Raymond

>During the Climategate fiasco, Raymond's ability to read other peoples' source code (or at least his honesty about it) was called into question when he was caught quote-mining analysis software written by the CRU researchers, presenting a commented-out section of source code used for analyzing counterfactuals as evidence of deliberate data manipulation. When confronted with the fact that scientists as a general rule are scrupulously honest, Raymond claimed it was a case of an "error cascade," a concept that makes sense in computer science and other places where all data goes through a single potential failure point, but in areas where outside data and multiple lines of evidence are used for verification, doesn't entirely make sense. (He was curiously silent when all the researchers involved were exonerated of scientific misconduct.)