
The right thing for the wrong reasons: FLOSS doesn't imply security

37 points by Seirdy over 3 years ago

9 comments

matheusmoreira over 3 years ago
It does imply trust, however. I go out of my way to read source code, and I'm a lot more comfortable using code I've read compared to opaque binaries nobody really knows a thing about. Free software is not immune to vulnerabilities, but it is quite resistant to people doing shady stuff just because they think they can get away with it. Now, with reproducible builds, it's going to be even more trustworthy.
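A minimal sketch of the check that reproducible builds enable, assuming you have already rebuilt the artifact yourself under a pinned toolchain; the file paths are hypothetical placeholders:

    # Compare the SHA-256 of a locally rebuilt artifact against the one the
    # project distributes; with a reproducible build the digests should match.
    # Paths below are placeholders for illustration only.
    import hashlib

    def sha256(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    local = sha256("build/package.tar.gz")        # what you built from source
    upstream = sha256("download/package.tar.gz")  # what the project shipped
    print("reproducible" if local == upstream else "MISMATCH: investigate the build")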
nonrandomstring over 3 years ago
Seirdy's article is mostly focused on bug-finding in the binary domain, by fuzzing, memory analysis, decompiling and other techniques. He makes the entirely correct observation that having source to audit is only one part of thorough debugging, because many exploits are only manifest at runtime in the context of specific hardware, operating systems, and build chains.

Seirdy does not denigrate source auditing, as some interpretations of his words here seem to say. This feels like quite a mature article, and there are implications he touches on but doesn't fully explore, like Thompson's "Trusting Trust" rabbit hole of the "malicious compiler" and the fact that security by obscurity has some serious clout if you can compile for non-standard hardware. The Non Specific Agency may have a zero-day for your Debian package, but it won't irk the program on your FPGA-emulated Fairchild F8 Microprocessor.
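To make the binary-domain point concrete, here is a toy fuzz harness in the spirit of the techniques the comment describes; the parse function is a hypothetical stand-in for whatever routine is under test, and real fuzzers such as AFL++ or libFuzzer add coverage feedback on top of this basic idea:

    # Toy fuzz harness: feed random byte strings to a target routine and record
    # the inputs that crash it. No source access is needed when the target is a
    # binary driven through its normal input interface.
    import random

    def parse(data: bytes) -> None:
        # Hypothetical target with a planted bug; stands in for the code under test.
        if len(data) > 8 and data[0] == 0xFF:
            raise ValueError("crash: malformed header accepted")

    crashes = []
    rng = random.Random(0)
    for _ in range(100_000):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            parse(blob)
        except Exception:
            crashes.append(blob)

    print(f"{len(crashes)} crashing inputs out of 100,000 attempts")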
viktorcode over 3 years ago
I didn't like a few of the implications the author makes.

> One of the biggest parts of the Free and Open Source Software definitions is the freedom to study a program and modify it; in other words, access to editable source code.

You don't have to have a FLOSS-compatible open source license to run security audits on the code. For instance, Microsoft allowed government entities to check Windows security-related source code for many years. Just having access to the code is enough for audits, regardless of the license.

> One such reason is that source code is necessary to have any degree of transparency into how a piece of software operates, and is therefore necessary to determine if it is at all secure or trustworthy. Although security through obscurity is certainly not a robust measure...

If code is not open sourced, it doesn't mean security through obscurity is employed. It simply means there's no public access to the code. This is a very common misconception.
kazinator over 3 years ago
"Imply" is a big word that requires logical proof. Even formal verification may fall short of implying security, unless it's from the transistor level on up.

Rather, the issue is that non-FOSS implies the existence of significant hindrances in the area of security. It also implies a dependency on a single vendor, and on their responsiveness to incidents.
phendrenad2 over 3 years ago
Yeah, I'm not sold on the "FLOSS is more secure" idea either. Good on you for trying to write down some arguments against it. But unfortunately, comparing the security of closed-source software to the security of open-source software is too difficult. You'd basically have to take two competing programs, one open, one closed, spend many human-hours trying to hack both, and then publish your results. I believe that in such a test, open source would perhaps have MORE security bugs. That's just what my gut tells me.

Instead, people simply look at the number of security holes patched and see that FLOSS projects report and fix many more holes. So surely FLOSS is more secure, right? Or they rely on folk wisdom like "with enough eyes all bugs are shallow", which isn't any more proven in the real world than "FLOSS is more secure".
btdmaster over 3 years ago
The most concrete counterexample I can think of is the Windows XP leak: when the source code leaked, Microsoft seemed genuinely annoyed, because security flaws had been carried all the way up to Windows 10 (in the name of backwards compatibility).
DonHopkins over 3 years ago
"You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code." -Ken Thompson

"Given enough eyeballs, all bugs are shallow." -Eric S Raymond, pretending to quote Linus Torvalds by mis-attributing his own wishful fallacy as "Linus's Law"

Then there's Theo de Raadt's salty quote about ESR's ridiculous "many eyes" argument that Raymond deceptively calls "Linus's Law":

https://groups.google.com/g/fa.openbsd.tech/c/gypClO4qTgM/m/UzcgS_iYn1IJ

"Oh right, let's hear some of that "many eyes" crap again. My favorite part of the "many eyes" argument is how few bugs were found by the two eyes of Eric (the originator of the statement). All the many eyes are apparently attached to a lot of hands that type lots of words about many eyes, and never actually audit code." -Theo de Raadt on ESR's "Linus's Law"

Actually, that fallacious "many eyes" argument was "formulated" by Eric S Raymond (to whom Theo was referring as "the originator of the statement"), which ESR misleadingly named "Linus's Law" in "honor" of Linus Torvalds, who never even made that claim, which is ironic because it actually dishonors Linus by being an invalid fallacy.

https://en.wikipedia.org/wiki/Linus%27s_law

>Validity

>In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate. While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".

>The persistence of the Heartbleed security bug in a critical piece of code for two years has been considered as a refutation of Raymond's dictum. Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain. In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking". Large scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.

>Empirical support of the validity of Linus's law was obtained by comparing popular and unpopular projects of the same organization. Popular projects are projects with the top 5% of GitHub stars (7,481 stars or more). Bug identification was measured using the corrective commit probability, the ratio of commits determined to be related to fixing bugs. The analysis showed that popular projects had a higher ratio of bug fixes (e.g., Google's popular projects had a 27% higher bug fix rate than Google's less popular projects). Since it is unlikely that Google lowered its code quality standards in more popular projects, this is an indication of increased bug detection efficiency in popular projects.

The little experience Raymond DOES have auditing code has been a total fiasco and embarrassing failure, since his understanding of the code was incompetent and deeply tainted by his preconceived political ideology and conspiracy theories about climate change, which was his only motivation for auditing the code in the first place. His sole quest was to deceptively discredit the scientists who warned about climate change. The code he found and highlighted was actually COMMENTED OUT, and he never addressed the fact that the scientists were vindicated.

http://rationalwiki.org/wiki/Eric_S._Raymond

>During the Climategate fiasco, Raymond's ability to read other peoples' source code (or at least his honesty about it) was called into question when he was caught quote-mining analysis software written by the CRU researchers, presenting a commented-out section of source code used for analyzing counterfactuals as evidence of deliberate data manipulation. When confronted with the fact that scientists as a general rule are scrupulously honest, Raymond claimed it was a case of an "error cascade," a concept that makes sense in computer science and other places where all data goes through a single potential failure point, but in areas where outside data and multiple lines of evidence are used for verification, doesn't entirely make sense. (He was curiously silent when all the researchers involved were exonerated of scientific misconduct.)
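For reference, the corrective commit probability mentioned in the Wikipedia excerpt above is simply the share of commits classified as bug fixes. A rough sketch of estimating it from a repository's history follows; the keyword heuristic is an illustrative assumption, not the classifier used in the cited study:

    # Rough estimate of the corrective commit probability: the fraction of
    # commits whose messages look like bug fixes. The regex heuristic is an
    # assumption for illustration only.
    import re
    import subprocess

    FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|fault|repair)\b", re.IGNORECASE)

    def corrective_commit_probability(repo: str = ".") -> float:
        subjects = subprocess.run(
            ["git", "-C", repo, "log", "--pretty=%s"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        if not subjects:
            return 0.0
        corrective = sum(1 for s in subjects if FIX_PATTERN.search(s))
        return corrective / len(subjects)

    # Run inside any git checkout to compare projects by this metric.
    print(f"corrective commit probability: {corrective_commit_probability():.1%}")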
SAI_Peregrinus over 3 years ago
FLOSS doesn't *imply* security, it *allows* verifying it. Closed-source software doesn't allow verifying it, so it's safest to assume it's insecure.
Bancakes over 3 years ago
Lack of FOSS implies lack of security. By default, software is insecure and has malfeatures that need turning off.