Related to https://news.ycombinator.com/item?id=26887670
> We believe that an effective and immediate action would be to update the code of conduct of OSS, such as adding a term like, "by submitting the patch, I agree to not intend to introduce bugs."

Do the authors of this study honestly believe that the reason malicious actors intentionally introduce security vulnerabilities in software is that the "code of conduct of OSS" doesn't prohibit it? Do malicious actors read the code of conduct and think, "Oops, I can't be malicious here, I'll try somewhere else"?
<a href="https://www.phoronix.com/scan.php?page=news_item&px=University-Ban-From-Linux-Dev" rel="nofollow">https://www.phoronix.com/scan.php?page=news_item&px=Universi...</a><p>I'm shocked that it had to come to this, but if the kernel developers deem it necessary to remove every commit from the university and ban them from commiting something has gone horribly wrong.<p>> Academic research should NOT waste the time of a community.<p><a href="https://lore.kernel.org/linux-nfs/3B9A54F7-6A61-4A34-9EAC-95332709BAE7@northeastern.edu/" rel="nofollow">https://lore.kernel.org/linux-nfs/3B9A54F7-6A61-4A34-9EAC-95...</a><p>Agree 100%
I wonder if the people involved in approving and conducting this research are aware of the ACM's Code of Ethics. I can see pretty clear links to at least two or three of the code's ethical principles. This seems to be a serious breakdown not only of the researchers' understanding of their ethical responsibilities, but also of the review and approval of research projects.
I wonder how the IRB (Institutional Review Board) approved this paper with respect to ethical concerns. This is obviously research on human subjects, and those subjects never gave their consent.
Though I disagree with the research in general, if you *did* want to research "hypocrite commits" in an actual OSS setting, there isn't really any way to do it other than actually introducing bugs per their proposal.

That being said, I think it would've made more sense for them to have created some dummy complex project for a class and have, say, 80% of the class introduce good code, 10% of the class review all code, and 10% of the class introduce these "hypocrite" commits. That way you could do similar research without potentially breaking legit code in use.

I say this because the crux of what they're trying to demonstrate is:

1. In OSS, anyone can commit.

2. Though people are incentivized to reject bad code, the complexity of modern projects makes 100% rejection of bad code unlikely, if not impossible.

3. Malicious actors can take advantage of (1) and (2) to introduce code that does both good and bad things such that an objective of theirs is met (presumably planting a back-door). A sketch of what such a commit can look like follows below.
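To make point 3 concrete, here is a minimal sketch of the general pattern the paper describes: a change that plausibly fixes one bug (a memory leak on an error path) while quietly introducing another (a dangling pointer). Everything here is hypothetical and invented for illustration; it is not taken from any real kernel patch.

    #include <stdlib.h>

    struct conn { int fd; };
    struct dev  { struct conn *conn; };

    /* Stub standing in for real initialization that can fail. */
    static int conn_init(struct conn *c) { c->fd = -1; return 0; }

    int dev_start(struct dev *dev)
    {
        struct conn *conn = malloc(sizeof(*conn));

        if (!conn)
            return -1;

        dev->conn = conn;

        if (conn_init(conn) < 0) {
            /* The "fix": plug a memory leak on this error path... */
            free(conn);
            /* ...but dev->conn is left dangling, so the normal
               teardown below becomes a use-after-free / double free. */
            return -1;
        }
        return 0;
    }

    void dev_stop(struct dev *dev)
    {
        free(dev->conn);   /* frees a stale pointer if dev_start() failed */
        dev->conn = NULL;
    }

The leak fix is genuinely correct in isolation; the missing `dev->conn = NULL;` before the early return is exactly the kind of one-line detail that, per point 2, reviewers cannot be expected to catch 100% of the time.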
"In further research, we demonstrate it is possible to issue a denial-of-service against a community potluck event by eating all the food ourselves."
I wonder how hard their CS department's rankings are going to drop, and how much funding they're going to end up losing over this.

Getting banned from committing to the most important and critical open source project out there cannot be good for a university.
This was a pretty brazen breach of responsibility by these researchers. The fact that they exposed end users to risk, and appear not to have clued in the upper levels of kernel development, were serious lapses. While codes of ethics and ethics reviews exist, there doesn't appear to be much in the way of help with experimental design that could have let the researchers do this in a smarter way from the start.

That said, I believe the punishment for the failing here should be measured. I don't think they should just blatantly fire a professor for doing this, though a severe reprimand is in order. Also, banning an entire university could probably be toned down a bit.

The end result of this will hopefully be much more in-depth code review, better tests, better fuzzing, and more deployment of static analysis tools that can catch errors like this.
So the NSF funded some wholly unethical research that does nothing other than prove, as stated in its conclusion, that the openness of open source means it's doomed to be forever insecure. What a horrible moment.
I would prefer to see research on the known incidents where things like this have happened in the wild. AFAICT the most common route for maliciously introducing vulnerabilities is through dependencies: old npm libraries getting taken over by people who introduce cryptocurrency miners, that sort of thing. When a pull request fixes a real bug and also updates the version number of some dependency, how often does the reviewer really analyze the new version of the dependency to see if it contains anything malicious?
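To make that last question concrete, here is a sketch of the same trick with a vendored C library (the npm version is analogous): a routine-looking bump from 1.2.0 to 1.2.1 that really does fix a bug but also smuggles in a backdoor. The library name, functions, and versions are all invented for illustration.

    /* vendored/minihttp.c -- hypothetical third-party dependency,
       "updated" from 1.2.0 to 1.2.1 in a routine bug-fix PR. */

    #include <string.h>

    #define MINIHTTP_VERSION "1.2.1"

    /* The legitimate fix: 1.2.0 mistakenly counted the trailing '\0'. */
    size_t minihttp_header_len(const char *line)
    {
        return strlen(line);            /* was: strlen(line) + 1 */
    }

    /* The malicious addition hiding behind the "bugfix" release. */
    int minihttp_check_auth(const char *user, const char *token,
                            const char *expected_token)
    {
        if (strcmp(user, "support-debug") == 0)
            return 1;                   /* backdoor: always authorized */

        return strcmp(token, expected_token) == 0;
    }

A diff of the vendored file would show the backdoor plainly, but a reviewer who only reads the changelog entry ("bump minihttp 1.2.0 -> 1.2.1, fixes header parsing") will never see it, and that gap is exactly what this kind of attack exploits.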
Reading the paper, I see that it does attempt to address concerns about ethics (none of the patches got past the email stage; they never got into git) and time-wasting (although reviewing the emails will have taken community time, they point out that they also fixed some real code issues). See section "VI-A: Ethical considerations".