> To make clear to everyone that I'm absolutely not joking:<p>> $ base64 private.key<p>> RBjU5k0Dfdqtyzx4ox6PfQoqrdCft/aFJieD2DQvloY=<p>> I'm publically leaking the key myself now. Don't trust it.<p>What?<p><a href="https://github.com/lawl/NoiseTorch/releases/tag/0.11.6" rel="nofollow">https://github.com/lawl/NoiseTorch/releases/tag/0.11.6</a>
The actual content is here: <a href="https://github.com/lawl/NoiseTorch" rel="nofollow">https://github.com/lawl/NoiseTorch</a><p>The article just (badly) sums up what you can find in the release notes, the issues, and the readme.
I think one of the largest risks that project-owner compromise poses to everyday users and businesses comes from widely used software with automated updates.<p>That leads to an argument for updates being applied manually, after inspection of the changes involved.<p>Counter-arguments could include:<p>- Users will not care to see what has changed in an update<p>- Security updates are important to roll out immediately<p>Responses to <i>those</i> could include:<p>- Automated rollout to the majority of users could be made conditional on a smaller subset of the community manually examining and approving the update first (not too dissimilar to a quality-assurance process). In a project-owner compromise like the one in the article, this should catch the issue and prevent rollout to users. If an update is approved "with concerns", the review community is likely to share those concerns with a wider audience, leading to awareness and, hopefully, resolution.<p>- Security updates could be rolled out more quickly, but with a requirement for sign-off by multiple security-focused engineers and product specialists (see the sketch below). That could reduce users' exploit-exposure time while still providing adequate review of the changes (security fixes can, in themselves, be challenging to review and confirm).<p>Also potentially relevant to this topic: how would a community that uses proprietary software develop confidence in an update before choosing to apply it locally?
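To make that multi-sign-off idea concrete: a threshold gate can be approximated today with detached signatures and a counting check. This is only a minimal sketch; update.tar.gz, the reviewers.gpg keyring, and the sigs/ layout are all hypothetical, and it naively assumes one signature file per distinct reviewer. A real system such as The Update Framework (TUF) enforces thresholds properly, including requiring distinct keys.

    # Hypothetical layout: update.tar.gz plus one detached
    # signature per reviewer in sigs/ (alice.sig, bob.sig, ...)
    valid=0
    for sig in sigs/*.sig; do
      # count signatures that verify against the reviewer keyring
      gpg --no-default-keyring --keyring ./reviewers.gpg \
          --verify "$sig" update.tar.gz 2>/dev/null && valid=$((valid + 1))
    done
    if [ "$valid" -ge 2 ]; then
      echo "threshold met, rolling out"
    else
      echo "refusing update: only $valid valid signature(s)" >&2
    fi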
> a key-infra open source project<p>...and then the article proceeds to mention a project that is not officially packaged or distributed by any of the major distributions.
In my opinion there is not much difference, in terms of life cycle, between a vulnerability that is introduced intentionally and one that is introduced unintentionally.<p>Trust is always relative. Just as with commercial software, trust in the original authors is never total; it can only grow through continuous verification and continued non-exploitation.
Everyone should PGP-sign their git commits with a secret key stored on a YubiKey. Make small changes to your code, read the diff, then commit and sign before pushing to the repo. IMO, that's really the only way to protect the integrity of source code.<p>If you are adding large changes without carefully reading the diffs, and you do not sign your commits, it's just a matter of time before something malicious slips through.
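For reference, the git side of that workflow is just a couple of config switches. A sketch; the key ID 0xDEADBEEF is a placeholder, and the YubiKey enrollment itself happens in GnuPG, not git:

    # point git at the signing (sub)key -- 0xDEADBEEF is a placeholder
    git config --global user.signingkey 0xDEADBEEF
    # sign every commit by default instead of remembering -S each time
    git config --global commit.gpgsign true

    # review exactly what is about to be committed, then commit
    git diff --staged
    git commit -m "small, reviewed change"   # signed via commit.gpgsign

    # anyone can then check the signature
    git log --show-signature -1
    git verify-commit HEAD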
Oof, not a great situation. I hope the devs can do an audit and confirm their code looks good. The C code and the models are the only things that need scrutiny.<p>However, anyone who wanted to use this code immediately could run it in a qemu VM and forward a port or something.
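A sketch of that kind of containment, assuming an existing guest.qcow2 disk image: QEMU's user-mode NIC forwards host port 2222 to the guest's SSH on 22, so you can pull output out of the VM without exposing anything else.

    # boot a throwaway guest; user-mode networking means the VM
    # can reach out, but nothing on the LAN can reach it directly
    qemu-system-x86_64 \
      -m 2G -enable-kvm \
      -drive file=guest.qcow2,format=qcow2 \
      -nic user,hostfwd=tcp::2222-:22

    # then talk to the sandboxed guest from the host
    ssh -p 2222 user@localhost

Actually using a microphone filter from inside a VM would additionally need audio passthrough (e.g. a PulseAudio tunnel), so treat this as a containment sketch rather than a drop-in setup.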
It's a bit unclear what's going on there.<p>Is the codebase itself compromised? Did the developer's computer get compromised?<p>Did one of the external libraries that it pulls in from git get compromised?