I've always thought that Dependabot was busy-work, a waste of time. This article makes a good point that drives it home: alarms that aren't real make all alarms useless. Dependabot is especially painful in non-typed languages (Python, Ruby, and especially JavaScript), where "upgrading" a library can break things in ways you won't discover until production.

Maybe the constant work, the extra build time (and the cash for all of it), and the risk of breaking production are worth it for the 0.01% of the time there's a real vulnerability? It seems like a high price to pay, though. When there's a major software vulnerability (like log4j), the whole industry usually swarms around it, and that alarm has high value.

I just realized how much CircleCI probably loves Dependabot. I wonder what hit their margins would take if we moved off it collectively as an industry.
This is a similar mechanism to govulncheck (https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck), which has been quite nice to use in practice. Because it only cares about vulnerable code that is actually possible to call, it's quiet enough to use as a presubmit check without annoying people. Nice to see this for other languages.
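For anyone who hasn't wired it up: here's a minimal sketch of what that presubmit gate could look like, assuming govulncheck is already installed (go install golang.org/x/vuln/cmd/govulncheck@latest). The non-zero exit code on findings is the behavior I've observed, not something I'd treat as a contract.

```go
// presubmit.go: run govulncheck over the whole module and fail the check
// if it reports reachable vulnerabilities.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("govulncheck", "./...")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// govulncheck exits non-zero when it finds vulnerabilities that are
		// actually reachable from your code (assumption based on observed behavior).
		fmt.Fprintln(os.Stderr, "presubmit failed: reachable vulnerabilities found")
		os.Exit(1)
	}
}
```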
Really nice idea to only show warnings if they are relevant. It's indeed annoying when you need to upgrade lodash just to make your audit tool stop showing critical warnings about some function that isn't used at all.

This is not open source, though? It makes a big difference for some whether you're able to run the check offline or you're forced to upload your code to some service.

One feature I'd love in such a tool would be the ability to get the relevant parts of the changelog of the package that needs to be upgraded. It's not responsible to just run the upgrade command without checking the changelog for breaking or otherwise relevant changes. That's exactly why upgrades tend to be done very late: there is a real risk of breaking something, even if it's just a minor version bump.
How does this tool go from a vulnerability in a library to a set of affected functions/control paths? My understanding was that the CVE format is unstructured, which makes an analysis like this difficult.
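I don't know how this particular tool does it, but the general approach (the one govulncheck takes) is that a curated advisory database on top of the CVE, not the CVE text itself, records which symbols in a package are affected; the scanner then builds a static call graph of your program and checks whether any of those symbols are reachable from your entry points. A toy sketch of that reachability step, with a made-up call graph and symbol names:

```go
// reachable.go: toy reachability check over a hand-written call graph.
// In a real scanner the graph comes from static analysis and the vulnerable
// symbols come from the advisory database; here both are made up.
package main

import "fmt"

// callGraph maps each function to the functions it calls (hypothetical data).
var callGraph = map[string][]string{
	"main.main":          {"app.HandleRequest"},
	"app.HandleRequest":  {"jsonlib.Parse"},
	"jsonlib.Parse":      {},
	"xmllib.ParseEntity": {}, // the vulnerable symbol; nothing calls it
}

// anyReachable reports whether any vulnerable symbol can be reached from entry.
func anyReachable(entry string, vulnerable map[string]bool) bool {
	seen := map[string]bool{}
	queue := []string{entry}
	for len(queue) > 0 {
		fn := queue[0]
		queue = queue[1:]
		if seen[fn] {
			continue
		}
		seen[fn] = true
		if vulnerable[fn] {
			return true
		}
		queue = append(queue, callGraph[fn]...)
	}
	return false
}

func main() {
	vulnerable := map[string]bool{"xmllib.ParseEntity": true}
	// Prints false: the vulnerable symbol exists in the dependency tree
	// but is never called, so the alert can be suppressed.
	fmt.Println(anyReachable("main.main", vulnerable))
}
```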
Joke's on you, I already ignore 100% of them /s

I like the promise, but how can I completely trust that the ignored part is not actually reachable? Most languages (with a few exceptions) allow some dynamic magic that might not be detected. At a previous job we were bombarded with dependency upgrades; I can still feel the pain in my bones.
> "Have you ever gotten a "critical vulnerability in dependency" alert and when you look at it, it's something like “XML parser vulnerability in some buried transitive library that you actually use for parsing JSON and therefore aren’t affected by at all?"<p>Stop right there pal.<p>This amateurish risk assessment is part of the problem. How do you know that, say, an XML file cannot be smuggled disguised as a JSON into your app?
I know people won't like this solution, but mine is to use as few dependencies as possible. When I look for a new library, I check its dependencies: the fewer the better, and zero is best. I look through the dependencies as well, even reading the source of the library and the source of its dependencies. What are they doing? Do they have a reason to exist? Was the dev being prudent or lazy? Were they coupling things that shouldn't have been coupled?

I'll also evaluate whether I really need a library at all. Maybe the thing I need is something I can do myself in 3-30 lines of code, and I believe I'm unlikely to run into edge cases for my use case. If so, I'll write the code rather than deal with another dependency.
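As a made-up example of that 3-30 line case: a retry-with-backoff helper is the kind of thing I'd rather write inline than pull in a dedicated dependency for. A rough sketch (names and numbers are arbitrary):

```go
// retry.go: a small retry-with-exponential-backoff helper, written inline
// instead of adding a dependency for it.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retry calls fn up to attempts times, doubling the delay between tries,
// and returns the last error if every attempt fails.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retry(3, 10*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("transient failure")
		}
		return nil
	})
	fmt.Println(err, "calls:", calls) // <nil> calls: 3
}
```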
How the hell do you end up with 1644 vulnerable packages anyway?

* rhetorical question, JS...

It was actually one of the main drivers for me to start using Go instead of JavaScript for server-side applications and CLIs about 8 years ago.
The problem really comes down to data quality in disclosing vulnerabilities.

With higher-quality data, better CVSS scores can be calculated. With higher-quality data, affected code paths can be better disclosed. With higher-quality data, unknown vulnerabilities may be found in parallel to the known ones.

I don't think any tool or automation can solve the problem of high-quality data. Humans have to exercise discernment to provide it. No amount of code analysis can solve that, but it sure can help.
This looks really cool. However, for regulated industries, auditors will never accept "we're not vulnerable to CVE-1234 in Blah-blah-blah Library because our code doesn't use the vulnerable functions." All auditors care about is version numbers.
Even if this were done 100% correctly, I still see a big problem with the approach: the vulnerable function is unreachable now, but that could change with the very next commit.

So you are basically trading less work now for more work before the next release (which could sometimes make sense).