Way back when, I saw a report on Hacker News about secret exposure from websites that deployed directly via a git repo as a webroot and didn't block access to .git/<p>I added a cheeky message to my site's .git/ folder for anyone who attempted to view it.<p>About 2 or 3 months later I started getting "security reports" to the catch-all address about an exposed git folder that was leaking my website's secrets.<p>Apparently, because my site didn't return 404, their script assumed I was exposed and they <i>oh so helpfully</i> reported it to me.<p>I got 4 or 5 of these before I decided to make it 404 so they would stop, mainly because I didn't want to bring false-positive fatigue onto emails with "security exploit" in the subject line.<p>I have a feeling CNAs are bringing this kind of low-effort, zero-regard-for-false-positive-fatigue bullshit to CVEs. Might as well just rip that bandaid off now and stop trusting anything besides the Debian security mailing list.
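For reference, the scanners described above treat any non-404 response as proof of exposure. A less naive checker would actually validate the content of .git/HEAD before crying wolf. A minimal sketch (hypothetical function names, not any particular scanner's code):

```python
import urllib.error
import urllib.request


def looks_like_git_head(body: str) -> bool:
    """A real .git/HEAD file is either a symbolic ref line or a bare
    40-character hex commit hash -- not an HTML page with a cheeky message."""
    body = body.strip()
    if body.startswith("ref: refs/"):
        return True
    return len(body) == 40 and all(c in "0123456789abcdef" for c in body)


def git_head_exposed(base_url: str) -> bool:
    """Fetch /.git/HEAD and validate its content, instead of flagging
    every non-404 response as a leaked repository."""
    url = base_url.rstrip("/") + "/.git/HEAD"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            if resp.status != 200:
                return False
            return looks_like_git_head(
                resp.read(256).decode("utf-8", errors="replace")
            )
    except (urllib.error.URLError, TimeoutError):
        return False
```

With this check, a 200 response containing an HTML taunt page is correctly classified as "not exposed", which would have spared the inbox above a few reports.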
I'm a command-line development tools maintainer for an OS. I am not unfamiliar with high-severity CVEs landing in my inbox along the lines of "gdb crashes on a handcrafted core file, causing a DoS". I am unfamiliar with a real world in which a simple old-fashioned segfault in a crash analysis tool is truly a denial-of-service security vulnerability, but our security department assures us we need to drop all revenue work and rush out a fix, because our customers may already be aware that our product is shipping with a known CVE.<p>There are occasions on which I recognize a CVE as a legitimate possible threat to an asset. By and large, however, they seem to be marketing material for either organizations offering "protection" or academics seeking publication.<p>I think, like anything else of value, inflation will eat away at the CVE system until something newer and once again effective comes along.
Lots of CVEs are illegitimate. You have people creating whole "vulnerabilities" that are just long-known features of various technologies. The worst ones I remember are the "discoveries" of "Zip Slip" and "ZipperDown", both just gotchas in the zip format that have been known about for decades. Both got trendy websites just like Spectre and Meltdown, and loads of headlines. ZipperDown.org is now an online slots website.<p>- <a href="https://snyk.io/research/zip-slip-vulnerability" rel="nofollow">https://snyk.io/research/zip-slip-vulnerability</a><p>- <a href="http://phrack.org/issues/34/5.html#article" rel="nofollow">http://phrack.org/issues/34/5.html#article</a><p>- <a href="https://www.youtube.com/watch?v=Ry_yb5Oipq0" rel="nofollow">https://www.youtube.com/watch?v=Ry_yb5Oipq0</a>
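For anyone unfamiliar with the gotcha: "Zip Slip" is plain path traversal during extraction, i.e. an archive entry named something like <i>../../etc/cron.d/evil</i>. The long-known defense is to resolve each entry's destination and refuse anything that escapes the target directory. A minimal sketch in Python (note that CPython's own <i>zipfile.extractall</i> already sanitizes entry names; hand-rolled extractors in other languages historically did not, which is the entire "vulnerability"):

```python
import os
import zipfile


def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a zip while rejecting any entry whose resolved destination
    lands outside dest_dir (the decades-old 'Zip Slip' traversal gotcha)."""
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for entry in zf.namelist():
            target = os.path.realpath(os.path.join(dest_dir, entry))
            # An entry like '../../etc/cron.d/evil' resolves outside dest_dir.
            if os.path.commonpath([dest_dir, target]) != dest_dir:
                raise ValueError(f"blocked path traversal entry: {entry!r}")
        zf.extractall(dest_dir)
```

The same two-line "resolve, then compare prefixes" check is what the Snyk advisory recommends for every affected language; nothing about it was new in 2018.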
I think this goes hand-in-hand with people naming security vulnerabilities and trying to make them a big spectacle. Sometimes it is a legitimately serious vulnerability, like Shellshock or Heartbleed, but a lot are just novices trying to get their 15 minutes of fame. I remember a few years back there was a "vulnerability" named GRINCH, where the person who discovered it claimed it was a root privilege escalation that worked on all versions of Red Hat and CentOS. They made a website and everything for it, and tried to hype it up before disclosing what it was. Turns out the "vulnerability" was members of the wheel group being able to use sudo to run commands as root.
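To spell out how unremarkable that is: the behavior GRINCH described corresponds to the stock sudoers rule that Red Hat-family distributions ship (shown roughly as below; check your own /etc/sudoers for the exact line):

```
## /etc/sudoers (default on RHEL/CentOS): members of the wheel group
## may run any command as any user -- a documented admin feature,
## not a privilege escalation.
%wheel  ALL=(ALL)  ALL
```

In other words, "users you deliberately put in the admin group can administer the machine".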
I remember when people in the security community started filing CVEs against the TensorFlow project, claiming that code execution was possible with a handcrafted TensorFlow graph, and the team would have to try and explain, "TensorFlow GraphDefs <i>are</i> code".
I understand the frustration, and I'm pretty sure the root cause is straightforward ("number of CVEs generated" is a figure of merit in several places in the security field, especially resumes, even though it is a stupid metric).<p>But the problem, I think, contains its own solution. The purpose of CVEs is to ensure that we're talking about the same vulnerability when we discuss a vulnerability; to canonicalize well-known vulnerabilities. It's not to create a reliable feed of all vulnerabilities, and certainly not as an awards system for soi-disant vulnerability researchers.<p>If we stopped asking so much from CVEs, stopped paying attention to resume and product claims of CVEs generated (or detected, or scanned for, or whatever), and stopped trying to build services that monitor CVEs, we might see a lot less bogus data. And, either way, the bogus data would probably matter less.<p>(Don't get me started on CVSS).
The whole problem is that at some point people started seeing CVEs as an achievement, as "if I get a CVE it means I found a REAL VULN", when really a CVE should just be seen as an identifier. It means multiple people talking about the same vuln know they're talking about the same vuln. It means if you read an advisory about CVE-xxx-yyy you can ask the vendor of your software if they already have a patch for it.<p>It simply says nothing about whether a vuln is real, relevant or significant.
I feel this is a consequence of paying people for reporting security bugs (and <i>only</i> security bugs). People start to inflate the number of reports and no longer care about proper severity assignment, as long as it gets them that coveted "security bug" checkbox. I mean, I can see how bounty programs and projects like HackerOne can be beneficial, but this is one of the downsides.<p>The CNA system is actually an improvement, since it at least puts some filter on it. Before, it was the Wild West: anybody could assign a CVE to any issue in any product, without any feedback from anybody knowledgeable in the code base, and assign any severity they liked, which led to wildly misleading reports. I think CNAs at least provide some sourcing and order.
Didn't check who filed those bugs, but I've seen companies requiring a discovered CVE to apply for some jobs, and the natural consequence is gaming the system...
How do you mark a CVE as invalid or request an update?
I tried the Update Published CVE process, but nothing happened: not even a rejection, just no answer.
Multiple invalid CVEs were reported against OpenWrt, but we (the OpenWrt team) haven't found out how to inform MITRE.<p>For example CVE-2018-11116:
Someone configures an ACL to allow everything, and then code execution is possible, as expected:
<a href="https://forum.openwrt.org/t/rpcd-vulnerability-reported-on-vultdb/16497/4" rel="nofollow">https://forum.openwrt.org/t/rpcd-vulnerability-reported-on-v...</a><p>and CVE-2019-15513:
The bug was fixed in OpenWrt 15.05.1 in 2015:
<a href="https://lists.openwrt.org/pipermail/openwrt-devel/2019-November/025453.html" rel="nofollow">https://lists.openwrt.org/pipermail/openwrt-devel/2019-Novem...</a><p>We were not informed about either CVE. For the first, someone asked in the OpenWrt forum about its details and we were not even aware it existed. The second I saw in a public presentation from a security company mentioning 4 CVEs against OpenWrt, when I was only aware of 3.<p>Meanwhile, when we or a genuine security researcher request a CVE for a real problem, it often takes weeks until we get one; we have released security updates without a CVE because we didn't want to wait that long. It would also be nice to be able to update a CVE later to add a link to our detailed security report.
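For context on the first CVE: rpcd access control lives in JSON files under /usr/share/rpcd/acl.d/, and the "vulnerable" configuration is simply an ACL that grants everything. Roughly like the following sketch (field names per OpenWrt's ACL format; consult the OpenWrt ubus/rpcd documentation for the authoritative schema):

```json
{
    "superuser": {
        "description": "Deliberately grants full access -- remote command
                        execution is the configured behavior, not a bug",
        "read":  { "ubus": { "*": [ "*" ] } },
        "write": { "ubus": { "*": [ "*" ] } }
    }
}
```

Filing a CVE against this is like filing one against an sshd_config with root login enabled.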
We get dozens of "high-priority" security issues filed that are resolved with "we're an open-source project; this information is public on purpose".<p>Our bug bounty clearly states that chat, Jira, Confluence, and our website are all out of scope. Almost all of our reports are on those properties.
MITRE is a US-government-supported team, and previously they could not scale to meet the demand for their efforts. They did the best they could, but there were still a lot of angry people out there. The whole world uses CVEs, but the system is US-funded, by the way.<p>In come the new CNAs, scaling the effort through trusted teams, which makes sense. The MITRE team can only do so much on their own.<p>Unfortunately, I don't think anyone will be as strict and passionate about getting CVEs done right as the original MITRE team.<p>Here's hoping they can revoke CNA status from teams who consistently fail to meet a quality bar.
So... the real question is, why are CVEs that merely repackage software being accepted into the CVE database anyway? If the issue is in a Docker image, the report should be immediately rejected: file the CVE against the precise upstream project instead.
That sucks. Perhaps the most annoying part of modern infosec is the absolute deluge of noise you get from scanning tools. Superfluous CVEs like this contribute to the sea of red that security engineers wake up to when they look at their dashboards. Unsurprisingly, these are eventually mostly ignored.<p>Every large security organization requires scanning tooling like Coalfire, Checkmarx, Fortify and Nessus, but I've rarely seen it used in an actionable way. Good security teams come up with their own (effective) ways of tracking new security incidents, or vastly filter the output of these tools.<p>The current state of CVEs and CVE scanning is that you'll have to wrangle with bullshit security reports if you run any nontrivial software. This is especially the case if you have significant third-party JavaScript libraries or images. And unfortunately you can't just ignore it all, because every once in a while one of those red rows in the dashboard will actually represent something like Heartbleed.
Communication breakdown.<p>It's a bit naughty how "security researchers" don't appear to make a good effort to communicate upstream.<p>And the fact that Jerry has problems reaching out to NVD or Mitre is worrying.
See additional context in this issue in docker-library/memcached: <a href="https://github.com/docker-library/memcached/issues/63#issuecomment-747732668" rel="nofollow">https://github.com/docker-library/memcached/issues/63#issuec...</a><p>And this issue in my docker-adminer: <a href="https://github.com/TimWolla/docker-adminer/issues/89" rel="nofollow">https://github.com/TimWolla/docker-adminer/issues/89</a>