> First of all, a linear “score” like CVSS just cannot work in cybersecurity. Instead, we should have a system on the attributes of a vulnerability.

This is exactly what CVSS is: a scoring system based on attributes.

> In the first category, we might have attributes such as: Needs physical access to the machine, Needs to have software running on the same machine, even if in a VM, Needs to run in the same VM.

This is exactly what the AV (Attack Vector) metric in CVSS is.

> In the second category, we might have attributes such as: Arbitrary execution, Data corruption (loss of integrity), Data exfiltration (loss of confidentiality).

This is exactly what the impact metrics in CVSS are.

I fear the author has a severe misunderstanding of what CVSS is and where the scores come from. There's even an entire set of environmental metrics for adjusting a score to your specific environment. I'd recommend playing around with the calculator a little to better understand how the scores work: https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator
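To make the "attributes, not a score" point concrete: the score is nothing but a function of the attributes. Here is a minimal sketch of the CVSS v3.1 base-score calculation in Python, using the metric weights from the public FIRST specification; the example vector at the end is just an illustration, and the rounding helper is the simplified form of the spec's Roundup function.

    import math

    # Metric weights from the CVSS v3.1 specification.
    AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
    AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
    PR_U = {"N": 0.85, "L": 0.62, "H": 0.27}           # Privileges Required, Scope unchanged
    PR_C = {"N": 0.85, "L": 0.68, "H": 0.50}           # Privileges Required, Scope changed
    UI = {"N": 0.85, "R": 0.62}                        # User Interaction
    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality/Integrity/Availability

    def roundup(x: float) -> float:
        # Round up to one decimal place, per the spec (simplified).
        return math.ceil(x * 10) / 10

    def base_score(vector: str) -> float:
        # Parse "CVSS:3.1/AV:N/AC:L/..." into a metric -> value map.
        m = dict(part.split(":") for part in vector.removeprefix("CVSS:3.1/").split("/"))
        changed = m["S"] == "C"
        pr = (PR_C if changed else PR_U)[m["PR"]]
        exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * pr * UI[m["UI"]]
        iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
        if changed:
            impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        else:
            impact = 6.42 * iss
        if impact <= 0:
            return 0.0
        raw = 1.08 * (impact + exploitability) if changed else impact + exploitability
        return roundup(min(raw, 10))

    # Network-reachable, low-complexity, no-privileges RCE with full impact: 9.8.
    print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))

Every number in the output comes from the attribute choices; there is no free-floating "linear score" divorced from them.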
Less than 24 hours into this defunding fiasco, and we now have two problems:

1. All the original problems that exist within CVE.

2. "Let's just reinvent the wheel!"

Yes, you have a dev background, which entitles you to an opinion, and you have good intentions. This road is noble. However, the crux of this disaster is not technical, it's political. Maybe reinventing the wheel will be a huge success. Maybe it can wear the crown of free and open source for a while. But it's much more likely this fails as things become difficult to maintain, and you become tired, or poor, and are forced to stop, leaving nobody, or even worse, an enemy (this is an internationally critical database) in control of the database. So let's focus on solving the original funding disaster instead of jumping to forking and fracturing as a knee-jerk solution.
So if I understand it correctly, the blog author proposes to create a professional certification, require companies that produce software to have at least one such certified individual responsible for reporting vulnerabilities in the company's software, and create authorities that issue such certifications, complete with training and compliance enforcement.

And all this to fix a broken CVE system? I assume the friction this generates would have a bigger negative impact on the overall ecosystem than the non-optimal CVE system that exists right now.
I feel like requiring software "engineers" to be actual capital-E Engineers would fix a lot of problems in our industry. You can't build even a small bridge without a PE, because what if a handful of people get hurt? But your software that could cause harm to millions by leaking their private info? Sure, whatever, some dork fresh out of college is good enough for that.

And in the current economic climate, even principled and diligent SEs might be knowingly putting out broken software because the bossman said the deadline is the end of the month, and if I object, he'll find someone who won't. But if SEs were PEs, they would suddenly have standing, and indeed an obligation, to push back on insecure software and practices.

While requiring SEs to be PEs would fix some problems, I'm sure it would also cause some new ones. But to me, a world where engineers have the teeth to push back against unreasonable or bad requirements sounds fairly utopian.
><i>"So yes, I get it: we shouldn’t trust companies, or even FOSS projects, to self-report.<p>Unless…what if we made penalties so large for not reporting, and for getting it wrong, that they would fall over themselves to do so?"</i><p>We know this doesn't work, and author admits as much.<p>However, the proposed solution is to add another cert into the mix. But it's not clear how this designation would be applied globally, with agreement across the globe on the requirements, punishments, etc. Not to be rude to the author, but it sort of seems like they forgot that not all software is developed in the US. (Not to mention, I really don't want <i>another</i> cert)
> This idea I had months ago will surely fix all the problems I just started thinking about today.

I very rarely find myself agreeing with a take this author has made, to the point where I almost said "never agree." But I always read through, because even though the suggestions are always surface level, they're also always well written and well expressed. I like the help in reasoning through my own thoughts, and his musings always give a good place to start explaining and correcting from.

I hate, with a passion, CVE farmers, because so much of it is noise these days. But everyone complaining^1 so far has completely missed the forest for the trees. The reason everyone still uses CVEs is that the value of having a CVE was never knowing the severity. (The difference between "unauthenticated remote arbitrary code execution" and "might create a partial denial of service in some rare and crafted cases" is 9.9 versus 9.3.) The value has always been the complete lack of ambiguity when discussing some defect with a security implication. You don't really understand something if you can't explain it, and you can't explain it if you don't have the words or names for it. CVE farming is a problem, but everyone uses CVEs because they make defects easier to understand and talk about without misunderstanding or miscommunication.

I'd love to see whatever replaces CVEs include a superset, where every CVE is also a CRE, with Vulnerability replaced by Risk, and a CVE assigned only when [handwavey answer about project owner agreement]. That would ideally preserve the value we get from the current system, while allowing the incremental improvement suggested by the original comment this essay is responding to (see the sketch at the end of this comment). I would like my CVEs to be exclusively vulns that are significant. But even more than I want that, I don't want to have to argue about where the bar for "significant" belongs!

No company *wants* to manage CVEs, and nothing is going to meaningfully change that in the short term, which means no one is looking for a better CVE system. Everybody wants the devil they know. I have complaints about the CVE system, but I don't want to try to replace it without accounting for how it's used, in addition to how it works (and breaks).

^1: It's still early, and the people rushing to post are often only looking at the surface level. I'm excited to hear deeper, more reasoned thoughts, but that's likely to take more than 24 hours.
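A minimal sketch of that CRE-superset idea, assuming hypothetical names and fields (RiskRecord, cre_id, the significance bar) invented purely for illustration:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class RiskRecord:
        cre_id: str                   # stable, unambiguous name to discuss the defect by
        summary: str
        cve_id: Optional[str] = None  # assigned only once the "significant vuln" bar is met

        @property
        def is_cve(self) -> bool:
            # The superset property: the CVEs are exactly the CREs with a CVE ID attached.
            return self.cve_id is not None

    noise  = RiskRecord("CRE-2025-0001", "partial DoS in some rare and crafted cases")
    signal = RiskRecord("CRE-2025-0002", "unauthenticated remote arbitrary code execution",
                        cve_id="CVE-2025-12345")
    assert signal.is_cve and not noise.is_cve

Everything keeps an unambiguous name to talk about, but only the records that clear the bar carry the CVE label.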
Related ongoing threads:

*CVE Foundation* - https://news.ycombinator.com/item?id=43704430

*CVE program faces swift end after DHS fails to renew contract [fixed]* - https://news.ycombinator.com/item?id=43700607
Funding to Mitre's CVE program was just reinstated:

https://www.forbes.com/sites/kateoflahertyuk/2025/04/16/cve-program-funding-cut-what-it-means-and-what-to-do-next/
I tried to read this with an open mind, but I think the poster is talking about a lot of problems that are adjacent to CVE (coordinated vulnerability disclosure and vulnerability scoring, primarily) while missing the primary value that CVE provides (a consistent vocabulary to talk about vulnerabilities and a centralized clearing house for distributing vulnerability data), and as a result their proposed solution misses the mark.

The article quotes a lobsters post approvingly:

> 1. We end up with a system like CVE where submitters are in charge of what’s in the database other than egregious cases. This is what MITRE supported as the default unless someone became a CNA, something they’ve been handing out much more freely over the last few years to address public scrutiny.
>
> 2. We end up with a system not like CVE where vendors are in charge of what’s a vulnerability. This seems to be what Daniel and others want.
I guess the first problem with this is that the CNA system very much puts vendors in de facto control of what goes in the database. But this description of CVE-like systems is missing the forest for the trees, in that the alternative to CVE is not one of the two scenarios described, but the wild-west situation that existed before CVE, where vulnerability info came from CERT, from Bugtraq/Full Disclosure/etc., and from vendors, often using wildly different language to describe the same thing.

The whitepaper[0] that led to the CVE system described a pretty typical scenario:

> Consider the problem of naming vulnerabilities in a consistent fashion. For example, one vulnerability discovered in 1991 allowed unauthorized access to NFS file systems via guessable file handles. In the ISS X-Force Database, this vulnerability is labeled nfs-guess [8]; in CyberCop Scanner 2.4, it is called NFS file handle guessing check [10]; and the same vulnerability is identified (along with other vulnerabilities) in CERT Advisory CA-91.21, which is titled SunOS NFS Jumbo and fsirand Patches [3]. In order to ensure that the same vulnerability is being referenced in each of these sources, we have to rely on our own expertise and manually correlate them by reading descriptive text, which can be vague and/or voluminous.
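A toy illustration of the correlation problem the whitepaper describes, using the three real names from the quote and a shared identifier invented here for the example:

    # Three databases, three names for the same 1991 NFS file-handle bug.
    # Keying each entry on one shared identifier (invented as "VULN-1991-0001")
    # turns correlation into a lookup instead of a research project.
    SHARED_ID = "VULN-1991-0001"

    iss_xforce = {"nfs-guess": SHARED_ID}
    cybercop   = {"NFS file handle guessing check": SHARED_ID}
    cert       = {"SunOS NFS Jumbo and fsirand Patches": SHARED_ID}

    # Without the shared ID, you are stuck fuzzy-matching prose descriptions:
    def same_vuln_by_text(name_a: str, name_b: str) -> bool:
        return "nfs" in name_a.lower() and "nfs" in name_b.lower()  # hopeless in general

    # With it, equality of identifiers settles the question:
    assert iss_xforce["nfs-guess"] == cybercop["NFS file handle guessing check"] \
        == cert["SunOS NFS Jumbo and fsirand Patches"]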
That consistent naming, and a central clearing house, are what is at stake if a system like CVE disappears, and I fail to see how any professional licensing scheme -- unless the licensing body replicated the CVE system or something like it -- would do anything to address that.

parliament32's comment in this thread perfectly addresses the issues with the article's treatment of CVSS, so I'll not rehash that here, other than to say that the actual score output of CVSS is bad and the people who designed it should feel bad.

0 - https://www.cve.org/Resources/General/Towards-a-Common-Enumeration-of-Vulnerabilities.pdf