"People misapply CVSS" is the crux of the post and all the criticisms (even the ones labeled as something else).<p>The other criticisms section starts with the "You're doing it wrong" commentary and then moves on to discuss two other groups saying what boils down to "You're doing it wrong and the metric is bad because it encourages you to do it wrong", which as a way of demonstrating diversity of opinion is entertaining at least.<p>CVSSv3.1 as a metric is not designed to have a uniform distribution of possible values from 0.1 -> 10.0 and it should not generally be a goal to develop a scoring system that does. It is designed solely to answer the questions of "which issue is more severe" when comparing different issues and to then help direct and prioritize fix work. It is not perfect at this but it is superior to other systems out there, especially when taking the pure severity of a given vulnerability in isolation.<p>I do get that people really do try to sell the idea that it's an infallible metric and that it means something substantially more than it does. It also gets confused often as "X is riskier because its score is higher", which is obviously wrong. If you have an authentication-related product, it's obviously more damaging to discover certain categories of information leakage than it may be to find cross-site scripting issues in general.<p>I think it is correct for a change in scope to have a much more outsized impact on the final score, something the author seems to sort of presume is wrong (referring to it as the "villain" at one point) without really explaining why they believe it is wrong. A scope change essentially means lateral movement to other systems rather than the compromise of a single piece of software.<p>Could a better metric be designed? Sure. I'd like to see some additional degrees of user interaction being accounted for, as just one example. 
The concept of attack vectors being Network, Adjacent, Local, or Physical could use some more fleshing out for the modern age, for another.<p>Does that mean alternative approaches are better? Not in my experience. Every alternative I've encountered boils down to "we made our own system, we don't publish the calculations, and lots more stuff comes back as critical impact and risk" when the reports arrive. I've literally had third-party pentest teams try to sell me an information exposure (server IPs showing up in a log) as a High, because they used their own metric.<p>I'd argue that CVSSv3.1 does a good enough job at what it is intended to do, and that's why so many people have accepted it as a standard.
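<p>To make the scope point concrete, here's a rough sketch of the v3.1 base-score arithmetic (weights and the Roundup function as published in the FIRST.org specification; the parsing helper is my own), showing how flipping only the Scope metric moves the score:

```python
# Minimal sketch of the CVSS v3.1 base-score formula, using the metric
# weights from the FIRST.org v3.1 specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}
# Privileges Required is the one weight that itself depends on Scope.
PR = {"U": {"N": 0.85, "L": 0.62, "H": 0.27},
      "C": {"N": 0.85, "L": 0.68, "H": 0.50}}

def roundup(x):
    """Spec-defined rounding: smallest one-decimal value >= x, float-safe."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector):
    """Score a base vector like 'AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H'."""
    m = dict(p.split(":") for p in vector.split("/"))
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    if m["S"] == "U":
        impact = 6.42 * iss
    else:  # Scope: Changed uses a different, steeper impact curve
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    expl = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["S"]][m["PR"]] * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    if m["S"] == "U":
        return roundup(min(impact + expl, 10))
    # Changed scope also gets a 1.08 multiplier on the combined score.
    return roundup(min(1.08 * (impact + expl), 10))

# Same vulnerability, only Scope flipped: a full point of difference.
print(base_score("AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:N"))  # 5.4 (Medium)
print(base_score("AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N"))  # 6.4 (Medium, nearly High)
```

The scope-changed branch is why only-scope-changed vectors can reach 10.0 at all: the classic AV:N/AC:L/PR:N/UI:N/C:H/I:H/A:H vector scores 9.8 with Scope Unchanged and 10.0 with Scope Changed.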