Metrics categorisation

Hello, complete noob here. I wish to learn more about the Security/Vulnerability metrics in SonarQube. From my understanding, vulnerabilities are rated from A (good) to F (bad), but what I’m trying to understand is what is acceptable and what needs remediating. I guess my question is directed to those who remediate vulnerabilities: which categories do you live with, and which need fixing? Hope this makes sense; thank you in advance.

Hello @ZombT,

Welcome to the community.

SonarSource uses the industry-standard CVSS scoring methodology. We have SLAs for fixing vulnerabilities, and the security team monitors SLA compliance closely.

Many of the vulnerabilities found are in modules that our software does not reference, and these can be ignored. But we do not have a threshold for vulnerability acceptance; we fix everything, as it’s easier than arguing over risk. There are occasional transient exceptions.

We are currently rolling out a new SCA tool, and the objective is to make the vulnerabilities public, since many customers request this type of information just as you have.


Hi,

In addition to Mark’s thorough & excellent answer about our own internal security practices, I’d like to add some detail about how you can interpret the ratings SonarQube raises on your project, in case it’s relevant.

Just to be clear, no vulnerabilities are “good” :joy:. It’s just that some are worse than others. Vulnerabilities raised in your code are given one of four severities: Blocker, Critical, Major, Minor.
Those severities determine the Security Rating on your project. Per the docs:

A = 0 Vulnerabilities
B = at least 1 Minor Vulnerability
C = at least 1 Major Vulnerability
D = at least 1 Critical Vulnerability
E = at least 1 Blocker Vulnerability

To dig into that a little, you can have only 1 Vulnerability in your project, but if it’s a Blocker, you’re going to get an E rating. Conversely, you can have 500 Minor Vulnerabilities and get a B.
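
If it helps to see that “worst issue wins” logic spelled out, here’s a little sketch of my own in Python (an illustration of the rating table above, not SonarQube’s actual implementation):

```python
# Illustration only: the rating table above, expressed as "worst issue wins".
SEVERITY_ORDER = ["MINOR", "MAJOR", "CRITICAL", "BLOCKER"]
RATING_BY_WORST = {None: "A", "MINOR": "B", "MAJOR": "C", "CRITICAL": "D", "BLOCKER": "E"}

def security_rating(open_vulnerability_severities):
    """Project rating, given the severities of its open Vulnerabilities."""
    worst = None
    for sev in open_vulnerability_severities:
        if worst is None or SEVERITY_ORDER.index(sev) > SEVERITY_ORDER.index(worst):
            worst = sev
    return RATING_BY_WORST[worst]

print(security_rating(["BLOCKER"]))      # one Blocker -> "E"
print(security_rating(["MINOR"] * 500))  # 500 Minors  -> still only "B"
```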

In terms of what you need to jump on fixing and what you can maybe live with for a while, I’d handle it worst-first: start with any Blockers, then any Criticals. After that, I’d say you can slow down, take a breath, and assess any Majors.

 
HTH,
Ann


Great explanation, Ann, thanks. I suppose I should replace the word vulnerabilities with findings. :sweat_smile:

Very detailed explanation, Mark, thanks.

Hello Mark: you mentioned that SonarSource uses the industry-standard CVSS scoring methodology. Does that mean you can get all the identified vulnerabilities to come with a CVSS score, such as:

0.1 – 3.9, Low
4.0 – 6.9, Medium
7.0 – 8.9, High
9.0 – 10.0, Critical
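
(For what it’s worth, those bands are the standard CVSS v3.x qualitative ratings; a quick sketch of the banding, purely for illustration:)

```python
def cvss_band(base_score: float) -> str:
    """Standard CVSS v3.x qualitative rating for a base score (0.0-10.0)."""
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"

print(cvss_band(9.8))  # -> "Critical"
```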

The reports I have seen from Sonar seem to come with a slightly different classification: Blocker, Critical, Major, Minor, Info.

Is that maybe a configuration option?

Thanks in advance for any insight.


I am also looking for clarification on how the SonarQube classification relates to CVSS.

Hi,

Welcome to the community, @dmd and @Cor_Zijlstra!

We did not create our severity scheme with an external standard in mind. Nor do we categorize rules (and thus, the issues they raise) with that in mind.

That said, it would probably be fair in most cases to map CVSS’s Critical to our Blocker, and so on.
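
Spelled out as a rough rule of thumb (this is just my back-of-the-envelope correspondence, not an official SonarSource mapping), that would look something like:

```python
# Rough, unofficial rule of thumb only -- not an official SonarSource mapping.
CVSS_RATING_TO_SONAR_SEVERITY = {
    "Critical": "Blocker",
    "High": "Critical",
    "Medium": "Major",
    "Low": "Minor",
}

print(CVSS_RATING_TO_SONAR_SEVERITY["High"])  # -> "Critical"
```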

Does that help?

 
Ann

Not really :wink:
There must be logic in your software that determines the severity. I can only imagine one mechanism, and that is using the CVSS score; what else could it be…
If that’s the case, which CVSS ratings correspond to your classification?

Hi,

The logic is purely human, assigned per-rule and not per-issue, and happens before the rules are ever implemented. The ‘Default severities’ section in the docs may help, although to be honest I’m not certain it’s up to date with today’s practices.

 
Ann

 
Edited to fix ‘Default severities’ link

Thank you Ann, appreciate the answers!

I believe that Mark’s answer on CVSS at the top of this thread is about how the SonarSource team works with security (vs. what SonarCloud or SonarQube users can expect from the tool).

My question was more about the second aspect: can we (Sonar users) expect Sonar (the tool) to detect CVEs in our code and assign CVSS scores to them? From your last answer I understand this is not something we should expect; thanks for the clarity.

With that clarified, do you know if Sonar is planning any type of Software Composition Analysis (SCA) tool? Something that would rate the 3rd-party libraries one includes in their code against online CVE databases (and list matching CVSS scores)? Or is anyone else in the Sonar ecosystem offering adjunct tools or plugins that would work well with Sonar for consolidated reporting?

Thanks again

Dear Ann,
Thanks for the clarification; just to be sure we are on the same page:
Do you mean that every single vulnerability found in the CVE database is classified manually with a Sonar severity? Please refer to the attached screenshot: I’ve marked the Sonar severity (Blocker) and the CVSS score (9.8)…

Hi,

I think I’ve heard this talked about in a misty-future kind of context, but no current, firm plans, I believe.

No, that’s not at all what I mean.

SonarQube raises the issues it raises, and one of 5 severities is assigned to each issue.

The issues in your screenshot are not raised directly by SonarQube, but by a 3rd-party tool and imported into SonarQube. If you have questions about those specific issues, you should direct them to the tool/plugin provider.
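
For a bit of context on how that typically works: an external tool can hand its findings to SonarQube as a generic issue report, and the severity in that report comes from the external tool, not from us. A rough sketch of such a report follows (it assumes the generic issue import format; the tool names and ids are hypothetical placeholders, and your tool’s own docs are the authority):

```python
import json

# Sketch of an external-issue report, assuming SonarQube's generic issue
# import format. The engine/rule ids below are hypothetical placeholders,
# and the severity is chosen by the external tool, not by SonarQube.
report = {
    "issues": [
        {
            "engineId": "my-dependency-scanner",   # hypothetical engine id
            "ruleId": "CVE-XXXX-XXXXX",            # hypothetical rule id
            "severity": "BLOCKER",                 # decided by the external tool
            "type": "VULNERABILITY",
            "primaryLocation": {
                "message": "Vulnerable dependency detected",
                "filePath": "pom.xml",
                "textRange": {"startLine": 1},
            },
        }
    ]
}

with open("external-issues.json", "w") as f:
    json.dump(report, f, indent=2)

# Then, roughly: sonar-scanner -Dsonar.externalIssuesReportPaths=external-issues.json
```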

 
HTH,
Ann