SonarQube: Hotspots feature parity with issues

It would be great if Security Hotspots in SonarQube had feature parity with Issues.

Two examples:

  1. While it’s possible (and even easy) for me to find all projects that have at least 1 Vulnerability (on the SonarQube home page showing all projects, I click on Security Rating → B), it appears to be impossible to find all projects that have at least 1 security hotspot (I can click on Security Review → B, but that does not show me the projects that have 80-99% of the hotspots reviewed).
  2. While it’s possible to list and filter all types of Issues across projects (I simply click on Issues in the main site navigation, and then the filtering is accessible from the left-hand navigation), it appears to be impossible to see and filter all Security Hotspots across projects.

Perhaps the features are there, and I just don’t see them?

FYI: We’re using SonarQube Developer Edition 8.7, and we’re planning an upgrade to 8.8 or 8.9 soon.

Hello @ernstdehaan,

Thanks for the feedback. I would like to get more details about your use case.

Security Hotspots target developers: we expect them to review Security Hotspots daily, learning from the product and the documentation we provide, so that the overall security of the code is strengthened.

Would you be able to explain in which functional context you need to find all the projects in your company that have at least 1 Security Hotspot? Same question for “see and filter all Security Hotspots across projects”.

Thanks
Alex

Hello @Alexandre_Gigleux , as @ernstdehaan has not provided details on his use case, I’ll share mine…

My understanding of

Security Review Rating (security_review_rating)

The Security Review Rating is a letter grade based on the percentage of Reviewed (Fixed or Safe) Security Hotspots.

A = >= 80%

But if there are zero Security Hotspots, it’s considered (the equivalent of) 100% reviewed and gets the letter “A”.
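Expressed as code, a minimal sketch of that rating (the >= 80% → “A” threshold and the zero-hotspot rule come from the excerpt above; the B–E cut-offs are my assumption based on the SonarQube documentation):

```python
def security_review_rating(reviewed: int, to_review: int) -> str:
    """Map Security Hotspot review progress to a letter grade.

    A project with zero Security Hotspots counts as fully reviewed.
    The >= 80% -> "A" threshold is from the documentation quoted above;
    the lower cut-offs are assumptions based on the SonarQube docs.
    """
    total = reviewed + to_review
    pct = 100.0 if total == 0 else 100.0 * reviewed / total
    if pct >= 80:
        return "A"
    if pct >= 70:
        return "B"
    if pct >= 50:
        return "C"
    if pct >= 30:
        return "D"
    return "E"
```

Note that both (reviewed=0, to_review=0) and (reviewed=24, to_review=6) produce an “A”, which is exactly the ambiguity at issue here.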

In a Compliance and Oversight role, we want to be able to find which, if any, of our thousands of projects are introducing Security Hotspots, regardless of whether they have been reviewed. The reporting model seems to assume diligent developers. If we had only that kind, I probably wouldn’t be worried much, or at all, since they’d probably not be introducing them to begin with!

Right now, we can’t tell, based on the letter grade, which teams are producing clean code and which are generating issues but reviewing and resolving them right away.

Unlike the other measures, Security Review is the only one that filters on an indirect calculation rather than the raw data (the number of Hotspots).

Even worse, the calculation is purely numeric (Reviewed Hotspots / (To_Review Hotspots + Reviewed Hotspots)) and ignores the recently introduced “Review Priority”.

I want to be able to identify and focus on the team that just introduced 24 HIGH Security Hotspots and not the team that has 180 LOW outstanding, especially if both teams have reviewed only 10 of them.

There is no way to see this information across all projects as one can with Vulnerabilities (which is what Reviewed Hotspots become when marked unsafe), or to see what priority/severity they are.

I am certain I submitted a feature request on the priority aspect, but can’t locate it now.

Hello,

Thanks for the detailed answer. I read it multiple times to be sure I understand what you want to achieve.

On my side, I don’t see a problem with teams generating Security Hotspots and reviewing them right away. It’s exactly what we expect developers to do: take charge of the security review activity to own the security aspect of the software, rather than relying on another team to do it. At the beginning of this journey, developers will write code on which SonarQube raises Vulnerabilities and Security Hotspots, and the more they use it, the fewer findings will be raised, because they will have learned from the feedback and documentation provided.
What is important to me is to make sure a change is happening in how developers consider Security Hotspots, and this is why the Security Hotspots Review Rating was introduced: to measure whether the process of reviewing the Hotspots is being followed correctly. If today it’s “E”, the goal is to move it to “D”, then “C”, and so on. If it’s always “A”, it means the team is used to the process and everything is under control.

Today, with the Portfolio features, you can create a Portfolio containing all your projects and see which projects are having trouble following the process by looking at the value of the Security Review rating. I agree you won’t be able to distinguish between the projects with 80% and 100% rates, but do you need that now? Don’t you have enough to do with all the projects with a rating lower than “A”?
Did you try to rely on that? Why is this not working for you?

Alex

Sorry for the late response.

The use case that Ian describes is very similar to mine. I’m in a Governance role with a SonarQube instance containing hundreds of projects. I check the SonarQube instance a couple of times a week, to understand:
(1) which projects have any Blocker or Critical issues, and
(2) which projects have any Vulnerabilities, and
(3) which projects have any unreviewed Security Hotspots.

The first two are easy to answer; the last one is very difficult (especially if a project has >80% of its Security Hotspots reviewed, but <100%).

My current workaround is to fetch the information via the API and publish it to Datadog.
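For anyone wanting to replicate that kind of workaround, here is a minimal sketch, assuming a SonarQube user token with “Browse” permission on the projects. It uses the `api/hotspots/search` Web API endpoint and reads only `paging.total`; the helper names are illustrative, not part of any SonarQube API:

```python
import base64
import json
import urllib.parse
import urllib.request


def to_review_count(base_url: str, token: str, project_key: str) -> int:
    """Number of TO_REVIEW Security Hotspots for one project.

    Calls GET api/hotspots/search with a page size of 1, since we only
    need the paging.total field of the response.
    """
    query = urllib.parse.urlencode(
        {"projectKey": project_key, "status": "TO_REVIEW", "ps": 1})
    req = urllib.request.Request(f"{base_url}/api/hotspots/search?{query}")
    # A user token is sent as the Basic-auth username with an empty password.
    cred = base64.b64encode(f"{token}:".encode()).decode()
    req.add_header("Authorization", f"Basic {cred}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["paging"]["total"]


def projects_with_unreviewed(counts: dict) -> list:
    """Given {project_key: to_review_count}, list offenders, worst first."""
    return sorted((k for k in counts if counts[k] > 0),
                  key=lambda k: -counts[k])
```

The project keys to loop over can come from `api/projects/search` (or a maintained list), and the resulting offender list can then be pushed to whatever dashboard you use, such as Datadog.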

BTW, we are currently on SonarQube 9.0, but not much has changed in this area.