We had configured a quality gate with 8 conditions in total (4 conditions on new code and 4 conditions on overall code).
But when we pull that information into GitLab via the SonarQube API call xxx/api/qualitygates/project_status?projectKey, the response only shows 7 metrics; I believe ‘new_security_hotspots_reviewed’ is the one missing.
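For reference, the evaluated conditions come back under `projectStatus.conditions` in the JSON. A minimal sketch of reading them (the payload below is illustrative, not the actual response from our project):

```python
import json

# Illustrative payload in the shape returned by api/qualitygates/project_status;
# the metric keys and values here are made up for this sketch.
payload = json.loads("""
{
  "projectStatus": {
    "status": "ERROR",
    "conditions": [
      {"status": "OK",    "metricKey": "new_coverage",
       "comparator": "LT", "errorThreshold": "80", "actualValue": "85.0"},
      {"status": "ERROR", "metricKey": "new_duplicated_lines_density",
       "comparator": "GT", "errorThreshold": "3",  "actualValue": "4.2"}
    ]
  }
}
""")

# Collect the metric keys the server actually evaluated and returned.
returned_metrics = [c["metricKey"] for c in payload["projectStatus"]["conditions"]]
print(len(returned_metrics), returned_metrics)
```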
What’s probably going on is that if you have 0 new security hotspots, it’s not possible to calculate what percentage of new security hotspots are reviewed, because you’d be dividing by zero.
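In other words, the percentage is only defined when there is at least one new hotspot. A quick sketch of that calculation (`hotspots_reviewed_pct` is a hypothetical name, not SonarQube’s actual implementation):

```python
def hotspots_reviewed_pct(reviewed: int, to_review: int):
    """Percentage of new security hotspots reviewed, or None when
    there are no new hotspots at all (the ratio is undefined)."""
    total = reviewed + to_review
    if total == 0:
        return None  # 0/0: nothing to divide by, so no measure is produced
    return round(100.0 * reviewed / total, 1)

print(hotspots_reviewed_pct(3, 1))   # 75.0
print(hotspots_reviewed_pct(0, 0))   # None: metric not computable
```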
I don’t disagree that it’s confusing, and I wonder what you would expect to appear in the API response for measures that can’t be calculated in a situation like the one above.
The metric ‘new_security_hotspots_reviewed’ is given as the percentage of reviewed security hotspots on new code. I assumed that the success rate is 100.0 % if:
a) no security hotspots need to be reviewed, or
b) all newly raised security hotspots were reviewed.
Is that not the case?
My assumption is:
If I configure 8 conditions, I want to get 8 measurements for those metrics (a value could also be 0), each including the status OK or ERROR for the metric.
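One way to make the gap visible is to diff the configured metric keys against what the API returned. A small sketch (the configured set below is hypothetical, standing in for the 8 conditions):

```python
# Hypothetical set of the 8 configured condition metrics (4 new-code, 4 overall).
configured = {
    "new_coverage", "new_duplicated_lines_density",
    "new_security_hotspots_reviewed", "new_maintainability_rating",
    "coverage", "duplicated_lines_density",
    "security_rating", "reliability_rating",
}

# Simulated metric keys returned by api/qualitygates/project_status,
# reproducing the observed behaviour: one condition is absent.
returned = configured - {"new_security_hotspots_reviewed"}

# Conditions that were configured but not evaluated/returned.
missing = sorted(configured - returned)
print(missing)
```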
Did you add the new_security_hotspots_reviewed condition after the last project analysis?
The way api/qualitygates/project_status works is that it returns a “snapshot” of the quality gate evaluation from when the project was last analyzed (OK, it’s actually slightly more complicated, but for the sake of this illustration let’s assume that’s how it works). If you update your Quality Gate definition (e.g., add or remove conditions), this won’t be reflected in api/qualitygates/project_status until the project is analyzed again.
On my side, when I:
1. Create a Quality Gate just like the one in your screenshot, but omit the new_security_hotspots_reviewed condition,
2. Assign it to a project and analyze the project,
3. Then update the Quality Gate to add the missing new_security_hotspots_reviewed condition, but don’t reanalyze the project,
and then fetch the JSON payload again, I see that the new condition is missing (as in your example).
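The steps above boil down to the snapshot behaviour described earlier. A toy model of the idea (not SonarQube’s actual storage, just an illustration):

```python
# Toy model: project_status serves the conditions captured at the last
# analysis, not live edits to the Quality Gate definition.
gate = ["new_coverage", "new_duplicated_lines_density", "new_maintainability_rating"]
snapshot = list(gate)                          # analysis stores a snapshot of the gate

gate.append("new_security_hotspots_reviewed")  # edit the gate afterwards
# No re-analysis yet, so the snapshot (what project_status returns) is unchanged.
missing_before = "new_security_hotspots_reviewed" not in snapshot

snapshot = list(gate)                          # re-analyze: snapshot is refreshed
present_after = "new_security_hotspots_reviewed" in snapshot
print(missing_before, present_after)
```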