SonarQube Clean As You Code and Quality Gates

I’m using SonarQube Enterprise 8.9.1 and I’ve got a question regarding the Clean As You Code approach and Quality Gates, for which I’ve been unable to find a solution.

I’ll start by trying to explain what my overall objective is and I hope it makes sense.

I have branch analysis set up, and what I would like to accomplish is a situation where the new code on each branch is cleaner than its parent or previous version.
I’ve set my New Code definition to Previous Version, but this leads me to a question.

I can set my Quality Gate to fail if the number of Bugs is greater than 10, which means that with each new scan, as long as I have fewer than 10 bugs, my Quality Gate passes.

But let’s assume I have run 10 builds or scans on a particular branch, each with 4 bugs, and then build number 11 has 9 bugs.
Build number 11 will pass because, although it contains 9 bugs, it has not breached the Quality Gate threshold of 10 bugs; yet it has introduced 5 more bugs than builds 1-10.

Ideally, I would like build 11 to fail, as it introduces more bugs than the previous builds.
This way, each new scan is either equal to or cleaner than the previous version, ultimately leading to a point where there are few or no bugs.

I hope this makes sense.

My Question:
Is there a way to ensure that new code is always cleaner than the previous version and doesn’t introduce any new bugs or security vulnerabilities?

Can this somehow be accomplished using Quality Gates, or is there some other mechanism in SonarQube for this, or does it not exist at all?

Thank you.

I’m using SonarQube Enterprise 8.9.1 and I’ve got a bit of a puzzle here.
I’m rephrasing an earlier question in the hope that this version is much clearer than the last.

When a Quality Gate is defined in SonarQube, it’s set with absolute values, e.g. fail if the number of bugs is greater than 10.

So, as long as there are fewer than 10 bugs, the Quality Gate passes.

What then happens in this scenario?

Scan No1 has 4 bugs: Quality Gate PASSES

Scan No2 has 4 bugs: Quality Gate PASSES

Scan No3 has 9 bugs: Quality Gate PASSES

Technically, Scan No3 has introduced 5 additional bugs but still passes, because the threshold for failing the Quality Gate is an absolute value of 10 bugs. This potentially means that new bugs and vulnerabilities can keep being added as long as the total doesn’t breach the predetermined threshold.

Is there a way to fail the Quality Gate based on metrics extracted from a previously successful build?

So…

Scan No1 has 4 bugs: Quality Gate PASSES

Scan No2 has 4 bugs: Quality Gate PASSES

Scan No3 has 9 bugs: Quality Gate FAILS

Scan No4 has 3 bugs: Quality Gate PASSES

Scan No5 has 4 bugs: Quality Gate FAILS

Scan No6 has 2 bugs: Quality Gate PASSES

This way, code progressively becomes cleaner: each successful build is equal to or better than, but never worse than, the previous one.
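
To make the idea concrete, here’s a rough sketch of that ratchet as an external check (not something SonarQube does natively; SONAR_HOST, SONAR_TOKEN, and SONAR_PROJECT are placeholder environment variables). It pulls the current bug count from api/measures/component, compares it with a baseline stored from the last passing scan, and fails if the count grew:

import base64
import json
import os
import sys
import urllib.request

HOST = os.environ.get("SONAR_HOST", "https://sonarqube.example.com")  # placeholder
TOKEN = os.environ["SONAR_TOKEN"]      # a user token with Browse permission
PROJECT = os.environ["SONAR_PROJECT"]  # e.g. "my-project-key"
BASELINE = "bug_baseline.json"         # written after the last passing scan

def get_bug_count():
    # api/measures/component returns the current value of the "bugs" metric.
    url = f"{HOST}/api/measures/component?component={PROJECT}&metricKeys=bugs"
    req = urllib.request.Request(url)
    # SonarQube accepts a token as the username of HTTP basic auth.
    auth = base64.b64encode(f"{TOKEN}:".encode()).decode()
    req.add_header("Authorization", f"Basic {auth}")
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return int(data["component"]["measures"][0]["value"])

current = get_bug_count()
previous = None
if os.path.exists(BASELINE):
    with open(BASELINE) as f:
        previous = json.load(f)["bugs"]

if previous is not None and current > previous:
    print(f"FAIL: bugs went from {previous} to {current}")
    sys.exit(1)

# The ratchet only ever tightens: record the new (equal or lower) count.
with open(BASELINE, "w") as f:
    json.dump({"bugs": current}, f)
print(f"PASS: bugs = {current} (previous baseline: {previous})")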

Thanks.

Hi,

I’ve combined your two threads since they were on the same topic. In general we try to keep it to one topic per thread and one thread per topic. 🙂

What we use internally is a (stricter) version of what we recommend publicly and what we set as the default: a Quality Gate based on New Code (✅ you’re doing that) that’s concerned with Bug and Vulnerability ratings (i.e. severity) rather than counts.

Under your scenario I could add 9 new Blocker Bugs and it would be okay - or at least the Quality Gate would pass. But it would fail for 11 new Info Bugs. I doubt that’s actually what you want.
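
For reference, rating-based conditions like that can be added to a gate through the Web API as well as the UI. A rough sketch (the gate name "MyGate" is hypothetical, and depending on your version the endpoint takes gateName or gateId, so check the api/qualitygates docs on your instance):

import base64
import os
import urllib.parse
import urllib.request

HOST = os.environ.get("SONAR_HOST", "https://sonarqube.example.com")
TOKEN = os.environ["SONAR_TOKEN"]

def create_condition(**params):
    # POST api/qualitygates/create_condition adds one condition to a gate.
    data = urllib.parse.urlencode(params).encode()
    req = urllib.request.Request(f"{HOST}/api/qualitygates/create_condition", data=data)
    auth = base64.b64encode(f"{TOKEN}:".encode()).decode()
    req.add_header("Authorization", f"Basic {auth}")
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Fail the gate when the Reliability or Security rating on new code is
# worse than A (rating value 1). "MyGate" is a hypothetical gate name.
create_condition(gateName="MyGate", metric="new_reliability_rating", op="GT", error="1")
create_condition(gateName="MyGate", metric="new_security_rating", op="GT", error="1")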

At the same time, I recognize what you’re saying about the ability for problems to creep in under the radar over time. Personally, I call what you’re asking for “ratcheting conditions” - they can only get tighter, never looser - and you’re not the only one to have asked for them. So far the balance hasn’t tipped in favor of adding them, but I do raise the point internally each time it comes up.

 
Ann

Hi @ganncamp

Thanks a lot for responding.

Seeing as this functionality isn’t native to SonarQube, do you by any chance know if it might be possible to do this manually somehow?

I have a bit of a rogue idea, which is to

1. Generate a code report for every build by querying the /api/measures/component_tree endpoint, along these lines (SONAR_HOST and SONAR_TOKEN being my server URL and token):

curl -s -u "${SONAR_TOKEN}:" "${SONAR_HOST}/api/measures/component_tree?ps=100&s=qualifier,name&component=${Sonar_Project}&metricKeys=ncloc,bugs,vulnerabilities,code_smells,security_hotspots,coverage,duplicated_lines_density&strategy=children" | python3 -m json.tool > codeReport.txt

2. Store this code report in a central repository.
3. Compare the report from subsequent builds against the previous report and then pass or fail the build accordingly.

But this approach means that I would need a list of all the files being scanned by SonarQube, along with the bug types associated with them, so that I can write a custom script to compare reports and ensure that no new bugs or vulnerabilities are added per scan.
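
As a rough sketch of what the comparison in step 3 might look like (previousReport.json and currentReport.json are hypothetical file names; the structure follows the api/measures/component_tree response from step 1):

import json
import sys

# Hypothetical file names: the report stored from the last passing build,
# and the one produced by the current build (both saved from step 1).
with open("previousReport.json") as f:
    previous = json.load(f)
with open("currentReport.json") as f:
    current = json.load(f)

def bugs_per_file(report):
    # api/measures/component_tree nests measures under each component.
    # Note: with ps=100 you may need to page through larger projects.
    result = {}
    for comp in report.get("components", []):
        for measure in comp.get("measures", []):
            if measure["metric"] == "bugs":
                result[comp["key"]] = int(measure["value"])
    return result

old_bugs = bugs_per_file(previous)
new_bugs = bugs_per_file(current)

# Flag any file whose bug count went up since the previous report.
regressions = {key: (old_bugs.get(key, 0), count)
               for key, count in new_bugs.items()
               if count > old_bugs.get(key, 0)}

if regressions:
    for key, (before, after) in sorted(regressions.items()):
        print(f"REGRESSION: {key}: {before} -> {after} bugs")
    sys.exit(1)  # fail the build
print("No per-file bug regressions.")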

This leads to my question.

Is it possible to somehow extract this information from the SonarQube API?
And by information, I’m referring to just the list of files being scanned and the bugs, issues, and vulnerabilities associated with them.

I feel like I’m trying to build castles in the sky here but I just want to see what’s possible and what isn’t, before I call it a day on this task.

Thank you!

Hi,

The problem with dealing with raw counts is that they’re sums. If I add 4 new Critical Bugs but close 5 old Info Bugs, I’m ahead of the game as far as your numbers are concerned.

And to answer your questions:

I guess the per-file approach is to deal with the loophole I described above? It sounds like a lot of possibly error-prone work and a lot of duplicated storage.

Yes. You should be able to get the list of files/components. And then you should be able to run the issues search by component.
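
Roughly, and only as a sketch (the host/token/project values are placeholders; check the Web API docs on your own instance for exact parameters), those two calls could look like:

import base64
import json
import os
import urllib.parse
import urllib.request

HOST = os.environ.get("SONAR_HOST", "https://sonarqube.example.com")
TOKEN = os.environ["SONAR_TOKEN"]
PROJECT = os.environ["SONAR_PROJECT"]

def api_get(path, **params):
    url = f"{HOST}/{path}?{urllib.parse.urlencode(params)}"
    req = urllib.request.Request(url)
    auth = base64.b64encode(f"{TOKEN}:".encode()).decode()
    req.add_header("Authorization", f"Basic {auth}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# 1. List the files in the project (qualifier FIL = file; paging omitted).
files = api_get("api/components/tree", component=PROJECT,
                qualifiers="FIL", ps=500)["components"]

# 2. Search the open bugs and vulnerabilities per file.
#    (One call per file is slow on big projects; componentKeys also
#    accepts the project key if you'd rather fetch everything at once.)
for comp in files:
    issues = api_get("api/issues/search", componentKeys=comp["key"],
                     types="BUG,VULNERABILITY", resolved="false")["issues"]
    if issues:
        print(comp["key"])
        for issue in issues:
            print(f"  [{issue['severity']}] {issue['rule']}: {issue['message']}")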

But again, I would look at adjusting the strategy first. Maybe what you want is 0 Bugs in the New Code Period…? That would ensure that no matter how many old Bugs I fixed, I still couldn’t add new ones. As a coder, I wince a little at even suggesting that^, but I think that may be what you’re actually after.
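
And if you do add such a condition, your build side only needs to ask SonarQube for the gate verdict after analysis. A minimal sketch against the api/qualitygates/project_status endpoint (projectKey is a standard parameter, but do verify against your version’s docs):

import base64
import json
import os
import sys
import urllib.parse
import urllib.request

HOST = os.environ.get("SONAR_HOST", "https://sonarqube.example.com")
TOKEN = os.environ["SONAR_TOKEN"]
PROJECT = os.environ["SONAR_PROJECT"]

url = (f"{HOST}/api/qualitygates/project_status?"
       + urllib.parse.urlencode({"projectKey": PROJECT}))
req = urllib.request.Request(url)
auth = base64.b64encode(f"{TOKEN}:".encode()).decode()
req.add_header("Authorization", f"Basic {auth}")
with urllib.request.urlopen(req) as resp:
    status = json.load(resp)["projectStatus"]

print(f"Quality Gate: {status['status']}")
for cond in status.get("conditions", []):
    print(f"  {cond['metricKey']}: {cond['status']} "
          f"(actual {cond.get('actualValue')}, threshold {cond.get('errorThreshold')})")

if status["status"] != "OK":
    sys.exit(1)  # fail the CI job when the gate fails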

 
Ann