- which versions are you using: 8.9 LTS
- what are you trying to achieve: governance in SonarQube over the Won’t Fix and False Positive resolutions, because developers might abuse them.
I’m not sure what you mean by “maker and checker”
A few things to consider:
- We think developers generally want to do the right thing and write high-quality code. Don’t create a problem where there isn’t one!
- You can restrict who has permissions to mark an issue as false-positive or won’t fix by restricting the Administer Issues permission on a project to certain users
- You can also browse the global Issues tab and filter by the false-positive and won’t fix resolutions to see which issues have been marked this way.
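For teams that want to audit these resolutions regularly rather than browse the UI, the same filter is available through SonarQube’s documented `api/issues/search` Web API endpoint. Below is a minimal sketch; the server URL and project key are placeholders, and a real query would need a user token.

```python
# Sketch: build an api/issues/search query that returns only issues
# resolved as False Positive or Won't Fix, so they can be audited.
# The base URL and project key below are hypothetical placeholders.
from urllib.parse import urlencode


def build_audit_url(base_url: str, project_key: str) -> str:
    """Return the api/issues/search URL filtered to FP / Won't Fix issues."""
    params = {
        "componentKeys": project_key,            # limit to one project
        "resolutions": "FALSE-POSITIVE,WONTFIX", # documented resolution values
        "ps": 100,                               # page size
    }
    return f"{base_url}/api/issues/search?{urlencode(params)}"


url = build_audit_url("https://sonarqube.example.com", "my-project")
# To actually run the query, send a GET request with an
# "Authorization: Bearer <token>" header and read the "issues" array
# from the JSON response.
print(url)
```

A scheduled job could diff this list between runs to surface newly resolved issues for review.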
When I use the terms “maker” and “checker,” I’m referring to roles such as developers (the makers), who initiate the Won’t Fix / False Positive resolution, and managers (the checkers), who review and approve it. I’m taking a governance-focused approach.
There is no reviewer process in SonarQube for when issues are marked false-positive / won’t fix (if a user has the Administer Issues permission, they can mark the issue).
I’ve moved your post to our “Product Manager for a Day” category.
Thanks for taking the time to share your need.
There’s no short-term plan to address this. Still, I’m interested to understand more about your case. At what stage would you expect reviewers to check for the issues flagged as won’t fix / false positive? For example, would it be a process that you’ve considered enforcing with the PR review?
We have a similar requirement. In our ideal workflow this would fit into the PR review process: whoever conducts the peer review of the code is also responsible for signing off any new Sonar issues that are not going to be resolved right away, whether that is by closing them as Won’t Fix / False Positive or by reducing the severity for a specific instance so the issue remains but no longer fails the quality gate. Our team structure is very flat, so we don’t want to restrict administering issues to specific individuals, but we would prefer that the peer reviewer is involved in closing any issues, so there is an opportunity for discussion and for feedback into the code quality process.
As an example, while everyone wants to write good code, there are often differences of opinion about where the limit should be set for Cognitive Complexity. If a developer’s new code is slightly over the limit and they think the limit is too strict, they may be tempted to mark the issue as Won’t Fix, whereas if the peer reviewer were involved they might pick it up and suggest a less complex alternative approach.
A somewhat related issue we have is that some of our legacy functions are very complex (100+ Cognitive Complexity). While we obviously want to rewrite these to be better, that is not a quick job; in the short term we need to maintain the code, and unsurprisingly code that complex appears quite frequently in bug fixes. Every time a change is made to these methods, a new Cognitive Complexity issue is flagged against the PR. It would be really great if this only happened when the complexity was increased, whereas currently even a change that reduces the complexity from 100 to 80 still fails for being over the limit. This kind of thing drives some of our devs to be a bit too quick on the Won’t Fix button.
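To make the request concrete, here is a small sketch of the gate logic being asked for: flag a Cognitive Complexity issue only when a change increases a function’s complexity, not merely because the function was already over the threshold. This is a hypothetical illustration of the proposed behavior, not how SonarQube currently works; the threshold value is SonarQube’s default.

```python
# Hypothetical "only flag on increase" gate for Cognitive Complexity.
from typing import Optional

THRESHOLD = 15  # SonarQube's default Cognitive Complexity limit per function


def should_flag(before: Optional[int], after: int,
                threshold: int = THRESHOLD) -> bool:
    """Decide whether a PR change to one function should raise an issue.

    before: the function's complexity on the target branch (None if the
            function is new); after: its complexity in the PR.
    """
    if after <= threshold:
        return False        # under the limit: never flag
    if before is None:
        return True         # new function over the limit: flag
    return after > before   # legacy function: flag only if the PR made it worse


# Under this proposal, a bug fix that reduces a legacy method's
# complexity from 100 to 80 would no longer fail the gate:
print(should_flag(before=100, after=80))
```

The key design choice is that legacy functions are graded on the delta rather than the absolute value, so incremental improvements are rewarded instead of penalized.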