I’m using SonarCloud for a few repos. One in particular has 171 open issues and gets a rating of “A”.
The breakdown is:
High: 56
Medium: 99
Low: 13
Info: 3
I don’t expect an “F”, but I’m confused as to how this would be classified as an “A”. The tooltip says the rating is “A” when the technical debt ratio is < 5%, so I guess that’s the reason, but knowing this code and those issues, something seems off in that measurement.
There are a lot of empty `__init__.py` files, so maybe those are throwing things off, but there are also plenty of other relatively clean files, so I suspect it’s my own perception of where the action is in the repo versus its total breadth.
You can read more about how this measurement works here.
I have personally seen very few projects that have anything other than an “A” or a “B” on Overall Code. I think this metric skews towards A and B simply because, in a sufficiently large project, the divisor (number of lines of code × cost to develop one line, estimated at 30 minutes) will dwarf the numerator even with a large number of issues.
This is less pronounced when looking at New Code, or a PR analysis, where that divisor will be smaller.
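To make the skew concrete, here’s a minimal sketch of the calculation, assuming the documented formula (technical debt ratio = remediation cost ÷ development cost, with development cost = lines of code × 30 minutes per line) and the default rating thresholds (A at ≤ 5%). The repo size and per-issue remediation times below are made-up numbers for illustration:

```python
# Sketch of the maintainability rating calculation, assuming the documented
# formula: technical debt ratio = remediation cost / development cost,
# where development cost = lines of code * 30 minutes per line.
# Thresholds below are the default rating grid (A <= 5%, B <= 10%, ...).

def maintainability_rating(remediation_minutes: float, lines_of_code: int,
                           minutes_per_line: float = 30.0) -> str:
    """Return the letter rating for a given total remediation effort."""
    development_cost = lines_of_code * minutes_per_line
    ratio = remediation_minutes / development_cost
    for threshold, rating in [(0.05, "A"), (0.10, "B"), (0.20, "C"), (0.50, "D")]:
        if ratio <= threshold:
            return rating
    return "E"

# A hypothetical 50k-line repo: even 171 issues at, say, 20 minutes each
# (3,420 minutes of debt) is dwarfed by 50,000 * 30 = 1,500,000 minutes.
print(maintainability_rating(171 * 20, 50_000))  # ratio ~0.23% -> "A"
```

The same 3,420 minutes of debt concentrated in a PR touching only 500 lines would give a divisor of 15,000 minutes and a ratio over 20%, which is why New Code and PR analyses are far stricter in practice.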
An option to tune this does indeed exist in SonarQube Server, but not in SonarQube Cloud.
I’ll flag this for our PMs – both the calculation itself, and the lack of tunability in SonarQube Cloud. There might be an opportunity for us to do something about it.
Thank you @Colin, it helps to at least have it confirmed so I can stop hunting for configuration options.
If there is to be some change in this area, I personally tend to give weight to code smells, complexity, etc. when they sit in hotspots in the code. A few hardcoded strings or some duplicated code off in a file that hasn’t changed in 2 years isn’t much of a concern. Issues near logic that’s important to the business and under constant change are of great concern and much more critical to address. I suppose it’s a kind of risk multiplier based on change rate.
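As a rough sketch of that risk-multiplier idea (this is not a SonarQube feature; the function, weights, and churn window are all hypothetical), each issue’s remediation cost could be scaled by how often its file has changed recently:

```python
# Hypothetical churn-weighted debt score -- not a SonarQube feature,
# just one way to express "issues in frequently changed files matter more".

def weighted_debt(issues, churn_by_file, base_multiplier: float = 1.0) -> float:
    """issues: list of (file, remediation_minutes) pairs.
    churn_by_file: number of commits touching each file in some recent window."""
    total = 0.0
    for file, minutes in issues:
        churn = churn_by_file.get(file, 0)
        # Issues in a hot file are amplified; a dormant file keeps base weight.
        total += minutes * (base_multiplier + churn)
    return total

issues = [("core/billing.py", 30), ("legacy/export.py", 30)]
churn = {"core/billing.py": 12, "legacy/export.py": 0}  # commits in 90 days
print(weighted_debt(issues, churn))  # 30*(1+12) + 30*(1+0) = 420.0
```

Two identical 30-minute issues end up an order of magnitude apart once the change rate of their files is factored in, which matches the intuition above.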