SonarQube Standards

Hello,

If an organization has to specify standards or guidelines for using SonarQube as a static code analyzer, what could they be?

Hi,

Welcome to the community!

This is an extremely broad, open question, but as I think about it I realize that the basics are surprisingly simple:

  1. Every project (for which language support is available) is analyzed on a regular basis, where “regular” means more frequent than “monthly”. Ideally this would be after every commit.
  2. No project releases with a red / failed Quality Gate.
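
For illustration, here’s a minimal sketch of how point 2 can be enforced in a release pipeline by asking the Web API whether the project’s Quality Gate is green. The api/qualitygates/project_status endpoint is standard; the server URL, project key, and token handling below are placeholders for your own setup:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Minimal sketch: fail a release step when the Quality Gate is red.
// The server URL, project key, and SONAR_TOKEN are placeholders for your setup.
public class GateCheck {
    public static void main(String[] args) throws Exception {
        String sonarUrl = "https://sonarqube.example.com";  // placeholder
        String projectKey = "my-project";                   // placeholder
        String token = System.getenv("SONAR_TOKEN");        // user token, sent as Basic auth user

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(sonarUrl + "/api/qualitygates/project_status?projectKey=" + projectKey))
            .header("Authorization", "Basic "
                + Base64.getEncoder().encodeToString((token + ":").getBytes()))
            .build();

        String body = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString())
            .body();

        // The response JSON contains "projectStatus": {"status": "OK" | "ERROR", ...}.
        // A real implementation should use a JSON parser; a substring check keeps this self-contained.
        if (body.contains("\"status\":\"ERROR\"")) {
            System.err.println("Quality Gate is red - blocking the release.");
            System.exit(1);
        }
        System.out.println("Quality Gate is green.");
    }
}
```

Recent scanner versions can also do this directly via the analysis parameter sonar.qualitygate.wait=true, which makes the scanner itself fail the build when the gate is red.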

Beyond that there can be realms of subtleties and team discussions, but ideally:

  • all teams use the same Quality Gate - everyone is held to the same standard
  • your Quality Gate focuses on New Code measures - make sure the code you write today is good and don’t penalize teams who happen to be working on legacy projects

HTH. It should at least get you started.

 
:slightly_smiling_face:
Ann

3 Likes

@ganncamp thank you. Just a general question: different types of code can adhere to different Quality Gates. In this case, how can I ensure a common Quality Gate is specified across the entire organization and turned into a standard to be followed?

Hi,

Ah ha! I’m glad you asked!

If you tried to set a Quality Gate / Release criterion of 0 Bugs, then that would probably be doable with some effort for projects started in the last year or so. At the same time, anything over five years old would never be allowed to release again! The teams working on those old projects would be penalized because of the project history.

But if you say “let’s ignore the past” and focus on New Code, i.e. 0 New Bugs, then that’s a standard everyone can meet. You’re saying to all the developers: We won’t hold what’s already in production against you. Just make sure that the code you write today is clean.

And that’s a fair standard that everyone can meet.
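
To make that concrete, here’s a hedged sketch of what the “0 New Bugs” standard looks like as a Quality Gate condition created through the Web API. The api/qualitygates/create_condition endpoint and the new_bugs metric key are standard, but parameter names (gateName vs. gateId) vary across SonarQube versions, so check the Web API docs on your own instance; the gate name is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Hedged sketch: add a "0 New Bugs" condition to a Quality Gate via the Web API.
// metric=new_bugs, op=GT, error=0 reads as: the gate fails when New Bugs > 0.
// "Company Way" is a placeholder gate name; older versions take gateId instead of gateName.
public class ZeroNewBugs {
    public static void main(String[] args) throws Exception {
        String sonarUrl = "https://sonarqube.example.com";  // placeholder
        String token = System.getenv("SONAR_TOKEN");

        String form = "gateName=Company%20Way&metric=new_bugs&op=GT&error=0";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(sonarUrl + "/api/qualitygates/create_condition"))
            .header("Authorization", "Basic "
                + Base64.getEncoder().encodeToString((token + ":").getBytes()))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(form))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```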

Now… you’ll find that Coverage on New Code is in the default Quality Gate. Being realistic, I don’t think you have unit test tooling available in every language. It would be unfair to tell the ABAP folks (to pick a language at random) that they had to meet a unit test coverage standard when they might have to build the coverage tooling themselves from scratch. (And then add tests on the tooling…? :dizzy_face:). So in that particular case, it would be reasonable to have a second Quality Gate for a small subset of projects that used every criterion from the main Quality Gate except Coverage on New Code.
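
As a hedged sketch of that exception gate: copy the main gate, then remove its “Coverage on New Code” condition. The api/qualitygates/copy, show, and delete_condition endpoints are standard, but their exact parameters differ across versions, so verify against your server’s Web API docs; gate names are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Hedged sketch: derive a "no Coverage on New Code" gate from the main one.
// This step copies the gate; you would then call api/qualitygates/show to find the id
// of the "Coverage on New Code" condition and api/qualitygates/delete_condition to
// remove it. Gate names are placeholders.
public class CopyGate {
    public static void main(String[] args) throws Exception {
        String sonarUrl = "https://sonarqube.example.com";  // placeholder
        String token = System.getenv("SONAR_TOKEN");
        String form = "sourceName=Company%20Way&name=Company%20Way%20minus%20coverage";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(sonarUrl + "/api/qualitygates/copy"))
            .header("Authorization", "Basic "
                + Base64.getEncoder().encodeToString((token + ":").getBytes()))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(form))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```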

But in general, by enforcing standards on only the changes, it is quite practical to come up with a standard that everyone can meet.

Hopefully this helps. If there are specific cases where you’re still not convinced, let’s hash through the details.

 
:smiley:
Ann

2 Likes

I agree ignoring the past is a nice practical measure to make sure everyone can meet the rule, but it’s somewhat unfair for those projects with a “good past”. I mean, if my project has coverage above expectations, I don’t want to be forced to achieve a certain percentage at every single change. Some changes may affect irrelevant parts, and I prefer to focus my testing efforts on other areas as long as the total coverage does not fall below expectations.
So, in a typical company where legacy and new projects coexist, I think it may be appropriate to have different Quality Gates depending on the current status of a project, i.e. “good” projects having the rule for overall coverage rather than new code.
Does it make sense to anyone?

Hi,

It does make sense.
IMO there has to be a specific Quality Gate for legacy projects, meaning one that uses only ‘new code’ conditions, e.g. no new Blocker/Critical/Major issues. New projects should start clean, meaning no Blocker/Critical/Major issues plus 60-80% coverage.
WRT code coverage, we don’t use this condition for legacy projects. Most of these projects never really used unit tests, so it doesn’t make sense to enforce a coverage of 80% on new code now. In any case, you are not obliged to use coverage as a criterion in your Quality Gate; it’s evaluated nevertheless, so you can still track it.
Finally, you may use inclusions and exclusions for specific parts of the code; search for ‘coverage’ here:
https://docs.sonarqube.org/latest/project-administration/narrowing-the-focus/
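
As a hedged sketch of such an exclusion: the sonar.coverage.exclusions property is the standard mechanism, and it is usually just set in sonar-project.properties or the project UI. Here it is set at project level through the api/settings/set endpoint; server URL, project key, and glob patterns are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Hedged sketch: exclude generated/legacy code from the coverage computation for one
// project. The patterns, project key, and server URL are placeholders; one repeated
// 'values' parameter is sent per exclusion pattern.
public class CoverageExclusions {
    public static void main(String[] args) throws Exception {
        String sonarUrl = "https://sonarqube.example.com";  // placeholder
        String token = System.getenv("SONAR_TOKEN");

        String form = "key=sonar.coverage.exclusions"
            + "&component=my-project"                       // project key (placeholder)
            + "&values=src/generated/**"
            + "&values=src/legacy/**";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(sonarUrl + "/api/settings/set"))
            .header("Authorization", "Basic "
                + Base64.getEncoder().encodeToString((token + ":").getBytes()))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(form))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());          // 204 on success
    }
}
```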

I tend to disagree with this – when you don’t enforce Coverage on New Code, you risk no longer maintaining the Coverage metrics you’re so proud of (as they slowly slide down beneath the threshold). If you are changing this code, what makes it irrelevant (and not worthy of being tested)?

Mmh, I admit this “degradation effect” sounds familiar to me. This might suggest enforcing a certain degree of coverage on new code to prevent developers from simply ignoring testing until they lose all the margin.
I also agree “irrelevant” is not the most appropriate word. “Error-prone” reflects the idea better (i.e. the developer knows when a given commit carries high risk and should be able to balance testing efforts based on this knowledge).
All in all, despite these nuances I still claim it makes sense to have distinct quality gates for new and legacy projects (at least in my current environment and I suspect it’s quite a common scenario).

Hi guys,

I’m back from a long Thanksgiving break and full of turkey and beans!

Because everyone can always be trusted to do what they should and never bow to deadline pressure, and no one ever cuts corners “just this one time”… :joy:

Uhm… no time like the present to get started?


Okay, beyond the obligatory pushback, I will say that internally we do have multiple Quality Gates. There is the base/default QG with conditions on New Code plus a minimal requirement on Security and Reliability on overall code. And then we have additional QGs that increase expectations [1], [2], [3]. BTW, those additional QGs were requested by the developers. What I’ve seen, not just here at SonarSource but also in my previous job, is that sometimes machismo kicks in and you get a healthy competition going to meet better and better standards. That’s where these extra QGs came from.

For full disclosure, we do have a QG without a Coverage requirement which is used only for projects where a large portion of the code requires interaction with external systems. Projects are allowed to use this exceptional QG only after showing that getting the required coverage with unit tests simply isn’t feasible. Perhaps we should be feeding in the IT reports, but… that’s an issue for another day. :smile:

So even though we preach the gospel of holding everyone to the same standards we do make an exception for what I’ll call hardship. Aaand in general, we do walk the walk and hold nearly everyone to the same minimum standards. And if they want to hold themselves to higher standards, well… we let 'em.

 
:smiley:
Ann

Hi Ann, thanks for the follow-up. Indeed these practical details are very helpful to understand the big picture. Thanks for sharing them (btw very fun names! I hope we can play in the champions league some day).

Just one more question that is not directly related, but that also has to do with practical guidelines to improve quality. It’s about assertions in tests. I mean, sometimes code with good coverage fails to detect simple bugs, and the reason is that there are trivial assertions or no assertions at all (i.e. the test executes many lines of code but does not verify whether they behave as expected). I guess TDD could be a long-term answer for that, but we’re still far from this level. Is there any simple recipe to check that tests are really testing something? (Sorry if it’s a well-known subject, I’m new to this forum.)

Hi @marcelubach,

I’m glad you asked! :smiley:

In fact we do have some rules specifically for tests. Unfortunately, I’m not sure many have been implemented for languages other than Java, but at one point an objective to give tests more love was on the table for next year. No idea whether it will make the cut, but if you need these types of rules for other languages, feel free to agitate for them here in new threads.
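
For illustration, here is the kind of pattern those rules catch, in Java since that’s where they are most complete. Rule S2699 (“Tests should include assertions”) flags tests like the first one below; PriceCalculator is a made-up class for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// PriceCalculator is a made-up class for the example.
class PriceCalculator {
    double total(int quantity, double unitPrice) {
        return quantity * unitPrice;
    }
}

class PriceCalculatorTest {

    @Test
    void executesButVerifiesNothing() {
        // Flagged by S2699: every line runs (and counts toward coverage),
        // but a wrong total would never make this test fail.
        new PriceCalculator().total(3, 9.99);
    }

    @Test
    void verifiesTheOutcome() {
        // Compliant: the same coverage now comes with an actual check.
        assertEquals(29.97, new PriceCalculator().total(3, 9.99), 0.001);
    }
}
```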

 
HTH,
Ann