How do you configure your Quality Gates?

One of the core features of Sonar that enables developers to write Clean Code is the Quality Gate, which acts as the key indicator for whether or not your code can be merged or released.

The built-in Quality Gate (the Sonar Way) is a great starting point for users, with a focus on Reliability, Security, Maintainability, Coverage, and Duplication on New Code.

Users can also define their own Quality Gates to meet the needs of teams with different requirements or varying levels of maturity.

On Sonar’s own instance of SonarQube, we define six Quality Gates beyond the built-in Sonar Way (if you’d rather script this kind of setup than click through the UI, see the sketch after the list):

  • SonarSource Way – This Quality Gate, which serves as the foundation for the rest (and the default for projects not otherwise assigned a Quality Gate), raises the Coverage on New Code threshold to 85% and adds conditions on Overall Code to ensure there are no Blocker bugs or vulnerabilities in the entire codebase
  • SonarSource Way - CFamily – C/C++ developers always have… unique needs. :wink: In this Quality Gate, the duplication threshold has been significantly raised, and all that’s required of coverage is that some amount is reported
  • SonarSource Way - Champions League – A stricter coverage condition is used (90%)
  • SonarSource Way - LT Unicorn League – The team responsible for developing Sonar’s analysis engine uses this Quality Gate and has decided to hold themselves to the high standard of 95% Coverage on New Code, along with 0 Blocker or Critical Issues (to catch Blocker/Critical Code Smells, which won’t always violate the Maintainability Rating condition)
  • SonarSource Way - SonarQube Team – The team responsible for developing SonarQube uses this Quality Gate, and it is identical to the LT Unicorn League… :thinking: Maybe we have some consolidation to do!
  • SonarSource Way - Without Coverage – We have a few projects with a significant amount of integration code; as a result, test coverage is not a strict indicator we use to release or merge, so we remove that condition while keeping the rest.
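If you’d rather manage this kind of setup as code, the Web API can do it. Here’s a minimal sketch in Python, assuming a recent SonarQube and a token with the Administer Quality Gates permission; the gate name and thresholds below are just examples, and parameter names (gateName vs. gateId) vary a little between versions, so check api/webservices on your own instance:

```python
# Minimal sketch: create a Quality Gate and add two conditions via the
# SonarQube Web API. Assumes SonarQube 8.4+ (which accepts gateName;
# older versions want gateId) and a token with "Administer Quality
# Gates" permission. Names and thresholds are illustrative only.
import requests

SONAR_URL = "https://sonarqube.example.com"  # placeholder instance URL
AUTH = ("squ_your_token_here", "")           # token as username, empty password

def post(endpoint: str, **params) -> None:
    resp = requests.post(f"{SONAR_URL}/api/{endpoint}", params=params, auth=AUTH)
    resp.raise_for_status()

# Create the gate.
post("qualitygates/create", name="Example Way")

# Coverage on New Code must be at least 85%: the condition fails when
# the measured value is Less Than (LT) the error threshold.
post("qualitygates/create_condition",
     gateName="Example Way", metric="new_coverage", op="LT", error="85")

# No Blocker issues anywhere in the codebase: fails when the count is
# Greater Than (GT) zero.
post("qualitygates/create_condition",
     gateName="Example Way", metric="blocker_violations", op="GT", error="0")
```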

:warning: While you can define as many Quality Gates as you want, we believe it’s important for organizations to create only a few and broadly hold teams to the same standard, so that there’s a common definition of code that meets organizational standards to be merged or released.

Now that you know how we define our Quality Gates at SonarSource, we’d like to hear from you! Tell us about…

  • How many Quality Gates are you using in your organization?
  • When do you decide to create a new Quality Gate?
  • What conditions are you typically adding/removing/adjusting?
  • Do most projects use the same Quality Gate, or does every team try to define their own standards?

Or maybe the built-in Sonar Way meets all your needs and all your projects stick to it. Let us know!

5 Likes

At ASSA ABLOY we’ve tried to keep the number of Quality Gates down, but we’ve identified the need for at least three:

  1. The default, “Leak”, suitable for most projects, which checks only the leak period (New Code): >80% coverage, no Blocker/Critical issues, Reliability/Security/Security Review rated A
  2. “Leak + Overall”, suitable for mature projects that are already passing “Leak” and want to step up their efforts. It’s a superset of “Leak” that also checks the overall codebase, with slightly less strict conditions there.
  3. “Safety-critical” for projects where no mistakes are allowed (>90% coverage, 0 Minor/Major/Critical/Blocker issues, Reliability/Security/Security Review rated A)
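For what it’s worth, mapping projects onto these gates can also be scripted. A rough sketch, with a placeholder instance URL and project key (on SonarQube versions before 8.4 the select endpoint takes gateId instead of gateName):

```python
# Sketch: point a project at the gate matching its maturity level via
# the Web API. Instance URL, token, and project key are placeholders.
import requests

SONAR_URL = "https://sonarqube.example.com"
AUTH = ("squ_your_token_here", "")

GATE_BY_MATURITY = {
    "default": "Leak",
    "mature": "Leak + Overall",
    "safety-critical": "Safety-critical",
}

def assign_gate(project_key: str, maturity: str) -> None:
    """Select the Quality Gate matching a project's maturity level."""
    resp = requests.post(
        f"{SONAR_URL}/api/qualitygates/select",
        params={"projectKey": project_key,
                "gateName": GATE_BY_MATURITY[maturity]},
        auth=AUTH,
    )
    resp.raise_for_status()

assign_gate("com.example:door-controller", "safety-critical")
```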
2 Likes

Hi @Colin,
In my organisation, we use 5 levels of Quality Gates, like the Nutri-Score:

  • Best quality
  • Good quality
  • Medium quality
  • Minimal quality
  • No quality

Each technology/language has its own dev standards and an associated Quality Profile, so the 5 levels of Quality Gate can be adapted for mobile, front-end Java, back-end COBOL…
Conditions in the Quality Gates are always the same: coverage, duplication, blockers, criticals, majors.

Every team can choose the Quality Gate for their apps.
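To give an idea of how the five levels could be kept consistent, here is a rough sketch that generates them all with the same condition set and only the thresholds changing. Every threshold below is made up for illustration; it is not our real configuration:

```python
# Sketch: generate five Nutri-Score-style gates sharing one condition
# set (coverage, duplication, blockers, criticals, majors), with only
# the error thresholds varying per level. All numbers are invented.
import requests

SONAR_URL = "https://sonarqube.example.com"  # placeholder
AUTH = ("squ_your_token_here", "")

#         gate name          coverage dup%  blockers criticals majors
LEVELS = [("Best quality",    "90",   "3",  "0",     "0",      "0"),
          ("Good quality",    "80",   "5",  "0",     "0",      "10"),
          ("Medium quality",  "70",   "8",  "0",     "5",      "50"),
          ("Minimal quality", "50",   "10", "0",     "20",     "200"),
          ("No quality",      "30",   "15", "5",     "50",     "500")]

def post(endpoint: str, **params) -> None:
    resp = requests.post(f"{SONAR_URL}/api/{endpoint}", params=params, auth=AUTH)
    resp.raise_for_status()

for name, cov, dup, blockers, criticals, majors in LEVELS:
    post("qualitygates/create", name=name)
    # Same five conditions at every level; only the thresholds differ.
    for metric, op, error in [("new_coverage", "LT", cov),
                              ("new_duplicated_lines_density", "GT", dup),
                              ("blocker_violations", "GT", blockers),
                              ("critical_violations", "GT", criticals),
                              ("major_violations", "GT", majors)]:
        post("qualitygates/create_condition",
             gateName=name, metric=metric, op=op, error=error)
```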


1 Like

I have only a little idea about this, but I am open to learning and gathering knowledge. Thank you so much!

We have two Quality Gates: one with code coverage and one without.
We would really like to have just one, applying across all teams and all languages, but there are some mature codebases where it is practically impossible to add tests, some teams that simply do not have any tests at all at the moment, and some teams that rely solely on integration tests.

The first two cases are theoretically “solvable” with time, effort, and refactoring (of code and of teams!), but the third seems a bit harder. There is nothing inherently wrong with relying only on (often end-to-end) integration tests, beyond the longer feedback loop and the difficulty of pinning down which change broke what… but you cannot collect coverage stats at build time, or indeed at all really, if the thing you are testing is, for example, a deployed instance of a service.
I am open to ideas on how to solve that :slight_smile:

Other than coverage, the conditions in the two gates are the same: Security, Reliability, and Maintainability ratings of A plus 100% of Security Hotspots reviewed on New Code; and a Security rating of A, 100% of hotspots reviewed, and a Reliability rating of B on Overall Code.

We do report on code smells, but they are rather more aspirational, I guess; the key thing is no bugs, no vulnerabilities, and no hotspots left unreviewed.
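To make that enforcement concrete, this is roughly how a pipeline can ask SonarQube for the gate verdict. A sketch only, with placeholder URL, token, and project key; a real pipeline should first wait for the analysis background task to complete:

```python
# Sketch: query a project's Quality Gate status from CI and fail the
# build on ERROR. Placeholders throughout; a real pipeline should wait
# for the analysis background task (ceTaskId in report-task.txt) to
# finish before asking, or the verdict may be stale.
import sys
import requests

SONAR_URL = "https://sonarqube.example.com"
AUTH = ("squ_your_token_here", "")

resp = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"projectKey": "com.example:some-service"},
    auth=AUTH,
)
resp.raise_for_status()
status = resp.json()["projectStatus"]

print(f"Quality Gate: {status['status']}")
for cond in status.get("conditions", []):
    print(f"  {cond['metricKey']}: {cond['status']} "
          f"(actual={cond.get('actualValue')}, threshold={cond.get('errorThreshold')})")

if status["status"] != "OK":
    sys.exit(1)  # block the merge/release
```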

@Colin I am guessing from your description that you mainly vary the code coverage % across gates and not other aspects - essentially what we are doing.
I can’t really see why teams should be happy with some projects being less secure, more buggy, etc. than others… and over time I would hope that “muscle memory” means engineers automatically write better-quality code because they get so used to fixing issues reported by Sonar, and fewer issues are reported because the code starts off better.