Different Quality Gates for Services to Opt-In to (Terraform)

:wave: Hello!

We’ve recently upgraded to SonarQube LTS, and we’re kicking off 2024 by establishing minimum requirements for our services. We’re primarily a Java shop, but we also make heavy use of TypeScript/Node.js, Go, and Python.

We recently started experimenting with a SonarQube Terraform provider to configure quality gates as code, and we have a working example.
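
For context, the wiring looks roughly like this (we’re trying the community jdamata/sonarqube provider; the auth arguments vary by provider version, so treat the host and credentials below as illustrative placeholders):

```hcl
terraform {
  required_providers {
    sonarqube = {
      source = "jdamata/sonarqube"
    }
  }
}

variable "sonarqube_token" {
  type      = string
  sensitive = true
}

# Host and credentials are placeholders; depending on the provider
# version, auth is a user/pass pair or a token.
provider "sonarqube" {
  host  = "https://sonarqube.example.com"
  token = var.sonarqube_token
}
```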

What we’d want to do is establish 3 “tiers” or levels of quality gates that services could subscribe to.

Subject to change (just arbitrary levels; there’s a rough sketch after the list):

  • Bronze
  • Silver
  • Gold
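
To make that concrete, here’s a rough sketch of what the tiers could look like. The threshold values are arbitrary placeholders, and the `sonarqube_qualitygate` resource and its `condition` attributes follow the community provider’s docs, so they’re worth verifying against whatever version you pin:

```hcl
locals {
  # Placeholder thresholds for "new coverage" -- real values TBD.
  quality_gate_tiers = {
    bronze = 60
    silver = 80
    gold   = 100
  }
}

# One quality gate per tier; each fails when new coverage
# drops below the tier's threshold.
resource "sonarqube_qualitygate" "tier" {
  for_each = local.quality_gate_tiers

  name = "tier-${title(each.key)}"

  condition {
    metric    = "new_coverage"
    op        = "LT"
    threshold = each.value
  }

  # Any other "Sonar way" conditions we want to keep (hotspots,
  # smells, etc.) would be declared as additional condition blocks.
}
```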

We also use Backstage to bootstrap new services, so we’d integrate this into that ecosystem: when you spin up a new service, it comes with the “Bronze” quality gate, so to speak.

We’ve mainly focused on code coverage (new coverage) as the differentiating metric, while maintaining the default “Sonar way” thresholds for the other metric types (hotspots, smells, etc.).

Are there any recommendations for what each quality gate could look like? For example, would Gold be a 100% threshold for new coverage, Silver 80%, and so on?

If anyone has tried this approach, please let me know; I’d love to bikeshed ideas. Thanks!

Hey @lpcruz! :wave:

Thanks for raising the topic. It reminds me that we do a little bit of this internally (you can check the Quality Gates configured on our own instance of SonarQube here).

This will all depend on the maturity of your dev teams, but we tend to think that 80% is a good start for anybody, and that 100% is probably too much for anybody (see Forget 100% Coverage - Focus on Valuable Testing | Testopia). All that said, we work in increments of 5%, up to a maximum of 95%.

We especially think 80% is a fine minimum because we’re only asking developers to focus on new coverage (like you are), so the “sins” of the past are forgiven.


Hey Colin! :wave:

First, thanks again for the response to this (super long overdue, I know), but we really took this guidance to heart and have started implementing a very similar approach. Specifically, we have four “tiers” that increment by 5%, starting at 80% and capping at 95%.
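
In Terraform terms, the tier map ended up looking something like this (tier names here are illustrative, not what we actually call them):

```hcl
locals {
  # Four tiers of new-coverage thresholds, incrementing by 5%.
  # Tier names are illustrative placeholders.
  quality_gate_tiers = {
    bronze   = 80
    silver   = 85
    gold     = 90
    platinum = 95
  }
}
```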

I know it will really depend on the teams, but I’m curious whether you have any recommendations, or have seen working practices, that would help teams “graduate” from tier to tier.

FWIW, we have pretty good observability of the metrics (specifically, overall and new coverage trends over time), so I’m wondering whether that could be leveraged somehow to selectively approach teams and recommend a higher tier.

Let me know! Thanks!

Hey @lpcruz

Sorry that I never came back with some additional guidance.

I’ll be honest: I don’t have a lot of advice to share about how to “graduate” teams from one Quality Gate to another.

At Sonar, we’ve really let individual teams decide when they want to “up their game” for a specific project. In our view, the “lowest” tier is the company-wide expectation (it’s not the bare minimum; it’s just right), and our goal is not to have every team end up on the highest tier. I think this is a good approach; otherwise, you’re always chasing what’s next, and as I mentioned earlier, there can be diminishing returns.