Successful adoption of SonarQube to improve code quality

Reaching out to the community to see if you can share how SonarQube has helped improve code quality. One of the issues I am running into is push-back from the devs around the “value” this tool provides. The sentiment is that static code analysis suggestions generate false positives and don’t contribute to improving code quality, so the tool will be ignored.

Wondering if anyone has concrete examples of rulesets, success stories, or adoption strategies that have worked for their teams, and is willing to share them with the community.

Thanks in advance.

3 Likes

Hi,

I didn’t jump straight on this thread in order to give other folks a chance to chime in first. After all, it will sound a little biased coming from me.

But I’ll share anyway to get the ball rolling. :smile: I actually have two stories to tell.

Improving quality within SonarSource

Our CEO Olivier Gaudin likes to talk about how we improved our own code quality using SonarQube. The story is a familiar one: there was a conviction that test coverage should be at a certain level, but developing features was the priority. Adding tests was going to come “later”, but there was never time at the end of a release cycle to add them. Coverage gradually declined…

That trend continued until we put a Quality Gate in place and enforced it. The QG required 80% coverage on new code, and the version wasn’t releasable without it. So commit by commit, we started adding that missing coverage. It didn’t matter whether you were adding a brand new method or updating an old one; if you touched it, it was “new code” and at least 80% coverage was required.

And gradually, very gradually, the overall coverage started creeping up without any kind of mandate or initiative to write tests on old code. From a nadir of 67% in Oct. 2011, we’ve risen to 89.4% overall coverage today. And coverage on new code currently stands at 93%.
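If you want to enforce the same kind of condition on your own gate, it can be scripted against the server’s web API. Here’s a rough sketch (Node 18+, run as an ES module); the server URL, token, and gate name are placeholders, and parameter names vary a little between server versions (older ones take `gateId` rather than `gateName`):

```js
// Rough sketch: add an "at least 80% coverage on new code" condition
// to a quality gate via SonarQube's web API. URL, token and gate name
// are placeholders for your own setup.
const serverUrl = 'https://sonarqube.example.com';
const token = process.env.SONAR_TOKEN; // a token with gate admin rights
const auth = 'Basic ' + Buffer.from(`${token}:`).toString('base64');

await fetch(`${serverUrl}/api/qualitygates/create_condition`, {
  method: 'POST',
  headers: { Authorization: auth },
  body: new URLSearchParams({
    gateName: 'Team Gate',  // the gate to extend
    metric: 'new_coverage', // coverage measured on new code only
    op: 'LT',               // condition fails when the value is less than...
    error: '80',            // ...80%
  }),
});
```

With that condition in place, any analysis where coverage on new code drops below 80% flips the gate to red, which is exactly the “not releasable without it” lever described above.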

Fostering adoption

In my experience, good developers want to write good code. So what I did at my previous job was analyze projects, pick out a few issues from each one - you know, the “oh yeah, that’s gotta be fixed” issues - and go to the best developer on the team to point at the analysis and say “look what I found.”

Usually, there’s a little initial skepticism. That seems to be pretty normal. But when they actually focus on what’s being found in their projects… well, things get easy from there, and two things happen:

  • the developer you won over takes care of bringing the rest of the team along
  • the negotiation about which rules are enabled ensues.

And at that point, you’ve got 'em hooked. :wink:

HTH,
Ann

2 Likes

Complementary to that, it’s important to understand the overall approach to code quality that SonarQube advocates: Fixing the Water Leak.

It’s not just the way SonarQube provides maximum value; it is essentially the most effective way to gradually (as @ganncamp has put it) get on top of your codebase’s quality. And when it comes to getting devs on board, it’s a sane approach too:

  • do not put the burden of legacy code on current devs (or even new joiners), who might have had nothing to do with it in the first place, and who are here to write awesome code instead of spending days fixing old stuff

  • therefore, put in place a Quality Gate that focuses on New Code, constantly making sure that the code devs are implementing right now is pristine. And if SonarQube finds a bug or a vulnerability in that feature branch, in that PR, then the dev can only be thankful (“oh sh*t, I almost pushed this to prod, thanks for catching it!”)

  • and if you push that reasoning further, you can even promote SonarLint to your devs, so that they get feedback right as they code

All in all, a lot of it is about helping the dev at the right time, in the right context. That’s what SonarLint/SonarQube/SonarCloud are focused on, especially once you take the time to understand this ‘Fix the Leak’ approach and, for example, the default Quality Gate.
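To make that “right time, right context” point concrete on the PR side: for a Node project, a pull request analysis can be triggered from CI with the community `sonarqube-scanner` npm package. A rough sketch, with placeholder keys, URLs, and environment variables (the callback-style API below matches that package’s v2/v3 docs, and PR analysis itself requires a commercial edition):

```js
// Rough sketch: trigger a pull request analysis from CI using the
// community `sonarqube-scanner` npm package (assumed installed).
// All keys, URLs and env variable names are placeholders.
const scanner = require('sonarqube-scanner');

scanner(
  {
    serverUrl: 'https://sonarqube.example.com',
    token: process.env.SONAR_TOKEN,
    options: {
      'sonar.projectKey': 'my-project',
      // These parameters tell SonarQube to analyze a PR, not a branch:
      'sonar.pullrequest.key': process.env.PR_NUMBER,
      'sonar.pullrequest.branch': process.env.PR_SOURCE_BRANCH,
      'sonar.pullrequest.base': 'main',
    },
  },
  () => process.exit() // fires once the analysis has been submitted
);
```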

That sentiment needs to be understood and discussed. Static code analysis done right should not generate false positives, and for all of us at SonarSource it is a continuous effort to respond to feedback about false positives (note the #bug:fp category for doing so) and to make sure the code analyzers we deliver are as accurate as possible. On the user side, it also implies a good understanding of what a Quality Profile is, to make sure that the rules you check on your codebase are actually the rules that matter to your developers (and to their development frameworks).
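And since Quality Profiles are where that tuning happens, note that they can be managed over the same web API, which helps when you maintain profiles for many teams. A rough sketch (Node 18+, ES module; profile and rule keys are placeholders, so check /web_api on your own instance for the exact parameters of your version):

```js
// Rough sketch: copy the built-in profile so it can be edited, then
// deactivate a rule the team has agreed is noise for their framework.
// Profile/rule keys, URL and token are placeholders.
const serverUrl = 'https://sonarqube.example.com';
const auth = 'Basic ' + Buffer.from(`${process.env.SONAR_TOKEN}:`).toString('base64');

// 1. Built-in profiles are read-only; copy one to get an editable profile.
await fetch(`${serverUrl}/api/qualityprofiles/copy`, {
  method: 'POST',
  headers: { Authorization: auth },
  body: new URLSearchParams({ fromKey: 'BUILT_IN_PROFILE_KEY', toName: 'Team profile' }),
});

// 2. Deactivate the rule the team voted out (rule key is a placeholder).
await fetch(`${serverUrl}/api/qualityprofiles/deactivate_rule`, {
  method: 'POST',
  headers: { Authorization: auth },
  body: new URLSearchParams({ key: 'TEAM_PROFILE_KEY', rule: 'squid:S1135' }),
});
```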

Our team successfully got Sonar running after 6 months over a codebase with a lot of technical debt and little coverage (<15%). But the leak is at least now halted.

The journey we took was as follows:

  1. Have Sonar running just to see how the team would fail the gate, but without applying it to the rest of the team yet. This let us identify integration issues and pick a leak period (5 days in our case, so issues are not too old).
  2. Set up a quality gate based on this info and hook Sonar into your CI process (a rough sketch of such a CI check follows this list).
  3. Members are made aware of issues as the CI builds start failing.
  4. Members communicate and discuss the issues raised. Tweaks are made to quality gates/rules based on those discussions.
  5. Teams proactively start to address Sonar issues (using SonarLint) or write more tests up front to avoid failing the build.
  6. Quality gates are tweaked/strengthened once general practice improves.
  7. Repeat the process indefinitely.
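A rough sketch of the CI hook from step 2, checking the gate after an analysis (Node 18+, ES module; project key, URL, and token are placeholders, and in a real pipeline you would poll api/ce/task until the background task finishes before asking for the status):

```js
// Rough sketch: fail the CI job when the project's quality gate is red.
// Endpoint names are from SonarQube's public web API; placeholders
// throughout, and polling/error handling are trimmed for brevity.
const serverUrl = 'https://sonarqube.example.com';
const auth = 'Basic ' + Buffer.from(`${process.env.SONAR_TOKEN}:`).toString('base64');

const res = await fetch(
  `${serverUrl}/api/qualitygates/project_status?projectKey=my-project`,
  { headers: { Authorization: auth } }
);
const { projectStatus } = await res.json();

if (projectStatus.status === 'ERROR') {
  console.error('Quality gate failed:', JSON.stringify(projectStatus.conditions));
  process.exit(1); // break the build, per step 3
}
```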

Step 4 is important for feeding into the appropriate rules and quality gates. Sometimes it’s a lack of knowledge, and the rules in Sonar actually start to teach the team how to code better. But as long as the team can have input on tweaking the rules or quality gates, the team will buy into what Sonar is telling them. There will be more discussion at the start, which quickly drops off.

Step 2 is also important. Too strict and you will have major push-back to disable it on the first day. Too loose and you are not preventing any leaks.

When writing lots of new code, Sonar’s default quality gate would obviously be recommended.

If you are modifying lots of old code, you have to come up with something that works for your team. In our case the defaults turned out to be a net-negative use of time, spent wrestling with coverage on really old, hard-to-test code, so we set it up so that teams need > 0% coverage on modified code (meaning some sort of unit testing is at least thought about), but overall coverage cannot drop either. This way legacy code will at least stay static or improve, but it can’t get worse.
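For reference, the two conditions described above could be scripted against the web API roughly like this (Node 18+, ES module; gate name, thresholds, and token are placeholders, and pinning overall coverage to its current value is just one way to approximate “cannot drop”, with the floor re-raised as coverage improves):

```js
// Rough sketch: a gate for legacy code, requiring some coverage on
// modified code plus a floor under overall coverage. Placeholders
// throughout; parameter names vary a little between server versions.
const serverUrl = 'https://sonarqube.example.com';
const auth = 'Basic ' + Buffer.from(`${process.env.SONAR_TOKEN}:`).toString('base64');

async function addCondition(params) {
  await fetch(`${serverUrl}/api/qualitygates/create_condition`, {
    method: 'POST',
    headers: { Authorization: auth },
    body: new URLSearchParams({ gateName: 'Legacy Gate', ...params }),
  });
}

// 1. Require more than 0% coverage on new/modified code:
await addCondition({ metric: 'new_coverage', op: 'LT', error: '1' });

// 2. Approximate "overall coverage cannot drop" by pinning the overall
//    metric at its current value (15% here, per the numbers above):
await addCondition({ metric: 'coverage', op: 'LT', error: '15' });
```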

2 Likes

Liam, thanks for your insight, as #2 and #4 hit home. It’s daunting to see code smells in the thousands when you turn SonarQube on for a decent-sized project for the first time. The devs feel defeated and discouraged to begin with, so they give up; a good plan of attack (tackle criticals or blockers first) would make it more palatable.

Another question is who owns code quality?
I understand the entire dev team (project leader, lead dev, devs, and QA) does, but to whom does it matter most when things don’t go right? I am leaning towards the QA on the team, with the oversight of the EA (technical business owner): basically the QA initiating those conversations with the dev/lead dev and helping the team see potential issues. Thoughts?

Nic, I agree with your “Fixing the Water Leak” analogy.

Here’s an example of a false positive:

SonarQube thinks this is critical, but a dev would disagree, as this code is not being used; it was a placeholder for a later implementation. Is there a way for SonarQube to tag something as critical depending on usage, as opposed to implementation nuances? Yes, someone can go in and reduce the severity manually, but this is the part the devs hate.

Here’s another example: a code smell that is not a critical issue.

Technically this JS code works: JS is not strongly typed, so it ignores the param, and the vm is in global scope, so the intended result is achieved.
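The screenshot isn’t reproduced here, but the kind of pattern being described might look something like this hypothetical reconstruction:

```js
// Hypothetical reconstruction of the pattern described above: the
// function declares a parameter it never uses, and the body reads a
// global `vm` instead, so callers still get the intended result.
var items = [
  { name: 'a', state: 'active' },
  { name: 'b', state: 'done' },
];
var vm = { filter: 'active' }; // global view-model

function applyFilter(filterName) { // flagged: unused function parameter
  // `filterName` is ignored; the global `vm` happens to hold the value,
  // so the code "works" despite the dead parameter.
  return items.filter(function (item) {
    return item.state === vm.filter;
  });
}

console.log(applyFilter('done')); // still returns the 'active' item
```

The analyzer flags the unused parameter as a smell even though the behavior is correct, which matches the “works, but smells” situation described.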

We have been pretty successful at Johnson & Johnson, but there are some key things we did that are worth considering:

  1. SonarQube is for the developer…not management. We do not break builds. We do not report out on teams to management.
  2. Developers should use SonarQube to up their % of PR approvals on the first try.
  3. Each language needs a champion developer to optimize rules.
  4. SonarQube needs a senior exec engineering sponsor (like me) to keep the “wolves” (project managers, VP types, etc.) at bay.
  5. The leak analogy is great - focus on stopping leaks. Focus on the trends.
  6. Use other tools like Veracode, Qualys, etc. for “management”.

Jeff

5 Likes

@kylix: interesting examples. I would suggest you share your feedback about those rules in distinct topics (e.g. #suggestions); that way there can be a dedicated discussion about each of them, while this thread remains focused on the higher-level discussion of best practices and recommendations for a successful adoption of SonarQube.

This is key. No faster way to make devs hate SonarQube than turning it into a tattletale.

At my previous job we had a council per language. The councils were made up of highly respected developers (not necessarily tech leads) from each group.

Ann

1 Like

Hi @kylix.
I hope you’ve already successfully found your best approach to using this fantastic tool! Regarding adoption strategies, ownership, and the other topics you’ve pointed to, maybe you want to have a look at this book, this video, or this blog post. I wish you even more fun with this great tool, the wonderful community, and the smart SonarSourcers. :slight_smile:
Best regards
Michael