I didn’t jump straight into this thread because I wanted to give other folks a chance to chime in first. After all, anything I say will sound a little biased coming from me.
But I’ll share anyway to get the ball rolling. I actually have two stories to tell.
Improving quality within SonarSource
Our CEO Olivier Gaudin likes to talk about how we improved our own code quality using SonarQube. The story is a familiar one: there was a conviction that test coverage should be at a certain level, but developing features was the priority. Adding tests was going to come “later”, but there was never time at the end of a release cycle to add them. Coverage gradually declined…
That trend continued until we put a Quality Gate in place and enforced it. The QG required 80% coverage on new code, and a version wasn’t releasable without it. So commit by commit, we started adding that missing coverage. It didn’t matter whether you were adding a brand-new method or updating an old one; if you touched it, it counted as “new code” and at least 80% coverage was required.
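For anyone curious what enforcing that looks like in practice, a gate like the one described can be set up through SonarQube’s Web API. This is a configuration sketch only: the server URL and token are placeholders, and the exact endpoint parameters (e.g. `gateName` vs. `gateId`) vary between SonarQube versions, so check the Web API docs bundled with your instance.

```shell
# Sketch, not a drop-in script: SONAR_HOST and SONAR_TOKEN are placeholders,
# and parameter names differ across SonarQube versions.
SONAR_HOST="https://sonarqube.example.com"
SONAR_TOKEN="your-token-here"

# Create a quality gate
curl -s -u "${SONAR_TOKEN}:" -X POST \
  "${SONAR_HOST}/api/qualitygates/create" \
  -d "name=Coverage Gate"

# Add a condition: fail when coverage on new code is less than 80%
curl -s -u "${SONAR_TOKEN}:" -X POST \
  "${SONAR_HOST}/api/qualitygates/create_condition" \
  -d "gateName=Coverage Gate" \
  -d "metric=new_coverage" \
  -d "op=LT" \
  -d "error=80"
```

The key detail is the `new_coverage` metric: the condition applies only to code added or changed in the new code period, which is what lets overall coverage climb without a mandate to backfill tests on old code.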
And gradually, very gradually, the overall coverage started creeping up without any kind of mandate or initiative to write tests on old code. From a nadir of 67% in Oct. 2011, we’ve risen to 89.4% overall coverage today. And coverage on new code currently stands at 93%.
Winning over developers at a previous job
In my experience, good developers want to write good code. So what I did at my previous job was analyze projects, pick out a few issues from each one - you know, the “oh yeah, that’s gotta be fixed” issues - and go to the best developer on the team, point at the analysis, and say “look what I found.”
Usually, there’s a little initial skepticism. That seems to be pretty normal. But when they actually focus on what’s being found in their projects… well, things get easy from there, and two things happen:
- the developer you won over takes care of bringing the rest of the team along
- a negotiation ensues about which rules should be enabled.
And at that point, you’ve got 'em hooked.