Best practices for increasing code coverage

  • Which versions are you using (SonarQube)?
    • SonarQube 6.7.6.38781
  • What are you trying to achieve?
    • Find the best methodology to reasonably increase code quality/coverage
  • What have you tried so far to achieve this?
    • Attempted to come up with our own plan

Background:

  • We would like to be able to set and track reasonable goals for increasing code coverage/quality on new code.
  • We have a mechanism that allows us to set a minimum threshold for coverage % on new code; builds that fall below it fail CI.
    • We originally planned to set the threshold based on historical ‘code coverage on new code’ values.
    • For example, if over the last 4 months we saw the following code coverage values on new code: Nov 20%, Dec 10%, Jan 25%, Feb 15%, the average is 17.5%.
      We could then set a 19.5% threshold, the goal being to increase code coverage by 2 percentage points (see the sketch below this list).
    • Problem with this approach: SonarQube does not store historical ‘code coverage on new code’ values, nor does it see a reason to do so.
      • We’re curious why SonarQube does not see any point in storing these values.
      • Since our plan is not supported, we’re curious what other teams/companies are doing.
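
For clarity, the arithmetic behind our original plan would look something like the sketch below (the monthly values and the 2-point goal are just the hypothetical numbers from the example above):

```python
# Hypothetical monthly 'coverage on new code' values from the example above.
historical_new_code_coverage = {"Nov": 20.0, "Dec": 10.0, "Jan": 25.0, "Feb": 15.0}

# Average the history, then add the improvement goal to get the CI threshold.
average = sum(historical_new_code_coverage.values()) / len(historical_new_code_coverage)
goal = 2.0  # desired improvement, in percentage points
threshold = average + goal

print(f"average: {average:.1f}%, CI threshold: {threshold:.1f}%")
# -> average: 17.5%, CI threshold: 19.5%
```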

Overall: In SonarQube, what should we track / measure to improve overall code quality?

.
.
.

  • Another set of questions is related to portfolios.
    • Is it possible to adjust the SonarQube homepage to display a specific portfolio?
      • We created an org-chart-like portfolio tree and wanted to have it displayed as the homepage for visibility purposes.
    • Is it possible to show a code coverage metric within a portfolio overview?
      • Sort of like the screenshot you’ll see on this plugin’s main page.

Hi,

What we believe at SonarSource, and what we’ve designed the interface to enable, is that you can gradually improve overall quality by focusing on the quality - and in this case the coverage - of New Code. We call it the Clean as You Code methodology, and we’ve created a web page and I’ve written a blog post to explain it.

Basically, just ignore overall coverage and enforce that all New Code has 80% coverage. Gradually - and this was our own experience internally - overall coverage will naturally increase.

HTH,
Ann

P.S. It’s best to keep it to one question per thread, and you’ve already asked your other questions elsewhere.

Hi Marco, for legacy code we originally started at “0% coverage on new code”. The 0% limit at least made developers consider tests for this old code, even if it’s just a little bit.

Over time, coverage improved, and in tandem we manually increased this check. In effect, our % coverage on new code has increased in line with the % total coverage of all code.

Don’t expect it to change quickly. If you keep needing to make changes to the old code, coverage will improve. If you don’t, it will not change, or you will eventually replace that legacy code with something new that has good coverage checks with Sonar from the get-go.

Thanks Ann!

I read the article and it all makes sense. From a management perspective, what do you believe is a good way to track the progress?

We would want to be able to run reports to determine whether code coverage on new code is increasing, and at what rate.

Currently, it seems there’s no way to see historical values of ‘code coverage on new code’ beyond the percentage for the current leak period.

Thank you!

Thanks for the reply, Liam!

I think I got confused with the fact that “legacy” and “new” are both used in this sentence:

for legacy code we originally started at “0% coverage on new code”.

Did you mean to say: for legacy code we originally started at “0% coverage on legacy code”?

If so, what measure in SonarQube are you using to track this metric? I believe it would just be the overall coverage that is being added to (i.e. anything outside of the coverage added for new code)?

Thank you!

The distinction is that modifying legacy code counts as new code for Sonar :wink:

Yes, we just track overall coverage. We started at 0% overall coverage. As % overall coverage improved, we increased the % new code coverage quality gate in line with it.

E.g. if the % new code coverage quality gate is set to 5%, it’s very unusual for a developer to write only the Sonar limit of 5% worth of tests; it’s usually much higher once tests have been written. But it gives developers the flexibility to determine what is realistic given the state of the legacy code.
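
If you wanted to automate that ratchet, a rough sketch against the SonarQube web API could look like the following. The host, token, project key, and condition id are placeholders; the id of your gate’s new_coverage condition can be read from api/qualitygates/show.

```python
import requests

HOST = "https://sonarqube.example.com"  # placeholder
AUTH = ("my_admin_token", "")           # token is passed as the basic-auth user

# 1. Read the project's current overall coverage.
resp = requests.get(
    f"{HOST}/api/measures/component",
    params={"component": "my:project", "metricKeys": "coverage"},
    auth=AUTH,
)
overall = float(resp.json()["component"]["measures"][0]["value"])

# 2. Raise the 'coverage on new code' condition to match overall coverage.
requests.post(
    f"{HOST}/api/qualitygates/update_condition",
    data={
        "id": 42,              # placeholder condition id (from api/qualitygates/show)
        "metric": "new_coverage",
        "op": "LT",            # fail when coverage is less than the error threshold
        "error": str(round(overall)),
        "period": 1,           # evaluate against the leak period
    },
    auth=AUTH,
)
```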

Hi,

To echo what Liam said, “New Code” is all code that has been added or modified in the New Code period. So we would recommend tracking progress by:

  1. Setting a Coverage on New Code requirement in your Quality Gate. The built-in “Sonar way” Quality Gate requires 80%, and I think that’s a good place to start.
    Is it reasonable to expect the team to go from 0% to 80% on all code overnight? Obviously not. But they can certainly make sure what they do today is covered to at least 80%. Will that require some refactoring of monolithic, legacy methods? Most probably, so you should factor that into the time budget.
  2. Strictly enforce your quality gate. If excuses or “circumstances” allow releases to production with a red quality gate, then you might as well not have one. (A minimal CI check is sketched after this list.)
  3. Sit back and watch your overall coverage gradually increase. This will happen automatically as a natural product of #1. Really. That’s exactly what we saw internally and why we’re believers in and advocates for Quality Gates and Clean as You Code.
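
As a minimal sketch of what that enforcement (point 2) could look like in a CI step, using api/qualitygates/project_status (host, token, and project key are placeholders):

```python
import sys
import requests

HOST = "https://sonarqube.example.com"  # placeholder
AUTH = ("my_ci_token", "")              # token is passed as the basic-auth user

# Ask SonarQube for the quality gate status of the last analysis.
resp = requests.get(
    f"{HOST}/api/qualitygates/project_status",
    params={"projectKey": "my:project"},
    auth=AUTH,
)
status = resp.json()["projectStatus"]["status"]  # "OK", "WARN" or "ERROR"

if status != "OK":
    print(f"Quality gate is {status}; failing the build")
    sys.exit(1)
```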

With this approach, you don’t need historical values on “New” metrics because

  • you’re not looking for a gradual increase in Coverage on New Code. You’re looking for a green quality gate, and >=80% is required for that. There shouldn’t be any trend here to observe. The number goes up to 80% and stays there.
  • you can observe the impact in the overall coverage numbers and watch the trend there (see the sketch below).
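
If management does want a report of that overall trend, the measures history web service (api/measures/search_history, available since SonarQube 6.3, so it applies to your 6.7.x) can return overall coverage over time. A minimal sketch, with the same placeholders as above:

```python
import requests

HOST = "https://sonarqube.example.com"  # placeholder
AUTH = ("my_report_token", "")          # token is passed as the basic-auth user

# Fetch the history of the overall 'coverage' metric for the project.
resp = requests.get(
    f"{HOST}/api/measures/search_history",
    params={"component": "my:project", "metrics": "coverage", "ps": 100},
    auth=AUTH,
)
for point in resp.json()["measures"][0]["history"]:
    print(point["date"], point.get("value", "n/a"))
```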

Does this make sense?

Ann

Thanks Ann & Liam! Makes sense! :slight_smile:
