Variations on Sonar Code Coverage

Hi,

In our SonarQube dashboard, we observe variations in the code coverage after each analysis, even when there are no changes that should impact coverage. You can have a look at the screenshot below to see the variations.

We use a Jenkins pipeline, and there are different agents that have the sonar scanner installed. Visual Studio generated test results are used. Even though the XML files are generated and contain the coverage results, the SonarQube dashboard mostly shows 0% coverage. Additionally, the Coverage pane says there are ~3k unit tests while coverage is displayed as 0%.

SonarQube Deployment : Helm
SonarQube Version : 10.2.1.78527
Sonar Scanner Version : 5.0.1.3006
Java Version : 17.0.7 Eclipse Adoptium (64-bit)
Code Language : C++

We are trying to resolve this instability problem. Could you please help us?

Hi,

Every time we see this kind of behavior, it traces back to inconsistencies in the analysis.

Do you have analyses being submitted from multiple jobs? Multiple CIs? Does every job correctly generate coverage reports and pass them into analysis?

 
Ann

Hi Ann,

Thanks for the prompt response!

We have experienced at least one of the scenarios you asked about that might cause the inconsistency. For instance, sometimes jobs fail to generate coverage reports.

Three questions come to my mind after your comments :slight_smile: I wanted to ask all of them in one message so as not to pollute this topic with too many messages.

  1. It makes sense that this has an impact on new code, but I couldn’t clearly understand how the overall code coverage is impacted by the previous analysis, because the overall code coverage is also 0%.
  2. What is the best practice to avoid this inconsistency?
  3. How can we recover the system so that it calculates and analyzes coverage correctly?

Hi,

We try to keep it to one topic per thread. Otherwise it can get messy, fast. I’ll make a pass at these, but reserve the right to ask you to create new topics if you have followups. :slight_smile:

Analyses don’t build on each other. Each analysis is fresh and new, from whole cloth (insert additional aphorisms if needed :smiley:). So if analysis 1 says you have 100k LOC in Java, and analysis 2 says you have 10 LOC of JS, well… you must’ve refactored & now have a very small JS project. I suppose this is really a case of “computers are dumb; they do what you tell them to do”.

So if analysis 1 says you have 80% coverage and analysis 2 says you have 0% coverage… SonarQube believes you.

Lock down your pipeline. A failure to generate the coverage report should fail the pipeline.

After all, you’re presumably enforcing a coverage requirement in your Quality Gate (right? :innocent:). So no coverage report => 0% coverage => a failing Quality Gate => you can’t merge / promote / release. Right?

Catch the failure as early as possible and that will help get the root cause fixed.
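As a rough sketch only (stage names, paths, and the report glob are placeholders; `findFiles` comes from the Pipeline Utility Steps plugin and `waitForQualityGate` from the SonarQube Scanner for Jenkins plugin), the guard could look something like this:

```groovy
// Rough sketch - adapt paths and names to your pipeline.
stage('Verify coverage reports') {
    // Fail fast if the expected coverage XML files were not produced.
    def reports = findFiles(glob: 'build/coverage/**/*.xml')
    if (reports.length == 0) {
        error 'No coverage reports found - failing the pipeline instead of analyzing with 0% coverage'
    }
}

stage('Quality Gate') {
    // After the analysis has been submitted, block on the Quality Gate result.
    timeout(time: 10, unit: 'MINUTES') {
        waitForQualityGate abortPipeline: true
    }
}
```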

I’m not sure what you’re asking here. You can’t go back and fix the old analyses. You can only go forward with good data.

 
HTH,
Ann

Yeah, your answer is quite expressive; thanks for the detailed explanation :slight_smile:

There is one case I want to talk about.

In our Jenkins pipeline there is a stage for the code coverage calculation, and a couple of jobs run in parallel as we have different levels of software testing.

However, some of the jobs run every time the pipeline is triggered, while we moved some jobs to run only in the nightly builds as they take too much time. This means that after a nightly build we have newly generated code coverage in addition to the daily routine. For instance, 2 test jobs feed coverage into Sonar for the regular (after each commit) pipeline runs, but after the nightly builds, 3 test coverage files are provided.
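Simplified, and with illustrative job names and flag rather than our real configuration, the stage looks roughly like this:

```groovy
// Simplified illustration - job names and the NIGHTLY flag are placeholders.
stage('Code coverage') {
    def testJobs = [
        'unit-tests'       : { build job: 'unit-tests' },
        'integration-tests': { build job: 'integration-tests' }
    ]
    // The long-running suite only runs in nightly builds,
    // so only nightly analyses get its coverage report.
    if (env.NIGHTLY == 'true') {
        testJobs['system-tests'] = { build job: 'system-tests' }
    }
    parallel testJobs
}
```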

Does this approach cause some inconsistency? From what I understand from your comments, this shouldn’t have an impact, but I just want to confirm that.

Hi,

Unless the third set of tests is entirely duplicative (in which case… why?) then yes, this is causing inconsistency in your coverage results.

I see a few different options here, none of them optimal:

  • Only run analysis at night, when you have all 3 sets of tests. Downside: longer lag-time on analysis => poorer developer experience & delays in knowing if your code is really shippable.
  • Re-use the previous night’s test report during the next day (details of storing and retrieving the reports TBD on your side; there’s a rough sketch after this list). Downside: the re-used reports may not be fully accurate by the time they’re re-used.
  • Have two different jobs, one for the 2-report runs, one for the full, overnight run. Downside: if you’re in a commercial edition, this will bloat your license LOC usage.
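For the second option, one way to shuttle the reports around (assuming the Copy Artifact plugin; job and path names are made up) would be something like:

```groovy
// In the nightly pipeline, after the long-running tests have produced their report:
archiveArtifacts artifacts: 'coverage/nightly/*.xml'

// In the daytime pipeline, before running the scanner, pull in the most recent
// nightly report alongside the freshly generated daytime ones:
copyArtifacts projectName: 'my-nightly-job',
              selector: lastSuccessful(),
              filter: 'coverage/nightly/*.xml'
```

However you store them, the key point stands: every analysis should be handed the same, complete set of reports.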

 
HTH,
Ann

Your comments are really insightful. We will be looking into better solutions.

For me, this case can be considered closed.

Thanks for the quick responses :slight_smile:
