Wrong code coverage report (based on opencover format)

  • ALM used: Azure DevOps
  • CI system used: Azure DevOps
  • Scanner command used when applicable (private details masked):
 - task: SonarCloudPrepare@1
   displayName: 'Prepare analysis on SonarCloud'
   inputs:
     SonarCloud: '$(sonarcloud)'
     organization: '$(organization)'
     scannerMode: 'MSBuild'
     projectKey: '$(sonarcloud_project_key_name)'
     projectName: '$(project_name)'
     extraProperties: |
  • Languages of the repository: C#
  • Error observed: The opencover report shows 100% line and branch coverage, but SonarCloud shows less.
  • Potential workaround: I have changed the format from opencover (using Coverlet) to coverage (using dotnet-coverage)
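
For reference, the dotnet-coverage workaround typically looks something like the sketch below (the tool invocation, output path, and property name are illustrative, based on the general SonarCloud .NET coverage guidance, not the exact pipeline used here):

```yaml
# Sketch only: collect coverage with dotnet-coverage instead of Coverlet/OpenCover.
- script: |
    dotnet tool install --global dotnet-coverage
    dotnet-coverage collect "dotnet test --no-build" -f xml -o "$(Agent.TempDirectory)/coverage.xml"
  displayName: 'Run tests with dotnet-coverage'
```

The resulting XML report is then passed to the scanner via sonar.cs.vscoveragexml.reportsPaths in the prepare step's extraProperties.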

Hi @mariusstanescu.

I apologize for the long delay in replying to this topic.

Is there any way you could provide us a small reproducer project for this behavior? This would really help the investigation.

No, not really, especially because I think sometimes the report was OK, but other times it wasn’t (with no changes in the pipeline/scanner command). I don’t know what caused the difference in the analysis/report; the only thing that changed was a few lines of code, for example. I also don’t have the time to investigate this further.

I understand.

One thing I suggest is turning on debug logging in your CI via sonar.verbose=true. We have it in our own pipeline and it doesn’t affect performance much.
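
If useful, sonar.verbose=true can be passed through the prepare step's extraProperties (a sketch matching the task shown above, with the other inputs omitted):

```yaml
- task: SonarCloudPrepare@1
  inputs:
    # ...same inputs as in the snippet above...
    extraProperties: |
      sonar.verbose=true
```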

Also, we suggest uploading the code coverage reports as an artifact of the pipeline.
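
A minimal sketch using the built-in PublishPipelineArtifact task (the targetPath is an assumption; point it at wherever your coverage reports are actually written):

```yaml
- task: PublishPipelineArtifact@1
  displayName: 'Publish coverage reports'
  inputs:
    targetPath: '$(Agent.TempDirectory)/CoverageResults'  # assumed output folder
    artifact: 'coverage-reports'
```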

When you see this manifesting, you can download the code coverage and the debug logs from the end step (- task: SonarCloudAnalyze@), because that is the step where the coverage reports are parsed and pushed to the server. We need DEBUG logs from that step.

Then you can share the logs and the ccov file privately on this forum, and we can inspect them and investigate the issue. Ping here when you are ready to share, and we will send you a private message.

It might be a problem on how we aggregate the reports.


Do you have multiple reports?

Do you build and run unit tests for multiple target frameworks? That might lead to different code coverage reports depending on the target framework, and when aggregating, we may end up with such confusing statistics.

Any additional information will help us narrow down the problem, investigate it, and ultimately improve our products and your experience.

Do I understand correctly that this is a working mitigation?

Yes, that’s the change we made in order to be unblocked.

I’d need a bit more info to be able to reproduce this on our side.

  • Do your scripts create multiple reports which get passed to the Sonar Scanner?
  • Do you build and run unit tests for multiple target frameworks?


Thanks for trying to help.

  • Yes. We create multiple reports (we have multiple unit test projects and one integration test project, all of them generating reports)
  • No. The target framework is the same for all of them (.NET 6)
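
(For reference, multiple OpenCover reports like these are usually handed to the scanner via a wildcard or comma-separated list in extraProperties; the path below is illustrative, not the exact one used in this pipeline:)

```yaml
extraProperties: |
  sonar.cs.opencover.reportsPaths=$(Build.SourcesDirectory)/**/coverage.opencover.xml
```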

For visibility, I’ve created an issue and added a comment to test your scenario and see if we can reproduce the problem. Add IT with comma-separated coverage paths · Issue #6973 · SonarSource/sonar-dotnet · GitHub
