What does sonar do with unit test reports?

I am currently onboarding several C++ projects to sonar.

  • use cppunit for testing.
  • have azure pipelines triggered for pull requests
  • have a build script allowing sonar analysis to be run from the command line as well
  • have coverage data which is successfully imported into sonar

I have been experimenting with importing unit tests results into sonar.
The appropriate property seems to be:

sonar.cfamily.cppunit.reportsPath
Sonar clearly parses the reports as it complains if they don’t exist or are not in cppunit format.
On ‘success’ it reports:

INFO: Sensor cppunit [cpp] (done) | time=17ms
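Concretely, the relevant sonar-project.properties fragment looks something like this (the report directory here is illustrative, not my actual layout):

```
# Point the CFamily analyzer at the CppUnit XML reports.
# The directory is illustrative; use wherever the build writes them.
sonar.cfamily.cppunit.reportsPath=build/test-reports
```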

However I can see nothing related to them on the dashboard.

What does or can sonar do with unit test reports?

In the project overview the measure tab has:

  • Reliability
  • Security
  • Security Review
  • Maintainability
  • Coverage
  • Duplications
  • Size
  • Issues

but nothing for tests. Should it have something?

For Azure I use XSLT to convert them to JUnit format, and they are listed in a tab on the build.
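As an alternative to XSLT, the same cppunit-to-JUnit conversion can be sketched in Python. This is a minimal sketch assuming CppUnit's XmlOutputter layout (a `<TestRun>` root with `<SuccessfulTests>` and `<FailedTests>` children); timings and skipped tests are not handled:

```python
# Minimal sketch: convert CppUnit XmlOutputter output to JUnit XML.
# Assumes the <TestRun>/<SuccessfulTests>/<FailedTests> layout produced by
# CppUnit's XmlOutputter; timings and skipped tests are not handled.
import xml.etree.ElementTree as ET

def cppunit_to_junit(cppunit_xml: str) -> str:
    run = ET.fromstring(cppunit_xml)
    suite = ET.Element("testsuite", name="cppunit")
    failures = 0
    for test in run.findall("./SuccessfulTests/Test"):
        ET.SubElement(suite, "testcase",
                      name=test.findtext("Name", default="unknown"))
    for test in run.findall("./FailedTests/FailedTest"):
        case = ET.SubElement(suite, "testcase",
                             name=test.findtext("Name", default="unknown"))
        failure = ET.SubElement(case, "failure",
                                type=test.findtext("FailureType", default=""))
        failure.text = test.findtext("Message", default="")
        failures += 1
    suite.set("tests", str(len(suite)))
    suite.set("failures", str(failures))
    return ET.tostring(suite, encoding="unicode")

# Illustrative CppUnit report with one passing and one failing test.
sample = """<TestRun>
  <FailedTests>
    <FailedTest id="1"><Name>Suite::testFoo</Name>
      <FailureType>Assertion</FailureType>
      <Message>expected 1, got 2</Message></FailedTest>
  </FailedTests>
  <SuccessfulTests>
    <Test id="2"><Name>Suite::testBar</Name></Test>
  </SuccessfulTests>
</TestRun>"""
print(cppunit_to_junit(sample))
```

The resulting XML is in the shape Azure's test-results tab accepts.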

The only documentation I can find is Test Coverage & Execution | SonarQube Docs

This is focused on the arguably much more important topic of coverage data.

I note there are parameters per language, for example sonar.junit.reportPaths for Java.
It is not clear from the documentation whether it would be wrong to use these parameters to import test results that are in the correct format but generated for a different target language.
For example, I am generating JUnit reports because Azure accepts them but not cppunit format. Would it be correct to use this parameter in my case, or only for a Java project?
I tried it and it also has no visible effect.

I also found this question:

Number of unit tests is zero

where a Java project, presumably using JUnit, expects test counts to be displayed alongside the coverage data. I do not even have a “-” for unit test counts. Does this have to be configured somewhere?

Hey there.

Very little – to the point that we are seriously weighing dropping this feature entirely. We report the number of unit tests and, for some languages, the pass/fail/skipped counts. For the most part, we really don’t think SonarQube analyses should be run unless all tests are passing.

One common point of confusion is that Test Execution metrics are only visible on long-lived branches (including the main branch). They are not available on short-lived branches or pull requests, where the focus is on New Code. Could that be what’s happening here?

I guess it depends on the role of sonar.
I have visibility of the list of tests in Azure. This might not be the case with other platforms. But sonar possibly doesn’t want or need to compete in that space.

I thought perhaps there could be some interesting metrics. For example:

Test Density

  • If you have 80% coverage but only 1 test, you are likely in a ‘lower quality state’ than with 80% coverage and 1000 tests.
    However, I’m not sure where you stand relatively when coverage is higher but there are fewer tests.

To really drill down usefully, though, you would need to know the coverage provided by individual tests, which is not typically or easily measured.
It would be nice to be able to visualise which tests test which lines of code, or vice versa.
I think the inputs to Sonar are independent, so it would not currently be able to make use of such data.

I don’t know if these kinds of metrics are useful, but I like the idea that, with the data imported, we could automatically benefit if any are added in the future. Perhaps that is almost YAGNI by definition?

“One common point of confusion is that Test Execution metrics are only visible on long-lived branches (including the main branch). They are not available on short-lived branches or pull requests, where the focus is on New Code. Could that be what’s happening here?”

That doesn’t appear to be the case, but it is possible: as I am onboarding projects, my master branches are not being analysed yet, though the long-lived development branch is. We are using a model where features merge to the development branch and releases merge onto master (by merging the development branch).
I am trying to work out what the correct Sonar analysis to run on merges to the master branch is
(the pipeline and Sonar commands are version-controlled with the codebase rather than external to it).

Could you answer some questions about what happens if I use either or both of:

  • sonar.cfamily.cppunit.reportsPath
  • sonar.junit.reportPaths

Is there a ‘preferred’ format? (e.g. due to information in the format or the quality of the parser)
Is it safe to use sonar.junit.reportPaths for languages other than Java?
(In which case the documentation should be updated.)

What should happen if I attempt to import the same data both ways?
What actually happens? (will it be duplicated?)

Not all my tests are cppunit, but they are all driven from ctest.
I don’t think there is an importer for ctest XML output, but I can convert it to JUnit format using XSL (see CTest XML to JUnit XML · GitHub) or upgrade and use the new --output-junit option (see unit testing - CMake CTest output to JUnit XML - Stack Overflow).
ctest does not know about individual unit tests, so there is a case for importing both sets of data at once, at least if there is a use for it.
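For reference, the newer CTest route is roughly the following (assuming CMake/CTest 3.21 or newer, which added --output-junit; the paths are illustrative):

```
# Requires CMake/CTest 3.21+ for --output-junit (3.20+ for --test-dir).
ctest --test-dir build --output-junit build/reports/ctest-junit.xml
```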

Hey Bruce.

sonar.cfamily.cppunit.reportsPath and sonar.junit.reportPaths are supported for C/C++/Obj-C and JVM-based languages (Java, Kotlin) respectively. They are not generically supported across all languages.

I see from this question - SonarQube Coverage per Test Information with ANT - #9 by dhawal - that reporting which tests cover which lines was deprecated and dropped (at least for some Java configurations) -