Unit test results are overridden

Must-share information (formatted with Markdown):

  • Sonarqube Community Edition v10.7 (95233), SonarScanner 5.0.1.3006
  • Docker
  • Trying to have all unit tests reported in Sonarqube

I’m trying to import different unit test results into SonarQube. I basically have two kinds of unit tests: ones that “link” to a file and ones that don’t “link” to any file. The problem is that if I add all of them to the analysis, the ones that link to a file “override” the ones that don’t. If I only add the tests that don’t link to any file, they show up fine in the SQ UI, but with both enabled only the linked ones are shown.

EDIT:
Example:
sonar.junit.reportPaths → links to a resource

sonar.python.xunit.reportPath → doesn’t link to a resource
(sonar.python.xunit.skipDetails=true enabled)

Without the JUnit reports, the Python xunit results show up in the UI, but with JUnit enabled they don’t. Only if the Python xunit results link to a resource will both be shown.
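For reference, the combined configuration looks roughly like this (a minimal sketch; the report paths here are just placeholders for wherever the build actually writes the reports):

```properties
# sonar-project.properties – minimal sketch, report paths are placeholders
# JUnit XML reports (each test case links to a test file)
sonar.junit.reportPaths=build/test-results/test
# Python xunit report (test cases do not link to any file)
sonar.python.xunit.reportPath=reports/pytest-xunit.xml
# Report only project-level totals for the Python results
sonar.python.xunit.skipDetails=true
```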

Hi,

Welcome to the community!

Could you double-check your SonarQube version? 10.7 hasn’t been released yet. :sweat_smile:

Could you also expand on what you mean by “doesn’t link to a resource”?

This is a little bit of apples and oranges. Which language(s) is your project in?

 
Thx,
Ann


Hi!

By “link to a resource” I mean that in the SQ UI (Measures / Tests / Unit Tests) the number of tests is shown per file, e.g.

And when it doesn’t “link to a resource”, it looks like this in the UI:

When I add these two types of unit tests to a single analysis, only the ones that “link to a resource” show up in the UI (189). But I want 189 + 1101 in the UI. (Mainly I want to see the possible failures as a number for the quality gate.)

We have a multi-language project (C++, Python, Java).

Hi,

Your screenshots are of a directory/file-based presentation. I’m guessing we’re looking at the Measures page / Unit Tests here?

The page is designed to help you understand how the measures relate to the project structure. E.g. how much complexity is in this module? Which directories / files aren’t covered by tests? And so on.

You’re not seeing the count here because there’s nothing to “hang” it from.

 
HTH,
Ann

Yes, I know that, but the problem is that I cannot see the “unlinked” unit tests anywhere once I add tests that “link” to a file to the analysis. I would like to see the total number of unit tests somewhere (especially failed/errored tests):

The problem is that those 1000+ unit test metrics go missing if I add tests that actually “link” to a file.

Let me try to explain it more clearly:

I send Java unit test results to SonarQube with sonar.junit.reportPaths (these point to an actual file through the XML report, i.e. they “link” to a resource).

It will look like this in the UI:

Then I also wanted to send Python unit test results to SonarQube (these do NOT “link” to a file inside the XML report) with sonar.python.xunit.reportPath and sonar.python.xunit.skipDetails=true.

It will look like this in the UI (Without Java unit tests enabled):

But if I have both enabled (Java unit tests and Python unit tests) in the same analysis it will look like this in the UI:

→ Python unit test results go missing.

I tested manually adding the classname to the Python unit test XML file (so that it points to an actual file), and then it showed up in the UI together with the Java unit tests.
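For reference, that manual edit can be scripted. This is only a minimal sketch, assuming a pytest-style xunit report; the report path and module name are placeholders, and the real test-case-to-module mapping would depend on the project layout:

```python
# Minimal sketch: fill in a classname on each <testcase> of an xunit report
# so SonarQube can link the result to an actual test file.
# REPORT and MODULE are placeholders for this example.
import xml.etree.ElementTree as ET

REPORT = "reports/pytest-xunit.xml"  # placeholder path to the xunit report
MODULE = "tests.test_example"        # placeholder dotted path of a real test module

tree = ET.parse(REPORT)
for testcase in tree.iter("testcase"):
    # Only set a classname where the report left it empty.
    if not testcase.get("classname"):
        testcase.set("classname", MODULE)

tree.write(REPORT, encoding="utf-8", xml_declaration=True)
```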

Hi,

Thanks for the detail. I think I understand now.

With Python alone, and per the docs:

> sonar.python.xunit.skipDetails=true to collect only project-level details.

you get those project-level details.

But with a mix of project-level and file-level (from Java), the project-level numbers are discarded in the math.

I’m going to flag this for the team.

 
Ann


This one seems to be similar:

Hi @jola and @andi

This is a known limitation. We originally intended to have test metrics only on test files, with those metrics then aggregated at the project level.
For example, if you have 5 tests on fileA, and 5 tests on fileB, we will aggregate and store a metric of 10 tests on the project.

However, we also started to support test frameworks that cannot link a test to a specific file. This was implemented by storing the test metrics directly at the project level.

This means the two ways conflict. You can use one or the other.

We could possibly find a cleaner implementation, but I prefer to be transparent: it is very unlikely we will work on this topic in the short term.

My advice is to choose the most valuable test report and use only that one.
