Why don't Unit Test failures show up in short-lived branches?

Must-share information (formatted with Markdown):

  • which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension)
  • what are you trying to achieve
  • what have you tried so far to achieve this

I’m using SonarQube v7.9.2, Scanner v4.2.0.1873, and the SonarGo plugin v1.6.0 (build 719).

I want to see the Unit Tests, Unit Test Failures, Skipped, Duration, etc. measures on short-lived branches. I’ve confirmed that my Go report is being ingested properly, because all of the test measures show up on my main branch and on any long-lived branches. Short-lived branches (such as those created from a PR), however, are missing these measures even though they run the same build and scan process.
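For context, the relevant pieces of our build look roughly like this (paths and layout are illustrative, and the property names are as I understand them from the SonarGo docs):

```shell
# Produce the reports SonarGo ingests: JSON test output plus a coverage profile.
go test -json -coverprofile=coverage.out ./... > test-report.json

# Point the scanner at them (these can also live in sonar-project.properties).
sonar-scanner \
  -Dsonar.go.tests.reportPaths=test-report.json \
  -Dsonar.go.coverage.reportPaths=coverage.out
```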

I see a new feature request here (Unit Test Conditions on New Code Quality Gate), but I haven’t seen anything in the docs stating that these measures shouldn’t show up on short-lived branches, or any explanation as to why they aren’t.

Is there any confirmation that this behavior is by design, or an explanation of why this isn’t working today?

Hi,

In SonarQube 8.1 we dropped the Short-lived branch concept. Now they’re all just branches, with all the same measures. You might want to upgrade.

 
Ann

Yes, I read about that, and it’s on my to-do list to do some testing with it, though I have some concerns about upgrading after reading this: Sonarqube 8.1 branch analysis, definition of new ?!

But since 7.9.2 is the LTS release, I was still hoping to get an answer to my question: is this behavior by design and intentional? Is it described in any of the online docs?

Hi,

Short-lived branches and PRs only report coverage and duplications, IIRC.

 
Ann

I have to say, the more I look at this the more it seems like a bug in 7.9.2 to me. I discovered that even on a short-lived branch, if I go to a specific file where there are unit tests and click to show the measures on that file, the unit test execution measures are there; they just aren’t showing up in the measures on the main screen. Here is the best screenshot I could grab of this:

Hi,

Again, short-lived branches have gone away in 8.1. You need to upgrade so you can get all your metrics on all your branches.

 
Ann

I am interested in upgrading but have not had a chance to investigate the impact of this issue in our environment: Sonarqube 8.1 branch analysis, definition of new ?!

In any case, if 7.9.2 is the LTS, shouldn’t reported bugs be addressed in 7.9?

This issue occurred only in SonarQube 8.1, with the redesign of the branching feature from short-lived / long-lived branches to “a branch is a branch” (with full dashboard and metrics). So it’s not a bug in the current 7.9.2 LTS version. Unfortunately, it’s what keeps me from updating to SonarQube 8.1.

Hi,

We don’t consider this a bug; it’s working as designed.

 
Ann

That’s why I was asking for a reference to the docs indicating this was the intended behavior of short-lived branches. It doesn’t make sense to me that the unit test success and failure info would be available on the short-lived branch (as depicted in the screenshot I offered) and then just arbitrarily not displayed in the branch measures or used in the quality gate. I’ve searched everywhere I can think of, and I haven’t found anything stating that this was the intended behavior.

Hi,

There’s nothing in the docs about this. Our oversight, I guess. In case you’re interested in the history/why of that: when short-lived branches were first introduced, they only reported issues. It was a Really Big Deal when we added Coverage and Duplications measures (and only those measures). That’s why I keep saying that if you want all measures, you need to upgrade.

 
Ann

Even after upgrading to v8.1 I still see the issue; instead of being on short-lived branches, it now appears on pull requests. Here is a screenshot of the measures on a pull request from v8.1 where something like 40 unit tests were run, but they do not show up in the pull request’s measures. The issue just moved to the new structure; it didn’t get resolved. There’s still no obvious way to ensure code doesn’t get merged to master if unit tests are failing, unless I am missing something.

Hi,

Have you reanalyzed since the upgrade? And are you sure you’re feeding that data?

 
Ann

Yes, 100%. If I scan the code in “branch mode”, passing it the following, then yes, I see the test metrics just as I would have in v7.9 using long-lived branches:

sonar.branch.name=feature/SAS-760

If, however, I scan the code in “pull request” mode, passing it the following, then it looks just like the v7.9 short-lived branches in the screenshot above: I see coverage information on new code, but no measures for the unit test successes/failures/runtime.

sonar.pullrequest.branch=feature/SAS-760
sonar.pullrequest.key=18

The screenshot above was taken from a PR scan that was triggered this afternoon, whereas the upgrade to v8.1 was done on Saturday night.
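In case it helps, here’s roughly how our job decides which mode to use (variable names are illustrative; sonar.pullrequest.base is shown explicitly here, though it defaults to the main branch when omitted):

```shell
# Pick the analysis mode based on whether a PR id is present (illustrative).
if [ -n "$PR_ID" ]; then
  # PR mode: coverage on new code appears, but no unit test execution measures.
  sonar-scanner \
    -Dsonar.pullrequest.branch="$GIT_BRANCH" \
    -Dsonar.pullrequest.key="$PR_ID" \
    -Dsonar.pullrequest.base=master
else
  # Branch mode: all the test measures appear, as on 7.9 long-lived branches.
  sonar-scanner -Dsonar.branch.name="$GIT_BRANCH"
fi
```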

Hi,

It’s expected that you’re still not going to see those metrics “in PR mode”. I advised upgrading because the concern was with short-lived branches.

 
Ann

Right, so the old short-lived branches basically work like PRs now, and the old long-lived branches work like “branches”.

At the end of the day, though, regardless of what you want to call things: if a software engineer submits a pull request, the tests run, and one or more of the tests fail, would you want to allow that PR to be merged to master? How would you make a rule to prevent that? What difference does the code coverage make if the tests aren’t even successful?

Hi,

There are no short-lived branches anymore. There are branches and PRs. The way they work hasn’t changed.

If the tests fail, then why is your CI/CD advancing to analysis? If you want to block everything on a failing analysis, then fail your job.

 
Ann

Because the analysis tool gives the software engineer a single place to look to review what didn’t pass the code quality gate. The Unit Test Failures measure shows the software engineer specifically which tests failed during the build, inside the analysis tool. Without that, the software engineer must also review the log data from the failed build to see that information, which doesn’t make sense once a code analysis tool has been implemented. It also gives the management team that oversees the software engineers a single place to establish the rules by which code is allowed to be merged. Those are the reasons it is preferable to capture unit test failures in analysis.

More to the point, there is a measure for Unit Test Failures, and that measure is configurable in the Quality Gate. This appears to clearly establish an intent to prevent code with unit test failures from passing the quality gate and being merged to master. Isn’t that the point of having that measure available for configuration in the quality gate?

Hi,

IMO a unit test failure is a mechanical failure. Do you expect SonarQube to report to the software engineer that his code didn’t compile? No. The job fails long before you get to that point because you’ve encountered what I’ll call a mechanical prerequisite to quality. Ditto unit test success.

Fair point. We’ve actually tried twice to drop the unit test execution metrics, and both times we eventually caved and brought them back. There was also an initiative last year (my memory is fuzzy) to clean up what you can put in a Quality Gate, and once again we went with a minimal approach just to avoid upsetting people’s apple carts. But just because you can doesn’t mean we think it’s a good idea.

One more point: I’m not sure you’ve noticed, but the Quality Gate conditions enforced on PRs are the subset of the gate’s conditions that are “on New Code”. There are no “on New Code” test execution metrics.

 
HTH,
Ann

100% agree that a compilation failure is a mechanical one. The reason I consider unit test failures differently is that we’ve seen engineers write tests that run successfully on their PCs but fail in the build environment (for example, by referencing statically assigned resources from the dev environment that were inaccessible from the build environment). That points to a poorly written test, to be sure, but since tests can fail for any number of reasons, it is easier to manage the process when we can point the engineer to the quality gate to review everything that is failing in one place. It is sub-optimal and counterintuitive (IMO) to look at a PR in SCM, see a big green SUCCESS box next to the quality gate, then discover somewhere else that a test is failing, and have to look somewhere else still to find out which one.

Side note: we do fail the build if tests fail, but we do it at the end of the job, after analysis is complete, so that we still get the analysis info. In our experience it commonly has to be called to the engineer’s attention, because their focus is drawn primarily to the SUCCESS on the quality gate, which is a focus we try to encourage.
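Concretely, our job looks something like the sketch below (variable names are illustrative, and the scanner picks up the test/coverage report paths from our sonar-project.properties):

```shell
#!/bin/sh
# Run the tests but don't abort yet; save the exit status for later.
go test -json ./... > test-report.json
TEST_STATUS=$?

# Analyze regardless of the test outcome so the report still lands in SonarQube.
sonar-scanner \
  -Dsonar.pullrequest.branch="$GIT_BRANCH" \
  -Dsonar.pullrequest.key="$PR_ID"

# Only now fail the job if any test failed.
exit $TEST_STATUS
```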

This is an interesting distinction that I had not considered. We had to roll back the 8.1 upgrade last night due to our inability to use the PR functionality without some additional changes to our build process (unrelated to unit test metrics), in combination with challenges properly capturing “new code” in branch mode on 8.1. Once we’ve changed our build process to capture PR IDs and can reconsider the 8.1 upgrade, we’ll have to factor this into deciding which method makes the most sense for us. So thanks for the heads-up.
