Ehm… I’m not sure you can anymore. We stripped some of that execution data out several years ago. If it’s still available anywhere, I’d start on the Measures page, under the Coverage → Tests → Errors listings.
My team and I are working with Azure DevOps CI/CD pipelines for a C++ project, and it would have been really helpful to see unit test results in SonarQube.
We do have error and success metrics, but for more details (which tests failed, why, etc.) we are forced to read the pipeline logs. Since we have a lot of small repositories, it is difficult to keep them all checked out locally.
We are also wondering why we are not able to see the details anymore, as it was possible in version 6.7 (at least in that version).
We were expecting to find details like those shown in the screenshot at the end of the “Seeing Tests” page of the SonarQube 6.7 documentation.
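For context, our pipelines look roughly like this (a simplified sketch rather than our exact YAML; the task versions, service connection name, project key, and paths are placeholders):

```yaml
# Simplified sketch of one of our pipelines (placeholders throughout).
steps:
  - task: SonarQubePrepare@5
    inputs:
      SonarQube: 'our-sonarqube-connection'   # service connection (placeholder)
      scannerMode: 'CLI'
      configMode: 'manual'
      cliProjectKey: 'our-project'            # placeholder
      cliSources: '.'
      extraProperties: |
        sonar.cfamily.build-wrapper-output=bw-output

  # Build under the build wrapper so the C++ analyzer sees the compile commands.
  - script: build-wrapper-linux-x86-64 --out-dir bw-output cmake --build build
    displayName: Build

  # A non-zero exit code here (i.e. failing tests) stops the pipeline,
  # and the details end up only in this step's log, which is our problem.
  - script: ctest --test-dir build --output-on-failure
    displayName: Unit tests

  - task: SonarQubeAnalyze@5
  - task: SonarQubePublish@5
    inputs:
      pollingTimeoutSec: '300'
```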
SonarQube 6.7 was a long time ago. A lot has changed since then.
I’ll be honest & say that as an organization we’ve never really believed in those test metrics. Why? Because we feel that if you’ve got failing tests, the pipeline should stop then & there. You shouldn’t just count the failures & move on. That’s why we’ve slowly but surely been moving away from these metrics.
We could debate this for a while, but I will try not to.
I just want to say that this statement may not be so clear-cut for everyone (and maybe not even for us here).
I have worked on projects where clients asked for feature evolutions knowing full well that they would break the algorithm for other cases (leaving time to properly fix/change those cases with their users). Keeping those failing tests visible helped us all resolve such cases through debate and design.
And if I may be a bit more incisive following your statement: why keep error metrics at all? If we expect to see only “success” in SQ by stopping pipelines beforehand, there is no need for these metrics
(I mean the numeric metrics: counts of “errors” and “skipped”, percentage of success, etc.).
They’re still supported (analysis doesn’t error out), but fair point. I’ll flag this for the docs team.
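In the meantime, if you want the counts to keep flowing, the Generic Test Execution import is the supported path: point `sonar.testExecutionReportPaths` at an XML file shaped like this (a minimal sketch; the file path, test names, and messages are placeholders, durations are in milliseconds, and the `path` must be a file indexed as a test file):

```xml
<!-- e.g. sonar.testExecutionReportPaths=build/sonar-test-report.xml -->
<testExecutions version="1">
  <file path="tests/matrix_test.cpp">
    <testCase name="multiplies_square_matrices" duration="12"/>
    <testCase name="rejects_mismatched_dimensions" duration="3">
      <failure message="expected dimension_error was not thrown">optional stack trace here</failure>
    </testCase>
    <testCase name="handles_empty_matrix" duration="0">
      <skipped message="disabled on this platform"/>
    </testCase>
  </file>
</testExecutions>
```

Just don’t expect the web UI to show much beyond the aggregate numbers, which is the gap you’ve hit.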
Yes, please.
Because I understand that they are still supported, but there is no mention that they are not used or not visible.
And with no indication otherwise, we would naturally expect them to show up. Add to that the fact that they did show up before, and it is a level more confusing.
In our case, sometimes the tests only fail in CI and are very hard to reproduce locally. We want to store the test results in order to troubleshoot them.
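For now we publish the detailed results to the pipeline’s own Tests tab so they are at least stored somewhere queryable, roughly like this (a sketch; the format and report glob depend on your test runner):

```yaml
# Sketch: keep the detailed results in Azure DevOps itself,
# even when the test step has already failed the job.
- task: PublishTestResults@2
  condition: succeededOrFailed()   # publish even when tests failed
  inputs:
    testResultsFormat: 'CTest'     # or JUnit/XUnit, depending on the runner
    testResultsFiles: 'build/Testing/**/Test.xml'
    failTaskOnFailedTests: false   # the test step already failed the job
```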