- which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension)
  - SonarQube 10.2.1.78527
  - SonarScanner 5.0.1.3006
  - build-wrapper 6.48.1 (win-x86-64), 6.49 (linux-x86)
- how is SonarQube deployed: zip, Docker, Helm
  - deployed locally
- what are you trying to achieve
  - make sense of unit test metrics
- what have you tried so far to achieve this
  - adding unit test results in cppunit format and looking at the history
After importing 100+ test cases in cppunit format, I see the passes and fails, but the trend chart for Unit Test Success (%) has a yellow triangle and says “This metric has no historical data to display”.
Isn’t Unit Test Success (%) something that gets calculated? If not, where should that success rate come from? I don’t see anything in the cppunit specification that allows for passing that data along.
This is probably a dumb question, but have you been passing test execution reports into analysis all along, or did that just start?
Also, tangentially, it looks like you’re passing a commit SHA in as your analysis sonar.projectVersion? If so… that’s not the best idea.
Every time the sonar.projectVersion string changes, we set an Event on the analysis, which exempts it from housekeeping. And of course, if you’re using a ‘previous version’ New Code definition, then it’s messing up your Quality Gate calculation too.
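For illustration, a scanner invocation that passes a release-style version instead of a SHA might look like this (the project key, version number, and output directory are made up):

```
# Hypothetical invocation: a human-meaningful version instead of the commit SHA
sonar-scanner \
  -Dsonar.projectKey=my-cpp-project \
  -Dsonar.projectVersion=1.4.0 \
  -Dsonar.cfamily.build-wrapper-output=bw-output
```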
Yes, I’ve been working on getting VectorCAST unit testing results into SonarQube for the weeks I’ve been using SQ. We have a joint customer who is expecting this, so we do the work and ask the questions.
So should the Unit Test Success (%) be calculated or passed in? If passed in, how?
I’ll update the projectVersion to not use git rev-parse HEAD. I was just sad to see the continued “Version Not provided” message and was trying to solve that. Do you have any suggestions on that?
I believe Unit Test Success (%) is calculated from your test execution reports. TBH, I’ve never looked closely at this area, but I think test execution reports reflect which tests passed/failed, and we do the math.
So assuming you’ve been passing in execution reports all this time, you should already have a history for that metric.
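For example, if the results go in through the Generic Test Execution format (the sonar.testExecutionReportPaths property), the report marks each test as passed, failed, or skipped, and the success percentage falls out of those entries. A minimal sketch, with made-up file and test names:

```xml
<testExecutions version="1">
  <file path="tests/ManagerTests.cpp">
    <!-- no child element: the test passed -->
    <testCase name="testAdd" duration="5"/>
    <testCase name="testRemove" duration="12">
      <failure message="expected 3, got 4"/>
    </testCase>
    <testCase name="testPlatformSpecific" duration="0">
      <skipped message="not run on this platform"/>
    </testCase>
  </file>
</testExecutions>
```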
If you haven’t been passing reports in, we need to back up further and look at your analysis log.
The analysis / scanner log is what’s output from the analysis command. Hopefully, the log you provide - redacted as necessary - will include that command as well.
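For a C++ project analyzed with the build wrapper, that sequence typically looks something like the following (build command, directory names, and report path are placeholders):

```
# Wrap the build so the analyzer can see the compile commands
build-wrapper-linux-x86-64 --out-dir bw-output make clean all

# Run the analysis; its console output is the scanner log to share
sonar-scanner \
  -Dsonar.cfamily.build-wrapper-output=bw-output \
  -Dsonar.testExecutionReportPaths=test-results.xml
```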