Trying to maximally exploit Code Coverage features
I’ve only recently started using my company’s enterprise instance of SonarQube Server, plugging it into my Jenkins pipelines to analyze my .NET Framework projects.
I haven’t yet had a chance to dig much into the SonarQube analysis, as I’ve been in “check the box” mode.
Most of my struggle so far has been getting the right tooling in place just to gather the reports that SonarQube needs for test results and code coverage. In my latest iteration, I’ve been dealing with parsing errors in my NCrunch-generated NUnit and OpenCover reports.
While struggling with this, I came across the “Generic test data” help page. After poring over my generated reports and seeing how verbose they are, I’m struck by how simple the generic formats you support are.
My question is: does this limited amount of data satisfy everything SonarQube’s analysis needs for test results and code coverage? Put another way: do you parse only this small subset out of the NUnit and OpenCover reports, such that generating those reports is actually superfluous, given I already have the required data in another form?
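For concreteness, here’s roughly what the two generic formats look like (paths, test names, and line numbers here are made up for illustration). A coverage report, passed via `sonar.coverageReportPaths`:

```xml
<coverage version="1">
  <file path="Sources/Parser.cs">
    <lineToCover lineNumber="6" covered="true"/>
    <lineToCover lineNumber="7" covered="false" branchesToCover="2" coveredBranches="1"/>
  </file>
</coverage>
```

And a test execution report, passed via `sonar.testExecutionReportPaths`:

```xml
<testExecutions version="1">
  <file path="Tests/ParserTests.cs">
    <testCase name="Parses_valid_input" duration="5"/>
    <testCase name="Rejects_malformed_input" duration="3">
      <failure message="expected a ParseException">stack trace here</failure>
    </testCase>
  </file>
</testExecutions>
```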
But does your OpenCover (or any other) parser attempt to extract any more than just the minimum?
I’m trying to gauge whether there’s any value in wrestling with my tooling to generate the more comprehensive reports.
My assumption, from looking at the generic data format, is that this is all the data you use for code coverage analysis, and that if you used more, the format would expose it as additional optional fields.
I should note that the purpose of my PR is to optimize my test and coverage process, so my master branch and my PR use (NUnit) test reports and (OpenCover) coverage reports generated by different methods. Because of this, I’ve had to transform the reports from the PR method to match what was originally reported in ‘master’. I have seen SonarQube report the following errors while parsing my test report:
I understand why the above is happening. I’m just curious as to the consequences.
Ultimately, my original question was to try to discern whether my path of least resistance would be to transform to your generic formats. I’d like everything to be correct, of course, but I’d also like to be able to report as much information as possible.
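As an aside, the kind of transformation involved can be sketched in a few lines of Python. This is a sketch under the assumption of the common OpenCover layout (`File uid`/`fullPath` entries under `Module`, and `SequencePoint` elements with `vc` visit counts and `sl` start lines under a `Method` that names its file via `FileRef`), so treat it as a starting point rather than a drop-in script:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def opencover_to_generic(opencover_path, output_path):
    """Project a verbose OpenCover report down to SonarQube's generic
    coverage XML. The OpenCover element/attribute names used here are
    assumptions; verify them against your own reports."""
    root = ET.parse(opencover_path).getroot()
    line_hits = defaultdict(dict)  # source path -> {line number -> covered?}
    for module in root.iter("Module"):
        # Map each file uid to its full path.
        files = {f.get("uid"): f.get("fullPath") for f in module.iter("File")}
        for method in module.iter("Method"):
            file_ref = method.find("FileRef")
            if file_ref is None:
                continue  # e.g. compiler-generated method with no source mapping
            path = files.get(file_ref.get("uid"))
            if path is None:
                continue
            for sp in method.iter("SequencePoint"):
                line = int(sp.get("sl"))
                visited = int(sp.get("vc", "0")) > 0
                # A line counts as covered if any sequence point on it was visited.
                line_hits[path][line] = line_hits[path].get(line, False) or visited
    # Emit the generic coverage format: one <file> per source, one
    # <lineToCover> per executable line.
    coverage = ET.Element("coverage", version="1")
    for path in sorted(line_hits):
        file_el = ET.SubElement(coverage, "file", path=path)
        for line in sorted(line_hits[path]):
            ET.SubElement(file_el, "lineToCover", lineNumber=str(line),
                          covered="true" if line_hits[path][line] else "false")
    ET.ElementTree(coverage).write(output_path, encoding="utf-8",
                                   xml_declaration=True)
```

The generic format only needs a line number plus covered/uncovered (and optionally branch counts), which is exactly why it’s so much smaller than the OpenCover original.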
Is what I’m seeing above for the PR correct? Should that Coverage → Tests section only show up under the ‘master’ analysis, or have I done something wrong that’s preventing anything from displaying under the PR? I don’t want to merge this work into ‘master’ if that information is going to disappear because of something I’ve missed.
SonarQube’s PR analysis focuses on highlighting issues found in New Code. This means that details about tests, like test counts or pass/fail rates, aren’t displayed directly in the PR analysis view.
If you want to confirm that the work you’re doing on test/coverage processing helps rather than hurts, you can always analyze as a branch instead of a PR; test execution metrics are available for branches.
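If it helps, switching the same pipeline to a branch analysis is usually just a matter of passing a branch name instead of the PR parameters. With the Scanner for .NET that looks something like the following (the project key and branch name are placeholders):

```
SonarScanner.MSBuild.exe begin /k:"my-project-key" /d:sonar.branch.name="coverage-tooling-rework"
```

That way the branch gets the full set of measures, including test execution metrics, and you can compare it against ‘master’ directly.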