We are running a GitLab pipeline for a .NET Framework project. We use 3 different runners with 3 executors each; any of them can pick up any job, and some test jobs can run in parallel. The pipeline stages are:
Build: compile, and archive the compilation output (PDBs, DLLs, ...) as artifacts for later reuse in the test jobs
Test (Unit x 7, Component x 3, Integration, System): run the dotCover command line with the NUnit test runner, producing a .dcvr coverage file for each test project
Sonar: merge the .dcvr files coming from the different test jobs and generate an HTML coverage report, then invoke sonar begin, re-compile, and invoke sonar end
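For context, a minimal sketch of what such a pipeline could look like in `.gitlab-ci.yml`. All names here (solution file, project key, test assemblies, artifact paths) are illustrative assumptions, not the actual configuration:

```yaml
stages:
  - build
  - test
  - sonar

build:
  stage: build
  script:
    - msbuild MySolution.sln /p:Configuration=Release   # hypothetical solution name
  artifacts:
    paths:
      - '**/bin/Release/'   # DLLs and PDBs reused by the test jobs

unit_tests:
  stage: test
  needs: [build]
  script:
    # dotCover wraps the NUnit console runner and writes a .dcvr snapshot
    - dotCover.exe cover /TargetExecutable="nunit3-console.exe"
        /TargetArguments="UnitTests.dll"
        /Output="unit.dcvr"
  artifacts:
    paths:
      - '*.dcvr'   # coverage snapshots consumed by the sonar job
```

The other test jobs (component, integration, system) would follow the same pattern, each publishing its own .dcvr snapshot as an artifact.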
Our finding is that coverage is only gathered correctly when the Sonar job uses the same working directory and runner as the build stage; otherwise it shows no coverage.
We see this as a limitation: the Sonar job would need to run not only on the same runner but also on the same executor that ran the build stage for that pipeline, which is not straightforward in GitLab and is prone to becoming a bottleneck when several pipelines run in parallel.
You can run the code coverage from different agents, as long as the paths inside the coverage reports are all relative to the scan working directory.
We run code coverage for two different target frameworks (line 158), upload the coverage reports (line 185), and then during the analysis step (line 208) we download the coverage reports (line 281) and use them in the Sonar analysis.
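Applied to the GitLab setup above, the analysis job could collect the snapshots via artifacts and run everything from the checkout root, so the paths in the generated report stay relative to the scan working directory. This is a sketch under assumed names (project key, snapshot and solution file names are hypothetical); the dotCover `merge`/`report` commands and the `sonar.cs.dotcover.reportsPaths` property are the standard way to feed dotCover results into SonarScanner for MSBuild:

```yaml
sonar:
  stage: sonar
  needs: [unit_tests, component_tests, integration_tests, system_tests]
  script:
    # Merge the per-job snapshots downloaded as artifacts into one
    - dotCover.exe merge /Source="unit.dcvr;component.dcvr;integration.dcvr;system.dcvr"
        /Output="merged.dcvr"
    # Generate the HTML report from the checkout root, so report paths
    # are relative to the directory the scanner runs in
    - dotCover.exe report /Source="merged.dcvr" /Output="coverage.html" /ReportType=HTML
    - SonarScanner.MSBuild.exe begin /k:"my-project-key"
        /d:sonar.cs.dotcover.reportsPaths="coverage.html"
    - msbuild MySolution.sln /p:Configuration=Release   # re-compile between begin and end
    - SonarScanner.MSBuild.exe end
```

The key point is that the merge, report generation, and scan all execute in the same directory on whichever runner picks up the job, so nothing in the report depends on the absolute path used at build time.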