I am trying to run test coverage as part of a PR check and it is slow; the average time is 25 min.
We are using:
SonarQube Enterprise Edition Version 9.9 (build 65466)
dotnet-sonarscanner (version 6.2.0)
The code is C#.
If I run:
dotnet build ProjA
dotnet build ProjB
dotnet build ProjC
dotnet sonarscanner begin
dotnet test TestProjA
dotnet test TestProjB
dotnet test TestProjC
dotnet test TestProjD
dotnet sonarscanner end
The build part takes under 3 min, the test part around 5 min, and sonarscanner end another 5; with the other things I have to install (dotnet, Java, and so on) it takes around 15 min in total. But the problem with this approach is that not all lines are reported to Sonar, so coverage is wrong: I should have around 190k lines reported, but this way I only get 113k.
If I instead run begin before the builds:
dotnet sonarscanner begin
dotnet build ProjA
dotnet build ProjB
dotnet build ProjC
dotnet test TestProjA
dotnet test TestProjB
dotnet test TestProjC
dotnet test TestProjD
dotnet sonarscanner end
The build part takes over 7 min, the test part around 5-6 min, and sonarscanner end another 6, so in the end it takes around 23 min in total. Even the previous 15 min is long; 23 min is unsustainable, because code gets merged in the meantime, the branch then needs to be updated, and all the checks start again.
Also, running the test part outside of the sonarscanner only takes about 3.5 min.
It’s not shocking that you get a strange line count if you run begin after the build is complete. In fact, it’s surprising that you’re not complaining about more than just a strange line count. begin is supposed to run at… the beginning, because it initiates analysis, which actually runs during the build.
That said, I’m not sure you’re going to get what you expect from that approach either.
I think (my memory is fuzzy on this, and I’m not finding it explicitly in the docs) that you’re going to be better off with this:
dotnet sonarscanner begin
dotnet build ProjA
dotnet test TestProjA
dotnet sonarscanner end
dotnet sonarscanner begin
dotnet build ProjB
dotnet test TestProjB
dotnet sonarscanner end
…etc.
That is more work overall, I suppose, but it will give you faster results for ProjA, at least. Without TestProjD, I would lean hard into the recommendation to split this out, and further recommend that you parallelize it across multiple build agents for much faster results across the board. Maybe that’s still an option?
The issue is that I can’t run them that way. ProjB depends on ProjA, and ProjC depends on both ProjA and ProjB. The test projects then test different functionality across projects A to C, so I need to build all the projects in dependency order and then run the tests.
Unfortunately, I doubt there’s anything you can do to speed this up.
For other languages, excluding parts of the code from analysis might help, but .NET still analyzes everything and uses exclusions to filter what’s reported.
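For reference, this is how exclusions would be passed on begin if you wanted to see whether trimming the report helps at all; the project key and path patterns below are placeholders, so treat this as a sketch rather than settings for your repository:
# Sketch only: sonar.exclusions removes files from the analysis report,
# sonar.coverage.exclusions removes them from the coverage figure.
# The key and glob patterns are hypothetical.
dotnet sonarscanner begin /k:"my-project-key" /d:sonar.exclusions="**/Generated/**" /d:sonar.coverage.exclusions="**/Migrations/**"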
On a side note, all of your build and test operations take place in the same directory, right? If so, could you try something like:
dotnet sonarscanner begin
dotnet build solution.sln
dotnet test solution.sln --no-build
dotnet sonarscanner end
This might shave off a bit of time if some rebuild is currently happening during the test phase.
You might also want to experiment with doing just
dotnet sonarscanner begin
dotnet test solution.sln
dotnet sonarscanner end
Normally, dotnet test builds your solution before testing, so the analysis still runs during that implicit build. This might optimize things a bit; the testing itself should not be impacted by our analysis, so I believe there is something to be gained here. Note that, in that case, you do NOT want the --no-build flag.
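Since coverage is the whole point of this run, here is roughly what the coverage hand-off could look like in that flow. It is only a sketch: it assumes the test projects reference the coverlet.msbuild package and that you feed OpenCover output to Sonar, and the project key is a placeholder, so adapt it to whatever collector you actually use:
# Sketch, assuming coverlet.msbuild in each test project (hypothetical key).
# coverlet writes coverage.opencover.xml next to each test project by default,
# which the reportsPaths wildcard below picks up during the end step.
dotnet sonarscanner begin /k:"my-project-key" /d:sonar.cs.opencover.reportsPaths="**/coverage.opencover.xml"
dotnet test solution.sln /p:CollectCoverage=true /p:CoverletOutputFormat=opencover
dotnet sonarscanner end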
That being said, as Ann mentioned, there is a cost to everything. The analysis runs during the build so increased time for the build phase is to be expected. Gathering coverage during tests also has a non-trivial impact on running the tests, unfortunately (but that is out of our control and specific to your collector).
Also, you mention having to install java, dotnet etc. What type of CI are you using? Are the agents hosted or on-premises? If they are hosted, is there not an image that already has the required dependencies? If it is self-hosted, can you preinstall those things so you do not have to do it every single time?
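If you do end up on a self-hosted agent, the one-time setup is small; a sketch, with the scanner pinned to the version you mentioned (Java and the .NET SDK would come from the machine image or its package manager rather than from the workflow):
# One-time setup on a self-hosted agent, so the PR job does not reinstall tools on every run.
dotnet tool install --global dotnet-sonarscanner --version 6.2.0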
I created a solution with all the projects and tried both of your suggestions; neither improved the time. It wasn’t worse either, just not better. We are using a windows-latest runner for scalability. I tried using caching for a separate workflow (our integration tests), which runs on a self-hosted runner, and I don’t think it works; I will open a separate ticket for that. For now I just merged the workflow and didn’t make it required, hoping things will improve in the future.
Unfortunately, there will not be huge improvements regarding the time of analysis. As @ganncamp and @denis.troller already mentioned, the analysis has a cost, and your build/analysis time is not surprising to me.
You mentioned trying caching (I am assuming the build output). If you manage to make it work, you could enable concurrent analysis:
a first stage to build your application without the analysis
two dependent stages: one to run your integration tests and another to run the analysis and test coverage
The dependent stages would run in parallel (see the sketch below).
The only downside with this approach would be that you will need to build twice.
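To make the shape of that concrete, here is a rough sketch of what each stage would run, assuming the build output from the first stage is cached and restored by your CI (the cache mechanics, the project key, and the integration-test solution name are placeholders):
# Stage 1: plain build, no analysis overhead; output gets cached.
dotnet build solution.sln
# Stage 2 (runs in parallel with stage 3): integration tests against the restored output.
dotnet test IntegrationTests.sln --no-build
# Stage 3 (runs in parallel with stage 2): analysis plus coverage; note the second build inside begin/end.
dotnet sonarscanner begin /k:"my-project-key"
dotnet build solution.sln
dotnet test solution.sln --no-build
dotnet sonarscanner end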
You also mentioned having a self-hosted agent; if you can use it for all your stages, it could also be a solution to speed up the analysis.
In my experience, the provided agents are slower than our self-hosted agents.