We’ve recently integrated SonarCloud into our Azure DevOps pipeline. It runs successfully and publishes the results to SonarCloud without issue. The only problem is that it has taken our application build time from 5 minutes to around 40.
I’ve followed the guidelines from a post on this forum on how to diagnose these issues. The slowest part of the process I found in the logs was the TypeScript/JavaScript analysis, which takes around 4 minutes, but I don’t want to exclude that analysis from the checks.
Our project is around 250k lines of code. Is this length of time expected for the analysis, or should it be running a lot quicker?
Any help or suggestions on how to improve this would be very much appreciated.
A good first step is to parse out all the steps in your analysis that might be taking time. Maybe you’ve already done this, as you mentioned TS/JS analysis took 4 minutes, but that certainly doesn’t explain 40 minutes!
It would be great if you could share the results here. They won’t contain any file names, just sensor names.
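In case it helps, the per-sensor timings can usually be pulled straight out of the end-step log. A minimal sketch, assuming the log follows the usual `Sensor <name> (done) | time=<n>ms` format; the sample log contents below are made up for illustration:

```shell
# Made-up sample of the usual end-step log format; replace scanner.log
# with your real end-step ("Run Code Analysis") log file.
cat > scanner.log <<'EOF'
INFO: Sensor JavaScript analysis [javascript] (done) | time=240000ms
INFO: Sensor C# Properties [csharp] (done) | time=1200ms
EOF

# List sensors by execution time, slowest first.
grep -E '\(done\) \| time=[0-9]+ms' scanner.log \
  | sed -E 's/.*Sensor (.+) \(done\) \| time=([0-9]+)ms/\2 \1/' \
  | sort -rn
```

Sorting the extracted timings numerically makes the dominant sensor obvious at a glance.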
While I was gathering this information, I realised that the analysis itself doesn’t actually take too long; it’s the build that has seen the increase in time:
Are there any metrics I could grab from the build that would be of use?
The .NET analysis primarily occurs during the build phase. If you’re experiencing unusually long analysis times, it’s worth investigating further. We have a comprehensive guide on this topic:
Have you checked out this guide? It can help you identify which projects or rules are contributing most to the analysis duration. Is one standing out from the rest?
As a general rule, the analysis shouldn’t take longer than the build itself, so if it does, it’s a sign there might be a performance issue worth addressing.
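For reference, the per-analyzer timing report that this kind of investigation relies on can be produced from the command line. A rough sketch, assuming a dotnet-based build; the solution path is a placeholder:

```shell
# YourSolution.sln is a placeholder; substitute your own solution.
# -p:ReportAnalyzer=true asks Roslyn to record per-analyzer execution times,
# and a high verbosity level is needed for the report to appear in the log.
dotnet build YourSolution.sln -p:ReportAnalyzer=true -v:diag > build.log 2>&1

# Each project's section of the log then contains an analyzer timing
# breakdown; this finds where those summaries start.
grep -n "Total analyzer execution time" build.log
```

The timing breakdown is what lets you see whether one rule or one project dominates the overall analysis time.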
Apologies for the delayed reply; I had to switch the pipeline over to a self-hosted agent, as it was taking too long to run with the diagnostics turned on.
Is there any other way to get the diagnostic information out of the build? I let it run for two days after adding -p:reportanalyzer=true -v:diag, and it still didn’t complete.
Two days sounds really odd, as those options should only produce more logs.
Instead of -v:diag, can you try to publish a binlog and either share the file (we can provide a private way to share it), or inspect it with MSBuild Structured LogViewer to find the analysis times?
The logviewer describes what parameter you need, and how to customize the output binlog file path if needed.
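A binlog can be captured with a single MSBuild switch. A sketch, again assuming a dotnet-based build, with placeholder paths:

```shell
# -bl writes a binary log; the file name after the colon is optional
# (it defaults to msbuild.binlog). YourSolution.sln is a placeholder.
dotnet build YourSolution.sln -p:ReportAnalyzer=true -bl:analysis.binlog

# Open analysis.binlog in the MSBuild Structured Log Viewer and search
# for "analyzer" to find the per-analyzer execution times.
```

Unlike -v:diag, the binlog keeps the build output compact on the console while still recording everything for later inspection.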
I’ve extracted the binlog from the build, and it shows that the build itself isn’t actually taking that long, even though the build step in our pipeline is. I’ve never read one of these before, so I’m happy to send it over if you can provide a way to share it.
The other big difference I can see between running our build with and without SonarCloud is the size of the logs in Azure DevOps. I’m not sure whether that has an effect on anything, but the log grows from around 8k lines to around 200k.
I took a look at the binlog, and there’s something off with it. The binlog contains logs for only one of your projects (Cxxxxx.Wxxxxt.Persistence.csproj) and doesn’t contain the analyzer data.
Do you have -p:reportanalyzer=true together with your binlog parameters?
Given the fact that the binlog contains content from only one (possibly the last one) project, and things we’ve seen in the past, I’ll make a guess: Could it be the case that your pipeline looks like this?
Scanner for .NET begin step (a.k.a. SonarQube Prepare step in Azure DevOps)
Build csproj 1 with dedicated command
Build csproj 2 with dedicated command
Build csproj 3 with dedicated command
Build csproj … with dedicated command, and so on
Build csproj N with dedicated command
Scanner for .NET end step (Run Code Analysis from your screenshot)
That would be an unsupported scenario: it is expected that only a single SLN file is built between the begin and end steps. What happens in that case is that MSBuild, after each build, detects that every already-built project has new modifications (analysis results) and therefore rebuilds it again, including all transitively referenced projects. This causes the projects to be built an exponential number of times.
This is just a long-shot guess, as we don’t have the full logs.
Thanks Pavel. This actually led me to resolving our issue. In the build process of our pipeline we were using the wildcard **/*.csproj to build every project in our codebase, which I believe led to the situation you described above. Pointing the build at the single .sln file brought the build time back down from over an hour to 5-6 minutes. Thanks again.
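For anyone landing here with the same symptom, the shape of the fix looks roughly like this. The project paths, solution name, project key, organisation, and token are all placeholders; the begin/build/end sequence is the standard SonarScanner for .NET flow:

```shell
# Unsupported: one build command per project between begin and end.
# Each extra build re-triggers builds of the already-analysed projects.
#   dotnet build src/ProjectA/ProjectA.csproj
#   dotnet build src/ProjectB/ProjectB.csproj
#   ...

# Supported: a single solution build between the scanner's begin and end steps.
dotnet sonarscanner begin /k:"project-key" /o:"organisation" /d:sonar.token="$SONAR_TOKEN"
dotnet build MySolution.sln
dotnet sonarscanner end /d:sonar.token="$SONAR_TOKEN"
```

In Azure DevOps terms, this means the build task between the Prepare and Run Code Analysis steps should point at the .sln file rather than a **/*.csproj wildcard.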