Background task example: AXUiDNT1oV2Q5xcOwqiF, yet this completed in 37s. Just to reiterate in case I wasn't clear before: the Prepare Analysis and Run Code Analysis steps are not my concern; my concern is that the MSBuild task is 10x slower when SonarCloud analysis is enabled.
And to be clear: we install a targets file during the Prepare step to hook into the build step, so some overhead is definitely expected, but a 10x slowdown is not.
We have 190 projects in the solution right now, and I suspect our build time has increased as the number of projects has grown (see the attached image): it was ~250k six months ago and is ~2250k now.
the SonarCloud scanner scans every project = 190 analyses to perform
each analysis takes about 1 minute, which is quite fast (though it can be improved by reducing the scope to the minimum) = 190 minutes of code analysis. Even at 30 seconds per project, the build time would be multiplied by 8
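The back-of-the-envelope numbers above can be checked directly; a quick sketch (variable names are illustrative, not from the thread):

```shell
# 190 projects, each adding ~1 minute of analysis to the build
projects=190
per_project_s=60
echo "At 60s/project: $(( projects * per_project_s / 60 )) extra minutes"  # 190 minutes
# Even a reduced 30-second per-project step adds substantial time:
echo "At 30s/project: $(( projects * 30 / 60 )) extra minutes"             # 95 minutes
```

This is why the overhead grows roughly linearly with the project count rather than staying a fixed cost.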
To validate the above theory, I changed my build step to include only one additional project, which has no references to other projects, and it took just 32s.
Having said that, what would you recommend, and what should the minimum accepted time for scanning a project be?
Hi @alexvaccaro. It's not clear to me from this thread whether you are on the same team as @Schumi or have a different topic altogether.
RE: msbuild, like @mickaelcaro said, most of our analysis runs during msbuild.
The analysis is done in two steps:
during the build - our native Roslyn analyzers run for most of the rules (~250 in the default QP); UCFG files are also created for the injection vulnerability analysis, which is done in the SonarCloudAnalyze step
during the SonarCloudAnalyze step - our security engine runs over the UCFG files to find vulnerabilities - we currently have 12 rules that detect injection vulnerabilities
The msbuild debug logs don't help that much; we'd need the logs from `msbuild /v:d /p:reportanalyzer=true > build.log`. At the end of each project's compilation, it prints a list of each of our rules and how much time it took. This lets us detect outlier rules, and also projects that are outliers. We can then advise based on this information.
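Once such a log exists, a quick way to surface the slowest Sonar rules might look like the sketch below. Note this assumes each timing line begins with an elapsed-seconds column followed by the analyzer/rule name; the exact `reportanalyzer` output layout varies between MSBuild/Roslyn versions, so treat the filtering keys as assumptions.

```shell
# build.log was produced earlier with:
#   msbuild /v:d /p:reportanalyzer=true > build.log
# Pull out Sonar per-rule timing lines and rank them by elapsed seconds
# (numeric sort on the leading seconds column), keeping the top 20.
grep "SonarAnalyzer" build.log | sort -rn | head -20
```

The top of that list is what's worth pasting into a support thread: it identifies which rules, on which projects, dominate the build-time overhead.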
If you want, we can continue this topic in a private thread, as we did with @Schumi. Or we can continue here.
Hi @Andrei_Epure, thank you for getting back to me. I am not with Schumi.
I have been busy experimenting with Azure VM options for my build agents and have found some performance gains, which I will be testing in the coming days; I will try your suggestion as well.
I would prefer a private thread so I can share my logs more freely.
Keep to one subject per topic
Most of the time, your question will deserve a new topic, containing information specific to your situation. If many topics should be consolidated into one, we’ll take care of it! In addition…
Do not bump old posts or try to tag on with a possibly related issue.
The analysis can behave differently for each project in terms of performance, which is why we need each problem to be in its own topic. We have a backlog of performance improvements (and we are working through them regularly). We welcome all feedback so we can prioritize what's most impactful and find unknown issues.
I am closing this topic as we’ve addressed the issues reported in it.
Please open new topics, and we can address your specific problems and provide tailored suggestions.
For users of SonarCloud and SonarQube 9.1+: you now have the ability to run the analysis in concurrent execution mode, which can significantly reduce your analysis time. Just set the following environment variable in the begin step:
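The announcement above references an environment variable that is not reproduced in this thread. Based on Sonar's concurrent-execution announcement, the variable is presumably `SONAR_DOTNET_ENABLE_CONCURRENT_EXECUTION`; verify the exact name against the current SonarScanner for .NET documentation before relying on it.

```shell
# Assumed variable name (from Sonar's announcement; check the current docs).
# Set this on the build agent before the scanner's "begin" step runs:
export SONAR_DOTNET_ENABLE_CONCURRENT_EXECUTION=true
```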
We would love to hear whether you have tried this; if you have, please post a comment on this thread and tell us how you got on.