We are using SonarCloud in Azure DevOps.
Our product is written in C#: around 149 projects, for a total of 728K lines of code.
We run the SonarCloud analysis for PRs and for nightly builds of our master branch.
We are running paid Microsoft-hosted agents (currently the windows-2022 image).
We recently got problems with the analysis failing.
At first it failed with a Java error about a too-small heap.
This was fixed by specifying SONAR_SCANNER_OPTS.
We had to tweak the value of the -Xmx option a bit, but had success with 4 GB.
We also got errors indicating we should set the ReservedCodeCacheSize option, so we did.
Currently we use these settings in our pipeline YAML:

```yaml
- name: SONAR_SCANNER_OPTS
  value: "-Xmx4096m -XX:ReservedCodeCacheSize=128m"
```
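For context, here is a minimal sketch of how that lands in our pipeline (the task inputs below are placeholders, not our real values; we use the classic SonarCloudPrepare task):

```yaml
# Sketch only: the service connection, organization, and project key are placeholders.
variables:
  - name: SONAR_SCANNER_OPTS
    value: "-Xmx4096m -XX:ReservedCodeCacheSize=128m"

steps:
  - task: SonarCloudPrepare@1
    inputs:
      SonarCloud: "SonarCloudConnection"   # placeholder service connection
      organization: "our-org"              # placeholder organization key
      scannerMode: "MSBuild"
      projectKey: "our-product"            # placeholder project key
```

The variable is picked up as an environment variable by the Java-based scanner when the analysis step runs.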
This worked for the PR builds, but the master build now fails with the following:

```
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007e0400000, 106954752, 0) failed; error='The paging file is too small for this operation to complete' (DOS error/errno=1455)
```
So it appears the page file is in fact too small and cannot grow.
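For what it's worth, on a self-hosted Windows agent (where the VM is under our control) the page file could be enlarged with a step like the sketch below; the sizes are assumptions, and the change needs admin rights plus a reboot to take effect, so it is no help on Microsoft-hosted agents:

```yaml
# Sketch for *self-hosted* Windows agents only: resizing the page file via WMI
# requires admin rights and a reboot, so it cannot be applied to a Microsoft-hosted
# agent. The sizes (in MB) are assumptions to be tuned per VM.
- powershell: |
    $cs = Get-WmiObject Win32_ComputerSystem -EnableAllPrivileges
    $cs.AutomaticManagedPagefile = $false        # stop Windows from managing the size
    $cs.Put() | Out-Null
    $pf = Get-WmiObject Win32_PageFileSetting
    if (-not $pf) {
      # Windows was managing the page file, so no explicit setting exists yet
      $pf = Set-WmiInstance -Class Win32_PageFileSetting -Arguments @{ Name = 'C:\pagefile.sys' }
    }
    $pf.InitialSize = 8192
    $pf.MaximumSize = 8192
    $pf.Put() | Out-Null
  displayName: 'Enlarge Windows page file (self-hosted agents only)'
```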
I find it very strange that SonarCloud is advertised as working on Azure DevOps, yet shows such problems. The Microsoft-hosted VMs have 7 GB RAM and 14 GB of disk; these limitations are documented.
Is SonarCloud simply not fit to run on stock hosted agents once a certain SLOC threshold is passed?
Any suggestions on how to tweak the settings so that we can still get a successful analysis run?
All of this is in one particular SonarCloud project where we use Sonar for analysis.
All projects in our solution are C#, but some are Web projects that contain some JavaScript:

- There is a total of 149 (C#) projects.
- Around 18,200 C# files; total lines of code is 728K.
- Around 2,700 JavaScript files; total lines of code probably around 270K (100 lines per file on average).
Do I correctly understand that your analysis is trying to deal with nearly a million (~998k) lines of code? If so, I'm proud that it got as far as it did. Yes, you'll need a beefier build agent, regardless of which DevOps Platform - Azure DevOps or anything else - you're working in.
My first thought is to split this up into at least two analyses / SonarCloud "projects" (yes, the way Sonar uses the word "project" versus the way Microsoft uses it is confusing). My first try would put all the C# code in one analysis / SC project, and the JavaScript in another; see the sketch below.
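Roughly (assuming the classic SonarCloudPrepare task; the service connection, organization, and project keys below are placeholders, not anything from your setup), the split could look like this: one pipeline analyzes the C# and excludes the JavaScript, and a second analyzes the JavaScript with the CLI scanner.

```yaml
# Pipeline 1: the C# analysis, with the JavaScript excluded.
- task: SonarCloudPrepare@1
  inputs:
    SonarCloud: "SonarCloudConnection"     # placeholder service connection
    organization: "my-org"                 # placeholder organization key
    scannerMode: "MSBuild"
    projectKey: "my-org_product-csharp"    # placeholder SonarCloud project key
    extraProperties: |
      sonar.exclusions=**/*.js

# Pipeline 2: the JavaScript analysis, via the CLI scanner.
- task: SonarCloudPrepare@1
  inputs:
    SonarCloud: "SonarCloudConnection"
    organization: "my-org"
    scannerMode: "CLI"
    configMode: "manual"
    cliProjectKey: "my-org_product-js"     # placeholder SonarCloud project key
    cliSources: "."
    extraProperties: |
      sonar.inclusions=**/*.js
```

Each analysis then only has to hold one language's worth of files in memory, which should keep the scanner's JVM comfortably within what a 7 GB host can give it.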
Indeed, we are analyzing about 1M LOC in our integration with SonarCloud.
But this was already known to SonarCloud when we opened our subscription, which allows processing up to 1M LOC. In the SonarCloud marketing materials I see "Seamless integration with cloud DevOps Platform".
Meanwhile, in the documentation I could not find any information that analysis in Azure DevOps is limited in the number of LOC. I definitely understand that the more LOC we have, the slower the analysis will be, but a crash is not what we expected.
Moreover, it worked for quite some time, and then started crashing without any noticeable change in the code or the pipeline.
Let's be clear: it's not unfit. Put a beefier build agent under this and it will work just fine.
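For instance (the pool name is a placeholder), pointing the pipeline at a self-hosted pool whose agents have more memory is a one-line change:

```yaml
pool:
  name: "SelfHostedBuildPool"   # placeholder: a pool whose Windows agents have more than 7 GB RAM
```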
Okay… that could be due to improvements in our detection requiring more resources during the analysis. Are you able to pinpoint when it started failing?