We are using SonarCloud in Azure DevOps.
Our product is in C#, around 149 projects, for a total of 728K lines.
We run the SonarCloud analysis for PRs and for nightly builds of our master branch.
We are running paid agents hosted by Microsoft (currently the windows-2022 image).
We recently got problems with the analysis failing.
First it failed with a Java error about a too-small heap.
This was fixed by specifying SONAR_SCANNER_OPTS.
We had to tweak the desired -Xmx value a bit, but got success with 4 GB.
Also we got errors that indicated we should set the ReservedCodeCacheSize option, so we did.
Currently we use these settings in our pipeline YAML:
- name: SONAR_SCANNER_OPTS
  value: "-Xmx4096m -XX:ReservedCodeCacheSize=128m"
This worked for the PR, but for the master build it now fails with the following:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007e0400000, 106954752, 0) failed; error='The paging file is too small for this operation to complete' (DOS error/errno=1455)
So it appears the page file is in fact too small and cannot grow.
I find it very strange that SonarCloud is advertised as working on Azure DevOps but then shows such problems. The Microsoft-hosted VMs have 7 GB RAM and 14 GB disk; these limitations are known.
Is SonarCloud simply not fit to run on stock hosts once a certain SLOC threshold is passed?
Any suggestions on how to tweak the settings so we can still obtain a successful analysis run?
Welcome to the community!
I’m sorry you’re having problems. A larger code base quite naturally takes more resources to analyze.
What languages are in the project(s) where you see this? How many files? Lines of Code?
This is in one particular SonarCloud project where we use Sonar for analysis.
There is a total of 149 (C#) projects
Around 18200 C# files, total Lines of Code is 728K
Do I correctly understand that your analysis is trying to deal with nearly a million (~998k) lines of code? If so, I’m proud that it got as far as it did. Yes, you’ll need a beefier build agent, regardless of which DevOps platform - Azure DevOps or anything else - you’re working in.
Indeed, we are analyzing about 1M LOC in our integration with SonarCloud.
But SonarCloud already knew this when we opened a subscription that allows processing up to 1M LOC. In SonarCloud marketing materials I see “Seamless integration with cloud DevOps Platform”.
Meanwhile, in the documentation I could not find any information that analysis in Azure DevOps is limited in the number of LOC. I certainly understand that the more LOC we have, the slower the analysis will be, but a crash is not what we expected.
Moreover, it worked for quite some time, and then it started crashing without any noticeable change in the code or the pipeline.
Welcome to the community!
Let’s be clear: it’s not. Put a beefier build agent under this and it will work just fine.
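If it helps, switching to a beefier agent usually just means pointing the job at a self-hosted pool instead of the Microsoft-hosted image. A minimal sketch, assuming a self-hosted Windows pool (the pool name “SelfHostedWindows” is hypothetical, not from this thread):

```yaml
# Sketch: run the analysis job on a self-hosted pool with more memory.
# "SelfHostedWindows" is a hypothetical pool name; replace with your own.
pool:
  name: 'SelfHostedWindows'

# instead of the Microsoft-hosted image:
# pool:
#   vmImage: 'windows-2022'
```

The rest of the pipeline, including the SONAR_SCANNER_OPTS variable, can stay unchanged; only the agent running the END step needs the extra RAM.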
Okay… that could be due to improvements in our detection requiring more resources during the analysis. Are you able to pinpoint when it started failing?
I see that we disabled Sonar on 22 November 2022.
But it probably started failing about a week earlier.
To check which step it fails at, we’d need the debug logs of the END step.
Share the Scanner for .NET verbose logs:
- Add /d:"sonar.verbose=true" to the dotnet sonarscanner begin command to get more detailed logs. For example:
  SonarScanner.MSBuild.exe begin /k:"MyProject" /d:"sonar.verbose=true"
- If you are using Azure DevOps, add sonar.verbose=true to the “SonarQubePrepare” or “SonarCloudPrepare” task’s extraProperties argument.
- The important logs are in the END step (i.e. SonarQubeAnalyze / SonarCloudAnalyze / “Run Code Analysis”).
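For the Azure DevOps route, the prepare task would look roughly like this. This is a sketch: only the extraProperties line matters here, and the other inputs (service connection, organization, projectKey) are placeholders, not values from this thread:

```yaml
# Sketch of enabling verbose scanner logs in an Azure DevOps pipeline.
# All inputs except extraProperties are placeholder values.
- task: SonarCloudPrepare@1
  inputs:
    SonarCloud: 'MyServiceConnection'   # assumed service connection name
    organization: 'my-org'              # placeholder
    scannerMode: 'MSBuild'
    projectKey: 'MyProject'             # placeholder
    extraProperties: |
      sonar.verbose=true
```

The verbose output then shows up in the logs of the later SonarCloudAnalyze (END) step, which is the one to attach here.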