Heap space error during analysis

We are having exactly the same trouble. The last lines in our analysis log look like the following:

09:28:37.209 DEBUG: Found 2 SINK specifications for method 'log4net.ILog.InfoFormat(System.IFormatProvider, string, params object[])' while expecting a single one.
09:28:37.229 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
09:28:37.230 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
09:28:37.230 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
09:28:37.230 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
09:28:37.230 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
09:28:37.230 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.

After some minutes we get this exception:

09:29:23.140 ERROR: Error during SonarScanner execution
java.lang.OutOfMemoryError: Java heap space
        at java.base/java.util.stream.ReduceOps$3.makeSink(ReduceOps.java:180)
        at java.base/java.util.stream.ReduceOps$3.makeSink(ReduceOps.java:177)
        at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
        at com.sonar.security.E.D.A.H.A(na:3277)
        at com.sonar.security.E.D.A.H.A(na:3114)
        at com.sonar.security.E.D.A.H.A(na:3277)
        at com.sonar.security.E.D.A.H$$Lambda$1575/0x0000000801b67650.apply(Unknown Source)
        at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:180)
        at java.base/java.util.stream.Collectors$$Lambda$175/0x000000080116eda8.accept(Unknown Source)
        at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
        at java.base/java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1858)
        at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
        at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
        at com.sonar.security.E.D.A.H.A(na:3277)
        at com.sonar.security.E.D.A.H.A(na:3114)
        at com.sonar.security.E.D.A.H.A(na:3277)
        at com.sonar.security.E.D.A.H$$Lambda$1575/0x0000000801b67650.apply(Unknown Source)
        at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:180)
        at java.base/java.util.stream.Collectors$$Lambda$175/0x000000080116eda8.accept(Unknown Source)
        at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
        at java.base/java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1858)
        at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
        at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
        at com.sonar.security.E.D.A.H.A(na:3277)
Process returned exit code 1
The SonarScanner did not complete successfully
09:29:23.424  Post-processing failed. Exit code: 1

We are using the following environment vars on our server:

JAVA_HOME="c:\tools\openjdk\jdk-19.0.1"
SONAR_SCANNER_OPTS=-Xmx4g

If I change SONAR_SCANNER_OPTS to -Xmx8g, it just takes longer until the exception occurs. Changing this environment variable does not solve the problem.
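As a sketch, this is how the heap limit is adjusted (POSIX shell shown for illustration; on our Windows VM the variable is set in the system environment). The -XX:+HeapDumpOnOutOfMemoryError flag is a standard JVM option added here purely as an illustrative diagnostic aid, not part of our original setup:

```shell
# Illustrative sketch: SONAR_SCANNER_OPTS is passed to the scanner's JVM,
# so multiple JVM flags can be combined in one value.
# The heap-dump flag is optional; it only helps with diagnosing the OOM.
export SONAR_SCANNER_OPTS="-Xmx8g -XX:+HeapDumpOnOutOfMemoryError"
echo "$SONAR_SCANNER_OPTS"
```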

To analyze our sources, we run the following steps on a VM:

"C:\tools\sonar-scanner-msbuild\5.10.0.59947-net46\SonarScanner.MSBuild.exe" begin .....
"C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\MSBuild\Current\Bin\MSBuild.exe" ...
"C:\tools\sonar-scanner-msbuild\5.10.0.59947-net46\SonarScanner.MSBuild.exe" end ...

Hi @MichaelK,

I’ve moved your post to a new thread. Can you confirm that you’re on SonarQube? And if so, what version?

And can you provide the full analysis log?

 
Thx,
Ann


Hi Ann,

I’m sorry for the delay. In the meantime I found out what was causing the heap memory error, but the solution is not ideal.

For our project in SonarQube I created a new profile and started excluding one rule after another, in the hope of getting SonarQube to run.

After hours of investigation I came to the result that I have to exclude the following rules:

SECURITY_HOTSPOT S6350 MAJOR roslyn.sonaranalyzer.security.cs
VULNERABILITY S2083 BLOCKER roslyn.sonaranalyzer.security.cs
VULNERABILITY S2091 BLOCKER roslyn.sonaranalyzer.security.cs
VULNERABILITY S5135 BLOCKER roslyn.sonaranalyzer.security.cs
VULNERABILITY S5145 MINOR roslyn.sonaranalyzer.security.cs
VULNERABILITY S5883 MINOR roslyn.sonaranalyzer.security.cs
VULNERABILITY S6096 BLOCKER roslyn.sonaranalyzer.security.cs

It seems that these rules are causing the heap memory error.

At the beginning of this year we updated our SonarQube server to “Developer Edition Version 9.8 (build 63668)”.

I’m sorry I can’t provide you a log file. After excluding each rule from the profile, I started our build and checked whether the heap error still occurred, and so on. But I didn’t save the logs.

For me it’s clear that this version of SonarQube or the rules above are faulty and have to be fixed.

Today we will discuss how to proceed. Excluding rules just to get SonarQube to run certainly can’t be the solution.

Best regards
Michael


Hi Michael,

Thanks for reporting your findings. I’ll make sure the developers see this.

 
Ann

Hi Michael,

thank you for your feedback!

For your information, all of the rules you identified are security-related and raised by the same analyzer. Therefore, there may be a problem with the security analyzer.

I would be happy to help you investigate the problem further. Could you please help me help you by providing the following information? :slight_smile:

  • You say that you are using SQ 9.8. Which version did you use before, and can you confirm that the problem did not occur in that version?
  • What is the highest memory limit setting you have tried with SONAR_SCANNER_OPTS? Was it -Xmx8g?
  • How many LOC does your project have?
  • Could you please run another scan and provide me with the full logs? :pray:

Thank you!

Hi Malte,

Before the SQ update we were using V9.7.1.62043 without any trouble, and we were using the SQ “default profile” in our project. Since we updated SQ we have run into trouble: the log always says there is not enough heap space.

Our build server was configured with only 4 GB RAM, so I opened an IT ticket to request 8 GB RAM for our system. The result was the same; the only difference was that it took some seconds or minutes longer until the heap error message occurred.

Then I decided to use my developer PC (Intel i9, 16 GB RAM) for test builds and played with the SONAR_SCANNER_OPTS environment variable. The maximum heap I reserved was 12 GB: I configured SONAR_SCANNER_OPTS first with -Xmx4g, then -Xmx8g, then -Xmx12g. The problem was still the same.

Sorry for my question, but what do you mean by LOC?

Maybe the following info is also important. Before the SQ update we were using:

Sonar-Scanner version: 5.3.1.36242-net46
Java JDK V17

Since we ran into trouble we have also performed the following updates:
Sonar-Scanner version: 5.10.0.59947-net46
Java JDK V19.0.1

Best regards

Michael

HeapError_Log.zip (124 KB)

Thank you, Michael, that is helpful.

I have a suspicion, but before I get into it, I need a little bit more information.

LOC is the number of “Lines of Code”. If you go to your SonarQube instance and to the overview page of your project, there is a “Project Information” link on the upper right. It tells you how many lines of code your project has (according to SQ’s algorithm).

Also, if you can share this, could you let me know the dependencies (i.e., NuGet packages) used by the project where this problem occurs?

Hi Malte,

In section ‘Project information’ I can see that our project has 253k lines of code.

Regarding dependencies we have a mixture between Nuget-Packages and some other binaries:

Nuget-Packages:

Other dependencies:

Best regards

Michael


@Malte Do you have any update on your suspicion? We have recently been seeing a very similar problem with a specific C# project.

Hey folks,

sorry for the late answer. I think I know what the problem is.

In SQ 9.8, we introduced improved support for some NuGet packages. That is, in the internal SAST engine (i.e., the sonar-security analyzer), we introduced a precise model of the architecture, API, and behavior of some popular libraries available in NuGet.

This means that the SAST engine is much better at finding vulnerabilities in C# code, as it can perform a much deeper analysis of the user code: It can better understand and follow the flow of data through the code as the code interacts with libraries.

Among the NuGet packages for which we added improved support was System.Net.Http, which is one of the libraries also used by @MichaelK .

Unfortunately, a deeper and more precise analysis also implies an increased memory footprint, as the engine now deals with more information than before. For a project with 253k lines of code (that’s a quarter million!), I think that 4 GB of RAM is just not enough. Modern PCs easily have 16 GB, if not 32 GB of RAM, and for jobs in the cloud, it can be significantly more than that.

I believe that, if you give the job a more sizeable amount of memory (say, 32 GB), it should be fine. I understand this is a considerable increase compared to before. In exchange, you get a significantly improved analysis with a much higher chance of detecting actual security vulnerabilities in your code! :slight_smile:

I hope this helps. If you run the job with 32 GB of RAM and it is still failing, please let me know!


FYI, we have started hitting this problem with a project of only 72k lines, but it does not happen on another project with 520k lines, or indeed one with 1.6 million lines of code.
Since it does not trigger on these significantly larger projects, which use the same technologies and libraries, it does not seem like this is an expected outcome of your improvements to the engine.

It seems more like a memory leak triggered when a couple of rules fire, IMO, which is why it isn’t happening everywhere.

If this were genuinely expected, the 8x increase in memory footprint you are suggesting in this case seems like a big ask - certainly something that should have been observed in testing and widely advertised in the release notes as something upgraders need to be aware of.


Hey @tbutler , you do make some fair points. :slight_smile:

Maybe I did not express myself well in my last post. I just wanted to clarify that, in general, with a project of over 250k LOC, 4 GB of RAM is often not enough, and if it was in the past, that could mean that the analysis was quite superficial due to a lack of the analyzer’s understanding of the codebase.

We do not, in general, expect an 8x increase in memory footprint, nor did we see such an increase in our testing.

I agree with your reasoning: the fact that you are seeing this with another, 72k LOC project does indeed seem to indicate that there might be a memory leak somewhere (no confirmation yet; we have to investigate). We would be happy to take a closer look to help you and, of course, improve our analyzer in general. :slight_smile:

One of my colleagues has agreed to take a closer look into this and will reach out to you soon. We will need some additional information to help us investigate the issue.


Hi @Malte
I wasn’t suggesting that 4 GB is enough memory, more that an 8x increase does not seem like a great solution :slight_smile:
We tried 32 GB today and, as expected, it didn’t help anyway.
No need to add another colleague to the mix - we’ve already raised the issue with support (SUPPORT-36764), and they pointed us to your “solution” in this thread…

I posted here as an additional data point for you and to help the other interested parties in this thread.

We have reported back to support that 32 GB did not work, so hopefully it will be picked up further there.

It is a fine balance between paid-for (and hopefully expedited) support that only the customer gets to know about, and community-based support which can potentially benefit all :}


Hello Tony,

I’m experiencing the same issue as you. Could you please share the steps you’ve taken to resolve it?

Thank you.

Hi - we had to disable some rules.
We have been assured the issue is fixed in 10.0, but unfortunately we haven’t had the opportunity to upgrade to that version yet.
