We are having exactly the same trouble. The last lines in our analysis log look like the following:
09:28:37.209 DEBUG: Found 2 SINK specifications for method 'log4net.ILog.InfoFormat(System.IFormatProvider, string, params object[])' while expecting a single one.
09:28:37.229 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
09:28:37.230 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
09:28:37.230 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
09:28:37.230 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
09:28:37.230 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
09:28:37.230 DEBUG: Did not expect to visit symbol class com.sonar.security.E.D.B.M.
After a few minutes, we get this exception:
09:29:23.140 ERROR: Error during SonarScanner execution
java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.stream.ReduceOps$3.makeSink(ReduceOps.java:180)
at java.base/java.util.stream.ReduceOps$3.makeSink(ReduceOps.java:177)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
at com.sonar.security.E.D.A.H.A(na:3277)
at com.sonar.security.E.D.A.H.A(na:3114)
at com.sonar.security.E.D.A.H.A(na:3277)
at com.sonar.security.E.D.A.H$$Lambda$1575/0x0000000801b67650.apply(Unknown Source)
at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:180)
at java.base/java.util.stream.Collectors$$Lambda$175/0x000000080116eda8.accept(Unknown Source)
at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.base/java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1858)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
at com.sonar.security.E.D.A.H.A(na:3277)
at com.sonar.security.E.D.A.H.A(na:3114)
at com.sonar.security.E.D.A.H.A(na:3277)
at com.sonar.security.E.D.A.H$$Lambda$1575/0x0000000801b67650.apply(Unknown Source)
at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:180)
at java.base/java.util.stream.Collectors$$Lambda$175/0x000000080116eda8.accept(Unknown Source)
at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.base/java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1858)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
at com.sonar.security.E.D.A.H.A(na:3277)
Process returned exit code 1
The SonarScanner did not complete successfully
09:29:23.424 Post-processing failed. Exit code: 1
We are using the following environment variables on our server:
I'm sorry for the delay. In the meantime I found out what was causing the heap memory error, but the solution is not ideal.
For our project in SonarQube I created a new quality profile and started excluding one rule after another, in the hope of getting the analysis to run.
After hours of investigation I came to the conclusion that I have to exclude the following rules:
| Type | Rule | Severity | Repository |
| --- | --- | --- | --- |
| SECURITY_HOTSPOT | S6350 | MAJOR | roslyn.sonaranalyzer.security.cs |
| VULNERABILITY | S2083 | BLOCKER | roslyn.sonaranalyzer.security.cs |
| VULNERABILITY | S2091 | BLOCKER | roslyn.sonaranalyzer.security.cs |
| VULNERABILITY | S5135 | BLOCKER | roslyn.sonaranalyzer.security.cs |
| VULNERABILITY | S5145 | MINOR | roslyn.sonaranalyzer.security.cs |
| VULNERABILITY | S5883 | MINOR | roslyn.sonaranalyzer.security.cs |
| VULNERABILITY | S6096 | BLOCKER | roslyn.sonaranalyzer.security.cs |
It seems that these rules are causing the heap memory error.
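In case it helps others: the same exclusions can be scripted against the Web API instead of clicking through the profile UI. A minimal sketch, assuming a SonarQube 9.x server; the host, token variable, and quality profile key are placeholders:

```
# Deactivate one of the rules (here S2083) in a custom quality profile.
# MY_PROFILE_KEY is a placeholder; look yours up via api/qualityprofiles/search.
curl -u "$SONAR_TOKEN:" -X POST \
  "https://sonarqube.example.com/api/qualityprofiles/deactivate_rule" \
  -d "key=MY_PROFILE_KEY" \
  -d "rule=roslyn.sonaranalyzer.security.cs:S2083"
```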
At the beginning of this year we updated our SonarQube server to "Developer Edition Version 9.8 (build 63668)".
I'm sorry I can't provide you with a log file. After excluding each rule from the profile I started our build and checked whether the heap error still occurred, but I didn't save the logs.
For me it's clear that either this version of SonarQube or the rules above are faulty and have to be fixed.
Today we will discuss what to do next. One thing is for sure: excluding rules just to get SonarQube to run can't be the solution.
For your information, all of the rules you identified are security-related and raised by the same analyzer. Therefore, there may be a problem with the security analyzer.
I would be happy to help you investigate the problem further. Could you please help me help you by providing the following information?
You say that you are using SQ 9.8. Which version did you use before, and can you confirm that the problem did not occur in that version?
What is the highest memory limit setting you have tried with SONAR_SCANNER_OPTS? Was it -Xmx8g?
How many LOC does your project have?
Could you please run another scan and provide me with the full logs?
Before the SQ update we were using v9.7.1.62043 without any trouble, and we were using the SQ "default profile" in our project. Since we updated SQ we have run into trouble, always with the message that there is not enough heap space.
Our build server was configured with only 4 GB of RAM, so I opened an IT ticket to request 8 GB for our system. The result was the same; the only difference was that it took a little longer until the heap error message occurred.
Then I decided to use my developer PC (Intel i9, 16 GB RAM) for test builds and played with the environment variable SONAR_SCANNER_OPTS. The maximum heap I reserved was 12 GB: I configured SONAR_SCANNER_OPTS first with -Xmx4g, then -Xmx8g, then -Xmx12g. The problem was still the same.
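For reference, a minimal sketch of how I set the variable for these test builds (the -Xmx value shown is just the largest one I tried):

```
# Linux/macOS: cap the scanner's JVM heap at 12 GB
export SONAR_SCANNER_OPTS="-Xmx12g"

# Windows (cmd.exe) equivalent:
# set SONAR_SCANNER_OPTS=-Xmx12g
```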
Sorry for my question: what do you mean by LOC?
Maybe the following info is also important. Before the SQ update we were using:
I have a suspicion, but before I get into it, I need a little bit more information.
LOC is the number of "Lines of Code". If you go to your SonarQube instance and open the overview page of your project, there is a "Project Information" link in the upper right. It tells you how many lines of code your project has (according to SQ's algorithm).
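If the UI is inconvenient, the same number can also be read from the Web API; a sketch, with the host and project key as placeholders:

```
# Fetch the "ncloc" (lines of code) measure for a project
curl -u "$SONAR_TOKEN:" \
  "https://sonarqube.example.com/api/measures/component?component=MY_PROJECT_KEY&metricKeys=ncloc"
```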
Also, if you can share this, could you let me know which dependencies (i.e., NuGet packages) are used by the project where this problem occurs?
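If you have the .NET SDK at hand, one convenient way to produce such a list is the dotnet CLI, run from the solution directory:

```
# List direct and transitive NuGet packages for each project in the solution
dotnet list package --include-transitive
```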
Sorry for the late answer. I think I know what the problem is.
In SQ 9.8, we introduced improved support for some NuGet packages. That is, in the internal SAST engine (i.e., the sonar-security analyzer), we introduced a precise model of the architecture, API, and behavior of some popular libraries available in NuGet.
This means that the SAST engine is much better at finding vulnerabilities in C# code, as it can perform a much deeper analysis of the user code: It can better understand and follow the flow of data through the code as the code interacts with libraries.
Among the NuGet packages for which we added improved support was System.Net.Http, which is one of the libraries also used by @MichaelK.
Unfortunately, a deeper and more precise analysis also implies an increased memory footprint, as the engine now deals with more information than before. For a project with 253k lines of code (that's a quarter million!), I think that 4 GB of RAM is just not enough. Modern PCs easily have 16 GB, if not 32 GB, of RAM, and for jobs in the cloud it can be significantly more than that.
I believe that, if you give the job a more sizeable amount of memory (say, 32 GB), it should be fine. I understand this is a considerable increase compared to before. In exchange, you get a significantly improved analysis with a much higher chance of detecting actual security vulnerabilities in your code!
I hope this helps. If you run the job with 32 GB of RAM and it is still failing, please let me know!
FYI, we have started hitting this problem with a project of only 72k lines, but it does not happen on another project with 520k lines, or indeed on one with 1.6 million lines of code.
Since it does not trigger on these significantly larger projects, which use the same technologies and libraries, it does not seem like an expected outcome of your improvements to the engine.
IMO it seems more like a memory leak triggered when a couple of rules fire, which is why it isn't happening everywhere.
If this were genuinely expected, the 8x increase in memory footprint you are suggesting would be a big ask - certainly something that should have been observed in testing and advertised prominently in the release notes as something upgraders need to be aware of.
Maybe I did not express myself well in my last post. I just wanted to clarify that, in general, with a project of over 250k LOC, 4 GB of RAM is often not enough, and if it was in the past, that could mean the analysis was quite superficial due to the analyzer's limited understanding of the codebase.
We do not, in general, expect an 8x increase in memory footprint, nor did we see such an increase in our testing.
I agree with your argumentation: the fact that you are now seeing this with a 72k LOC project does seem to indicate there might be a memory leak somewhere (no confirmation yet; we have to investigate). We would be happy to take a closer look to help you and, of course, to improve our analyzer in general.
One of my colleagues agreed to have a closer look into this and will reach out to you soon. We will need some additional information to help us investigate the issue.
Hi @Malte
I wasn't suggesting that 4 GB is enough memory, more that an 8x increase does not seem like a great solution.
We tried 32 GB today and, as expected, it didn't help.
No need to add another colleague to the mix: we've already raised the issue with support (SUPPORT-36764), and they pointed us to your "solution" in this thread...
I posted here as an additional data point for you, to help the other interested parties in this thread.
We have reported back to support that 32 GB did not work, so hopefully it will be picked up further there.
It is a fine balance between paid-for (and hopefully expedited) support that only the customer gets to know about, and community-based support which can potentially benefit all :}
Hi - we had to disable some rules.
We have been assured the issue is fixed in 10.0, but unfortunately we haven't had the opportunity to upgrade to that version yet.