I have been running the analysis of our projects every night for several years. I recently upgraded from the November release to the December release. Since then, our largest project fails to be analysed: it aborts with an OutOfMemoryError.
[INFO] 99% analyzed
[INFO] 99% analyzed
[INFO] 99% analyzed
[INFO] 99% analyzed
[ERROR] [stderr] Exception in thread "Report about progress of Java AST analyzer" java.lang.OutOfMemoryError: Java heap space
[ERROR] [stderr] at org.sonar.java.ProgressMonitor.run(ProgressMonitor.java:77)
[ERROR] [stderr] at java.base/java.lang.Thread.runWith(Unknown Source)
[ERROR] [stderr] at java.base/java.lang.Thread.run(Unknown Source)
[INFO] Slowest analyzed files (batch mode enabled):
jet.phoenix.base/src/main/java/jet/phoenix/ui/task/invoice/InputInvoiceNut3.java (28699ms, 309828B)
[INFO] Did not optimize analysis for any files, performed a full analysis for all 8082 files.
[ERROR] Error during SonarScanner Engine execution
java.lang.OutOfMemoryError: Java heap space
at org.sonar.java.model.location.InternalPosition.atOffset(InternalPosition.java:40)
at org.sonar.java.model.InternalSyntaxToken.<init>(InternalSyntaxToken.java:47)
at org.sonar.java.model.JParser.createSyntaxToken(JParser.java:495)
at org.sonar.java.model.JParser.firstTokenIn(JParser.java:454)
I have tried increasing the memory using the arguments in the configuration file sonar.properties.
That tunes the memory settings on the server side. You need to tune the settings on the analyzer side. You didn't mention which scanner you use, and the procedure varies slightly. Here's help for SonarScanner for Maven and for SonarScanner for Gradle.
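For SonarScanner for Maven, the scanner runs inside JVMs launched from the shell that invokes Maven, so the memory variables must be set in that same shell. A minimal sketch (the 4G value is illustrative, not a recommendation):

```shell
# MAVEN_OPTS sizes the Maven JVM that hosts the scanner plugin;
# SONAR_SCANNER_JAVA_OPTS targets the scanner engine JVM on recent
# scanner versions. Setting both covers either case.
export MAVEN_OPTS="-Xmx4G"
export SONAR_SCANNER_JAVA_OPTS="-Xmx4G"

# The scan must run in the same shell so the exports are inherited:
# mvn sonar:sonar
echo "MAVEN_OPTS=$MAVEN_OPTS"   # prints: MAVEN_OPTS=-Xmx4G
```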
I am using the SonarScanner for Maven piloted from Jenkins.
It is unclear to me where the export of the environment variable should be done: export SONAR_SCANNER_JAVA_OPTS="-Xmx512m"
I have tried adding that to a shell script executed before the build, but this has had no effect. Presumably it is not running in the same shell process.
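That reasoning can be checked directly: each Jenkins freestyle "Execute shell" build step runs in its own child process, so an export made there never reaches a later build step. A tiny sketch of the effect:

```shell
# The child shell sets the variable and then exits;
# the parent (the next build step) never sees it.
sh -c 'export SONAR_SCANNER_JAVA_OPTS="-Xmx512m"'
echo "parent sees: ${SONAR_SCANNER_JAVA_OPTS:-unset}"   # prints: parent sees: unset
```

In a freestyle job the variable instead has to be injected into the job's environment (for example via an environment-injection plugin) or passed as a JVM option on the Maven build step itself.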
I have also tried adding this to the Maven command line that is to be executed: -Dsonar.scanner.memory=8192 -Dsonar.java.opt="-Xmx4G -Xms2G -XX:+HeapDumpOnOutOfMemoryError"
This is an old freestyle project that has evolved over the years, so I do not think I can show a pipeline. I can share a screenshot of the "Invoke top-level Maven targets" configuration if that helps?
I now see this in the logs: [INFO] MAVEN_OPTS= -Xmx4G -Xms1G. So the argument seems to be used at runtime.
Unfortunately I still get an out-of-memory error at exactly the same point:
After the "[INFO] 91% analyzed" log, things slow down enormously and eventually I get the out-of-memory error after a few 99% logs.
When I view the system processes, there are many processes owned by the jenkins user that are running close to the 4 GB limit, but even if I increase the limit to 8 GB they do not use more. There are lots of processes owned by the sonarqube user, but they hover around 800 MB, 500 MB, 130 MB, so nowhere close to the 4 GB limit. All these processes have different limits; I have no idea which one is going over its limit, or how to change the limit for that process.
None of the memory settings I have been changing have had any effect at all on the symptoms.
This is similar to something I faced, so it's worth a try…
The dataflow bug detection rules consume too much memory, most probably a bug. To rule this out, you can deactivate these rules in your quality profile and retry the analysis. You can identify them by searching for "dataflow bug detection" in the "Repository" section of the rules. The rule prefix is 'javabugs'.
Thanks for your reply. Unfortunately I do not have any "dataflow bug detection" rules. I don't think the javabugs rules are available in the Community Edition.
I am still trying to figure out which process is running out of memory so I can verify that the memory arguments are being passed on to that process.
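One way to narrow this down is to inspect the command line of every running JVM and see which heap cap each one actually carries. Below is a hypothetical helper for that; the commented-out loop assumes a Linux build machine, and `jcmd <pid> VM.flags` would give the authoritative value for any given PID:

```shell
# extract_xmx: print the -Xmx flag found in a JVM command line, if any.
extract_xmx() {
  printf '%s\n' "$1" | grep -oE -- '-Xmx[0-9]+[kKmMgG]?' | head -n1
}

# On the build machine this loop would report each Java process's heap
# cap; a process with no -Xmx runs with the JVM's default heap:
# for pid in $(pgrep java); do
#   echo "PID $pid: $(extract_xmx "$(tr '\0' ' ' < /proc/$pid/cmdline)")"
# done

extract_xmx "java -Xms2G -Xmx4G -jar sonar-scanner-engine.jar"   # prints: -Xmx4G
```

Given the earlier point about server side versus analyzer side, the JVM that dies is most likely one of the jenkins-owned processes running the scan, not one of the sonarqube-owned server processes.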
I'm not sure why I didn't ask this before - new year, new ideas - can you enable debug logging (add a -X Maven option) and bump the memory allocation some more?
Either that will give us tons of logging and work, or it will give us tons of logging and a better idea of the failure point.
I added the -X argument; I am still not sure which process I need to give more memory to.
This is what is happening just before the error:
[DEBUG] 'jet.phoenix.base/src/main/java/jet/phoenix/ui/task/invoice/InputInvoiceNut3.java' generated metadata with charset 'UTF-8'
[INFO] 99% analyzed
[INFO] 99% analyzed
[INFO] 99% analyzed
[INFO] 99% analyzed
[INFO] Did not optimize analysis for any files, performed a full analysis for all 8084 files.
[DEBUG] Cleanup org.eclipse.jgit.util.FS$FileStoreAttributes$$Lambda/0x00007f325035c658@382c90c2 during JVM shutdown
[ERROR] Error during SonarScanner Engine execution
java.lang.OutOfMemoryError: Java heap space
at org.eclipse.jdt.internal.compiler.ast.ReferenceExpression.copy(ReferenceExpression.java:125)
at org.eclipse.jdt.internal.compiler.ast.ReferenceExpression.cachedResolvedCopy(ReferenceExpression.java:974)
at org.eclipse.jdt.internal.compiler.ast.ReferenceExpression.isCompatibleWith(ReferenceExpression.java:1257)
at org.eclipse.jdt.internal.compiler.lookup.PolyTypeBinding.isCompatibleWith(PolyTypeBinding.java:42)
at org.eclipse.jdt.internal.compiler.lookup.Scope.parameterCompatibilityLevel(Scope.java:5060)
at org.eclipse.jdt.internal.compiler.lookup.Scope.parameterCompatibilityLevel(Scope.java:5019)
at org.eclipse.jdt.internal.compiler.lookup.Scope.computeCompatibleMethod(Scope.java:864)
at org.eclipse.jdt.internal.compiler.lookup.Scope.computeCompatibleMethod(Scope.java:804)
at org.eclipse.jdt.internal.compiler.lookup.Scope.getConstructor0(Scope.java:2473)
at org.eclipse.jdt.internal.compiler.lookup.Scope.getConstructor(Scope.java:2436)
at org.eclipse.jdt.internal.compiler.ast.Statement.findConstructorBinding(Statement.java:555)
at org.eclipse.jdt.internal.compiler.ast.AllocationExpression.resolveType(AllocationExpression.java:504)
at org.eclipse.jdt.internal.compiler.ast.LocalDeclaration.resolve(LocalDeclaration.java:402)
at org.eclipse.jdt.internal.compiler.ast.LocalDeclaration.resolve(LocalDeclaration.java:258)
at org.eclipse.jdt.internal.compiler.ast.Statement.resolveWithBindings(Statement.java:503)
at org.eclipse.jdt.internal.compiler.ast.ASTNode.resolveStatements(ASTNode.java:692)
at org.eclipse.jdt.internal.compiler.ast.Block.resolve(Block.java:143)
at org.eclipse.jdt.internal.compiler.ast.Statement.resolveWithBindings(Statement.java:503)
at org.eclipse.jdt.internal.compiler.ast.ForStatement.resolve(ForStatement.java:445)
at org.eclipse.jdt.internal.compiler.ast.Statement.resolveWithBindings(Statement.java:503)
at org.eclipse.jdt.internal.compiler.ast.ASTNode.resolveStatements(ASTNode.java:692)
at org.eclipse.jdt.internal.compiler.ast.Block.resolveUsing(Block.java:154)
at org.eclipse.jdt.internal.compiler.ast.TryStatement.resolve(TryStatement.java:1126)
at org.eclipse.jdt.internal.compiler.ast.Statement.resolveWithBindings(Statement.java:503)
at org.eclipse.jdt.internal.compiler.ast.ASTNode.resolveStatements(ASTNode.java:692)
at org.eclipse.jdt.internal.compiler.ast.AbstractMethodDeclaration.resolveStatements(AbstractMethodDeclaration.java:734)
at org.eclipse.jdt.internal.compiler.ast.MethodDeclaration.resolveStatements(MethodDeclaration.java:386)
at org.eclipse.jdt.internal.compiler.ast.AbstractMethodDeclaration.resolve(AbstractMethodDeclaration.java:633)
at org.eclipse.jdt.internal.compiler.ast.TypeDeclaration.resolve(TypeDeclaration.java:1446)
at org.eclipse.jdt.internal.compiler.ast.TypeDeclaration.resolve(TypeDeclaration.java:1575)
at org.eclipse.jdt.internal.compiler.ast.CompilationUnitDeclaration.resolve(CompilationUnitDeclaration.java:661)
at org.eclipse.jdt.internal.compiler.Compiler.process(Compiler.java:811)
It is interesting that Sonar says it has finished the analysis of all the files, and then hits an OOM error after that. 8084 is about right for the number of files analysed, so I believe it is quite possible that it has actually reached the end. What is happening after the analysis to cause the problem? Do you need the full log file (3 MB)?
This is very interesting. It looks like it fails during cleanup. It's quite possible the full log will be needed, but I'm going to flag this for the team and let them request it if they need it.
I've been looking into this, and I'd like to dig deeper into why the analysis is failing at the final stage. To help me move forward with the investigation, would you mind sharing a few more details about your setup?
It would be very helpful if you could provide:
The full analysis log: Even if the file is large, the complete output could be useful for pinpointing the issue.
The command and environment: Knowing the exact command used to run the analysis, along with any environment variables (like MAVEN_OPTS or SONAR_SCANNER_OPTS), would really help me understand the context.
All these changes to the memory configuration have had no effect. It always runs out of memory after 99% of the analysis, though I do usually get the message:
[INFO] Did not optimize analysis for any files, performed a full analysis for all 8084 files.
which seems to indicate the analysis has finished.
The progress does slow down massively towards the end, which suggests that it is struggling with memory and the garbage collector is working hard. But I have not been able to diagnose that any further, as I do not know where to look. I can see many Java processes running on the machine; none of them reach the 4 GB limit I set in the configuration.
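To see that GC pressure directly, the usual approach is to add GC logging and an on-OOM heap dump to the suspect JVM's options; the dump then shows exactly which process overflowed and what filled its heap. A sketch with illustrative values (the dump path is an assumption):

```shell
# -Xlog:gc* (JDK 9+ unified logging) shows whether the collector is
# thrashing near the end of the analysis; the heap-dump flags capture
# the heap of whichever JVM actually overflows, for offline inspection.
export MAVEN_OPTS="-Xmx4G -Xlog:gc* -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/sonar-oom.hprof"
# mvn -X sonar:sonar
echo "$MAVEN_OPTS"
```

The resulting .hprof file can then be opened in a heap analyzer such as Eclipse MAT to see which objects retained the memory.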
I have been running the analysis on this code base for many years. The number of files analysed has shrunk from about 12000 to 8000. The number of lines of code has shrunk from 1.5 million to just over a million.
Not sure what else I can provide. If helpful I can be available over Teams (or other meeting apps).