SonarQube Community Edition Version 7.9.2 (build 30863)
SQ is running in a Docker container on a VM on CentOS 8.0.1905 with 2 vCPUs & 8GB RAM
SonarQube database is PostgreSQL version 10.6
SonarLint for Eclipse plugin version 3.6.1012
Sonar scanner version 4.2.0.1873-linux
Hello, I started testing with SonarQube a couple of months ago and successfully analyzed projects with the scanner and connected SonarLint from Eclipse to the server for on-the-fly analysis. This was using the embedded database.
Now I am ready to deploy a production instance. The new instance uses a PostgreSQL database as well as an Apache reverse proxy for HTTPS. The server itself seems to be running smoothly and the scanners have had no issues submitting analyses to the server. However, the SonarLint plugin fails to bind the Eclipse project to the remote SQ project. The issue revolves around a failure of the /batch/issues?key=(project key) endpoint. Any time I try to bind the project in SonarLint or try to hit the endpoint manually, the whole application seems to lock up and I can’t even access the homepage anymore.
Several minutes after attempting to connect SonarLint to the SQ project, this error shows up in the SonarQube web.log:
ERROR web[o.s.s.p.w.RootFilter] Processing of request /batch/issues?key=(project key) failed
java.lang.OutOfMemoryError: Java heap space
This seems to be a memory error, so I bumped the VM up from 4GB of RAM to 8GB and updated sonar.properties a couple of times to allocate more memory. Here are the final settings I used for the relevant properties:
sonar.web.javaOpts=-Xmx2048m -Xms2048m -XX:+HeapDumpOnOutOfMemoryError
sonar.ce.javaOpts=-Xmx1024m -Xms1024m -XX:+HeapDumpOnOutOfMemoryError
sonar.search.javaOpts=-Xms1024m -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError
Do I need to give the VM/SQ more RAM, or is there something else going on?
Can you tell us the number of issues you have for this particular project? Since this is an OOM on a web service, I think the property that matters is sonar.web.javaOpts. Maybe you could try increasing it even more (-Xmx4g, for example).
This is a large project - 780k lines of code, 2.5k bugs, 1 vulnerability, & 1.5M code smells. To my understanding, SonarLint only needs access to some metadata about the project and the rules being applied - is it actually transferring data about all the issues to SonarLint?
I will try bumping up the web RAM to 4GB as suggested and get back to you. I believe I read somewhere that I should set -Xms and -Xmx (min/max) to the same number - is this correct? I will set both to 4GB.
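In other words, I'm planning something like this in sonar.properties (keeping the heap-dump flag from before):
sonar.web.javaOpts=-Xms4g -Xmx4g -XX:+HeapDumpOnOutOfMemoryError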
Yes, this is used to map local issues with remote ones, in order to apply issue suppression (won’t fix/false positive) as well as manual changes of issue severity.
-Xms is not very important here; it only sets the initial heap size.
If the payload returned by the web service is very big, it might now cause an OOM on the Eclipse side. Please also try increasing the memory dedicated to Eclipse (in eclipse.ini) to see if that helps you make it work while we investigate a bit more.
I increased the memory allocated in eclipse.ini to 512m (min) and 2048m (max).
I also added these JVM options per the article you linked (the combined eclipse.ini snippet is shown after the list):
-XX:+UseParallelGC
-XX:PermSize=256M
-XX:MaxPermSize=512M
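For reference, the memory-related part of my eclipse.ini now looks roughly like this (JVM options go after the -vmargs line):
-vmargs
-Xms512m
-Xmx2048m
-XX:+UseParallelGC
-XX:PermSize=256M
-XX:MaxPermSize=512M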
The connection is now successful, and I am not finding any errors on the server or client side. I will continue to monitor, but it looks like both sides are happy now.