Java analysis time increased from ~3 min to ~11 min with SonarQube 8.9/9.1

We have recently upgraded from SonarQube 8.4.2 to SonarQube 8.9.1. On 8.4, scans during CI builds take ~3 min. With 8.9, they take ~11 min. We run a couple thousand builds per week, so this is a significant increase.

The main issue I have found is slow per-file analysis. At times the increase is even more dramatic: some files take 8-10 or more seconds each, which causes builds to take hours. The numbers below come from separate Sonar infrastructure with no other builds running against it. This build used sonar-maven-plugin and SonarQube 9.1.

17:15:42  [INFO] 97/97 source files have been analyzed
17:15:42  [INFO] Slowest analyzed files:
17:15:42      falcon-planner-service/src/test/java/com/cvent/falcon/planner/services/eventAppContent/ (3614ms, 19102B)
17:15:42      falcon-planner-service/src/test/java/com/cvent/falcon/planner/services/ (3489ms, 35948B)
17:15:42      falcon-planner-service/src/test/java/com/cvent/falcon/planner/services/eventAppContent/ (3466ms, 100169B)
17:15:42      falcon-planner-service/src/test/java/com/cvent/falcon/planner/services/eventAppContent/ (3332ms, 16378B)
17:15:42      falcon-planner-service/src/test/java/com/cvent/falcon/planner/dao/ (3074ms, 3930B)
17:15:42      falcon-planner-service/src/test/java/com/cvent/falcon/planner/services/ (2878ms, 2430B)
17:15:42      falcon-planner-service/src/test/java/com/cvent/falcon/planner/services/ (2504ms, 13938B)
17:15:42      falcon-planner-service/src/test/java/com/cvent/falcon/planner/services/ (2483ms, 10919B)
17:15:42      falcon-planner-service/src/test/java/com/cvent/falcon/planner/services/ (2478ms, 10150B)
17:15:42      falcon-planner-service/src/test/java/com/cvent/falcon/planner/services/eventAppContent/ (2303ms, 34011B)
17:15:42  [WARNING] Unresolved imports/types have been detected during analysis. Enable DEBUG mode to see them.

I also see the line below in the logs, but I'm not sure whether this is a separate step or whether it includes the scan time of the slow files.

17:17:03  [INFO] Sensor JavaSensor [java] (done) | time=80455ms

This only happens for Java projects in Jenkins. Running the same command locally finishes in 1-2 mins. We also have JS projects with no issue.
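As a side note, the unresolved imports/types mentioned in the WARNING above can be listed by rerunning with scanner debug logging. A sketch (`sonar.verbose` is a standard analysis parameter; `-X` is Maven's own debug switch; the command is printed here rather than executed):

```shell
# Print the debug-enabled scan command; run it in the CI job to see
# which imports/types the Java analyzer failed to resolve.
DEBUG_CMD="mvn sonar:sonar -Dsonar.verbose=true"   # alternative: mvn -X sonar:sonar
printf '%s\n' "$DEBUG_CMD" > debug-cmd.txt
cat debug-cmd.txt
```

Unresolved types are worth checking here: when the analyzer cannot resolve symbols it can spend extra time per file.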

We have tried:

  • increasing the build machine CPU/memory
  • increasing the Sonar host CPU/memory
  • increasing/decreasing the background worker threads
  • increasing IOPS of the build machines
  • not using Docker volumes with the Sonar host
  • using the default Sonar Way rules

We have also tried increasing the Java memory options in the Docker container to the following:

  • sonarComputeJavaOpts: '-Xmx16G -Xms8G'
  • sonarWebJavaOpts: '-Xmx4G -Xms2G'
  • sonarSearchJavaOpts: '-Xmx4G -Xms4G'
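For reference, the sonar*JavaOpts names above look like wrapper/chart-specific settings; in the official SonarQube Docker image the equivalent knobs are environment variables. A deployment sketch under that assumption (image tag illustrative):

```shell
# SONAR_CE_JAVAOPTS / SONAR_WEB_JAVAOPTS / SONAR_SEARCH_JAVAOPTS are the
# env vars the official image reads for the Compute Engine, web server,
# and Elasticsearch JVMs respectively.
docker run -d --name sonarqube \
  -e SONAR_CE_JAVAOPTS="-Xmx16G -Xms8G" \
  -e SONAR_WEB_JAVAOPTS="-Xmx4G -Xms2G" \
  -e SONAR_SEARCH_JAVAOPTS="-Xmx4G -Xms4G" \
  -p 9000:9000 sonarqube:8.9-community
```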

Any suggestions on decreasing these scan times?



Can you share the number of LOC in your project so we can check whether the performance you are observing matches our baselines?


I missed that part when I first read your post. Can you confirm that performance is good (1-2 min) when you run the scan locally, but not when it runs on a server running Jenkins (11 min)?

In that case, you are best placed to determine what's different between these two machines.
Did you check the installed JDK? Are you running the same OS? The memory allocated to Maven itself (MAVEN_OPTS)?
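One quick way to compare the two machines (a sketch; the report file name and tool guards are just illustrative):

```shell
# Capture the toolchain on each machine into a file, then diff the two
# files to spot what differs between the local run and the Jenkins agent.
{
  command -v java >/dev/null 2>&1 && java -version 2>&1 || echo "java: not found"
  command -v mvn  >/dev/null 2>&1 && mvn -version 2>&1  || echo "mvn: not found"
  echo "MAVEN_OPTS=${MAVEN_OPTS:-<unset>}"
  uname -a
} > env-report.txt
cat env-report.txt
```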

I can also already say that the following parameters don't impact the performance of Java file scanning:

  • sonarComputeJavaOpts, sonarWebJavaOpts, sonarSearchJavaOpts
  • the sonar host cpu/memory
  • background worker threads


Thanks for the response.

From the sonar ui this project has:
15760 Lines of Code
5169 Lines to Cover

I double-checked local vs Jenkins. I get roughly the same time when running the same command: 54 s in Jenkins and 1:30 locally. But the times above include an additional integration test file.

But on the same infrastructure in Jenkins with version 8.4.2, scans take 3 min, and with 8.9 they take 11 min. I did not vacuum the DB after upgrading, but I can try that.

Also, I was talking more to the build team, and they are seeing the same projects' scan times increase the longer scanning has been enabled. Initially we see the times above; then after 6-8 hours or so, scan times increase to over an hour.

We recently switched to using EBS Docker volumes for:

  • /opt/sonarqube/data
  • /opt/sonarqube/logs
  • /opt/sonarqube/extensions

But I don't see a difference in scan times if I remove these mounted volumes and just run the Docker container on ECS. I'm going to try increasing the IOPS here.

If you have any suggestions, please let me know.


Hello @robertcvent,

I must admit I'm a bit lost with all the information you shared. In your initial post you said the scan took 11 min; elsewhere you said 54 s. Which is correct?

I propose to reset the discussion and focus only on the Analysis Duration Time, so on the time spent by the scanner running in your case in Jenkins.

It would be great to share:

  • the full logs
  • the Analysis Duration Time
  • the full command used to trigger the scan

Once again, the slowness looks to be on scan side according to the snippet of logs you shared, so it’s useless to look at the configuration of the SonarQube server.

My suggestions are the following:

  • run the scan using JDK 11: it has been observed to help
  • deactivate the rules tagged "tests" to confirm the problem comes from a rule that applies only to test files (or use sonar.test.exclusions=**/* to remove all test files from the analysis scope)
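The second suggestion can be sketched as a one-off experiment (sonar.test.exclusions is a standard analysis parameter; the exact invocation depends on your pipeline, so the command is printed here rather than executed):

```shell
# Print the scan command with all test files excluded from the analysis
# scope; if it runs fast in the Jenkins job, a test-only rule is the culprit.
SCAN_CMD='mvn sonar:sonar -Dsonar.test.exclusions=**/*'
printf '%s\n' "$SCAN_CMD" > scan-cmd.txt
cat scan-cmd.txt
```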


Note: for a project of 15760 LOC, the expectation is a scan time of less than 5 min.

Once again, the slowness looks to be on scan side according to the snippet of logs you shared, so it’s useless to look at the configuration of the SonarQube server.

Thanks. That was my initial thought also. We increased the cpu/memory of the build machines but that did not have any effect so I’ve been focusing on the host lately.

The 54s is from a standalone Jenkins job I set up specifically for testing this issue. The same project in our build pipeline takes 11 mins for the mvn sonar:sonar command.

I tried the following yesterday:

  • increasing iops of the docker volume
  • vacuuming the db after upgrade
  • removing integration test report files
  • different infrastructure setup without using ECS for the sonar host
  • different MAVEN_OPTS in the sonar:sonar command above

The time was 10:30-11 min in every case.

I will try your suggested fixes.

Thanks again. Logs below.

54s standalone job on Sonar 9.1
54s-standalone.txt (37.7 KB)

~11 mins in Sonar 8.9
Sonar-8-9.txt (40.5 KB)

~3 mins in Sonar 8.4
Sonar-8-4.txt (27.5 KB)

~12 mins in Sonar 9-1
Sonar-9-1-Java-11.txt (47.5 KB)

In our build pipeline, there is a Sonar stage. If we enable this stage, these scans take 11 min. After about 8 hours or so, some projects' scans go up to over an hour or more. This is our primary issue. We changed our Sonar infrastructure to use ECS about a month before seeing this issue.


Given the information you provided, my conclusion is that there is no performance problem to investigate on the scanner side, nor any problem on the SonarQube server side. It takes only 54 s to scan your 15760 Java LOC; we can't go faster.
Good performance is achieved only when you scan using a local Jenkins instance, which I assume uses a local filesystem.

To me, this points to an I/O configuration problem on your Jenkins instance relying on ECS. You should contact AWS support.