We have been using SonarQube for a long time with great success (thanks for all the hard work). We have gone from SonarQube 7.7 using MySQL to 7.9 LTS using Postgres. The upgrade process went fine, but since upgrading, our analysis from CI has gone from a consistent ~5 minutes to ~21 minutes. We are using the Gradle plugin version 2.5, unchanged during the upgrade (I have also tried 2.7 and 2.8 with the same outcome), executed from Jenkins.
So far, I have checked all memory settings and checked load (IO, CPU and memory) on the server, but while the analysis is taking place there is little to no load on the server. I have tried looking at extra logging using the --debug flag on the Gradle process but can't see any particular problem, other than that's just how long it takes now.
I wondered if anyone knows where to start looking for where the performance problem has been introduced (client or server). I have seen that there is possibly some profiling available on the Gradle plugin side, but I didn't get any output that I could find. Processing on the SonarQube server itself seems to be unaffected and executes in the same time as it did before upgrading.
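For what it's worth, this is a sketch of how I tried to switch on the extra logging and the scanner-side profiling from the command line (`sonar.verbose` and `sonar.showProfiling` are analysis properties from the scanner docs; please verify they still apply to your scanner version):

```shell
# Run the analysis with Gradle debug logging plus verbose scanner output;
# sonar.showProfiling asks the scanner to dump timing data (written under
# the project's .sonar working directory when supported).
./gradlew sonarqube --debug -Dsonar.verbose=true -Dsonar.showProfiling=true
```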
I guess (from the tags) we're talking about Java? Is this a commercial edition or Community? I ask because additional Java rules are available in the commercial editions, and that might be a factor.
This is with SonarJava version 6.0 (build 20538) installed.
We also have a small amount of TypeScript being analysed using SonarTS version 2.1 (build 4359); it is small in comparison but worth mentioning.
Thanks for the update. SonarJava v6.0 was a pretty huge overhaul of the analyzer, moving it to a different, more accurate and powerful frontend. This comes at a performance cost.
I’ll ping the right team to see what information from you would be most useful, since you’re experiencing such a significant slowdown.
Some performance degradation is expected; it is hard to say by how much, because it depends on the complexity of the code in the project. Can you please provide an estimate of the size of the project (how many LoC, classes, …), so we can compare with some projects we are scanning in our QA?
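If it helps, a rough LoC estimate can be pulled from the shell (the project dashboard's own LoC metric is the authoritative number; the paths below are placeholders, and a tiny sample tree is created so the commands run as-is):

```shell
# Sketch: count Java files and total lines under a source tree.
# Create a throwaway sample tree so this is runnable as-is; replace
# $src with your real source root.
src=$(mktemp -d)
mkdir -p "$src/com/example"
printf 'class A {}\n' > "$src/com/example/A.java"
printf 'class B {\n}\n' > "$src/com/example/B.java"

# Number of Java files
files=$(find "$src" -name '*.java' | wc -l)
# Total lines across those files
loc=$(find "$src" -name '*.java' -print0 | xargs -0 cat | wc -l)

echo "files=$files loc=$loc"
rm -rf "$src"
```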
When you check the log, can you pinpoint the file where analysis takes a lot of time? We print the currently analyzed file every 10 seconds, so slow analysis of a particular file would manifest as the same file path being printed multiple times.
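That repeated-path symptom can also be surfaced mechanically. A rough sketch (the log line format and paths here are made up for illustration; adjust the grep pattern to match your actual scanner log, and point it at the real log file instead of the embedded sample):

```shell
# Sketch: spot slow files by counting repeats of the periodic
# "currently analyzed file" log lines. A sample log is embedded so the
# pipeline runs as-is.
log=$(mktemp)
cat > "$log" <<'EOF'
12:00:01 INFO: Analyzing src/main/java/com/example/Big.java
12:00:11 INFO: Analyzing src/main/java/com/example/Big.java
12:00:21 INFO: Analyzing src/main/java/com/example/Big.java
12:00:31 INFO: Analyzing src/main/java/com/example/Small.java
EOF

# Count occurrences of each file path; the top entries are the slow files.
grep -o 'src/[^ ]*\.java' "$log" | sort | uniq -c | sort -rn

# Single worst offender, for scripting.
slow=$(grep -o 'src/[^ ]*\.java' "$log" | sort | uniq -c | sort -rn | head -1 | awk '{print $2}')
echo "$slow"
rm -f "$log"
```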