We’re running SonarCloud analysis for every pushed Git commit. Each commit triggers a scan of more than 4,000 files, which takes roughly 10 minutes per analysis.
Is it possible to cache the analysis results so that only files changed since the last scan are rescanned, which would significantly improve performance?
If not, is there any other way of speeding up the analysis? The project has roughly 100,000 lines of Java code, and the scan runs on a machine with 2 CPUs and 3 GB of memory.
I believe you are talking about what we call “incremental analysis”; you can have a look here:
Concerning other ways to improve performance, I don’t see any quick win in your case.
As pointed out in the other thread, there is already a ticket with the goal of speeding up SonarJava 6.x, and the team will work on finding a solution.
Yes, “incremental analysis” sounds exactly like what I’m looking for.
We’re using SonarCloud, and I’m not sure which version that corresponds to; I wasn’t able to find any information on it. Is there a way to see which SonarQube version SonarCloud is actually running?
While we are working to implement incremental analysis on pull requests for Java, we are also working to improve the raw analyzer speed.
We just released a new version that accepts the property sonar.java.experimental.batchModeSizeInKB. It activates an experimental feature that keeps more analysis data in memory, which has a significant impact on analysis speed for Java projects (a 10% to 90% improvement according to our tests).
Would you be OK with testing this property and sharing a before/after comparison?
You can start with sonar.java.experimental.batchModeSizeInKB=1024 and increase it gradually.
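As a minimal sketch, the property can be set in your project’s `sonar-project.properties` file (or passed to the scanner via `-D` on the command line). The project key and source path below are placeholders; only the `batchModeSizeInKB` line is the setting under discussion:

```properties
# sonar-project.properties — hypothetical project values; adjust for your setup
sonar.projectKey=my_org:my_project
sonar.sources=src/main/java

# Experimental SonarJava batch mode: keep up to 1024 KB of analysis data
# in memory; increase gradually while watching scanner memory usage.
sonar.java.experimental.batchModeSizeInKB=1024
```

With only 3 GB of memory on the scan machine, it may be worth monitoring the scanner’s heap while raising this value, since the feature trades memory for speed.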