What should we look at next? Our project is a monorepo with about 2M LOC. Incremental builds used to take 30 seconds until about a year ago, but have gradually climbed to 17 minutes for simple PRs.
There were some specific performance improvements for these rules in SonarQube v2025.1 LTA. I suggest upgrading and seeing if the issue persists.
I’d also suggest you make sure that your target branch has been recently analyzed. During a PR analysis, the cache of the target branch is used to speed up analysis. However…
Branches that are not scanned for more than seven consecutive days are considered inactive, and the server automatically deletes their cached data to free space in the database.
Hey Colin, thank you for writing in! I’ll ask the team to upgrade Sonar to the specified version.
With regard to the target branch, I checked, and it is being analysed regularly (on every PR merge, and also on a schedule).
About 15 minutes is spent in that particular sensor; the 40 minutes is for the main branch’s analysis.
For a PR, it’s still the same at 15 minutes, not 40 minutes (I confused it with the main branch’s analysis):
I also see that the server-side cache was used for the analysis:
[INFO] Server-side caching is enabled. The Java analyzer was able to leverage cached data from previous analyses for 592 out of 619 files. These files will not be parsed.
I found many such lines, so it appears the cache was being used effectively.
Thank you for reporting this problem! We are sorry for the inconvenience and will try to resolve it.
To reproduce and debug the issue, it would be helpful if you could share some additional information:
Is your project open source? If yes, could you share with us a link to the project?
If it is not open source, you could instead privately share with us some of the files generated by the analyzer. They are located in a folder named ir/java.
There should be a line in the log that reports the exact path of this folder, e.g.: [INFO] 11:39:19.350 Reading IR files from: /some/path/target/sonar/ir/java
I will open a private conversation with you outside of this thread for transferring these files.
It might also be helpful if you could share the amount of memory available on the machine where your project is scanned.
Lastly, if this issue is blocking you, you could temporarily disable the particular analyzer that is causing the delays.
For this, you can set the property sonar.internal.analysis.dbd=false. For instance, if you are using the Maven scanner, you can pass the argument -Dsonar.internal.analysis.dbd=false to achieve this.
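For reference, the full invocation could look something like this (assuming the standard sonar:sonar goal; adjust it to match your existing Sonar step in CI):

    mvn clean verify sonar:sonar -Dsonar.internal.analysis.dbd=false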
Let me know if you are using a different scanner, and I can provide the specific instructions for it.
Hey Anton, thanks for checking in. We’re going to try disabling it and see how the analysis reacts. However, what does the DBD scanner actually scan for? From a cursory search, it appears to be DB-related, and we don’t actively use any DB that Sonar is compatible with (we use MongoDB and Redis).
The Dataflow Bug Detection (DBD) analyzer scans for advanced bugs that require tracing the flow of data across methods and files and understanding complex logical and arithmetic conditions. It is not DB-related.
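To make that concrete, here is a small, hypothetical Java example (not taken from your project) of the kind of issue this sort of cross-method dataflow analysis is designed to catch, because finding it requires following a returned value into the caller and reasoning about the condition inside the callee:

    class Example {
        // Returns null for large requests, otherwise a computed limit.
        Integer findLimit(int requested) {
            if (requested > 100) {
                return null;
            }
            return requested * 2;
        }

        int effectiveLimit(int requested) {
            Integer limit = findLimit(requested);
            // Bug: when requested > 100, limit is null and unboxing it here
            // throws a NullPointerException.
            return limit + 10;
        }
    }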
You can disable it until we resolve the issue if the analysis delays are blocking you.
But in general, I would recommend against disabling it since it is required for the more advanced rules.