System requirements

Hey,
I have a quick question and would like to ask for some experiences/recommendations. We have a project with about 790k lines of code. An analysis with the Maven plugin currently takes around 25–30 minutes. Also, the log shows something like "server used 0 out of X files from cache".

Now to the question:
What can we tweak to make the analysis run faster, and what do we need to do so that the server actually uses the cache?

Specs:

  • Dev version

  • Container with SQ v2025.1.3

  • 2 cores

  • PostgreSQL 12.22

I’m not sure what other information might be helpful. If anything else is needed, please let me know 🙂

Hey there.

To improve analysis speed, it’s important to first identify which steps are taking the most time. Check out this post:

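If it helps, one low-effort way to see where the time goes is a run with scanner debug output enabled, since the scanner logs how long each step/sensor takes (lines along the lines of "Sensor … (done) | time=…"; the exact wording can vary by version). A minimal sketch, assuming a standard Maven setup; the host URL and token variable are placeholders:

    # Sketch: Maven analysis with scanner debug output enabled,
    # to spot which sensors/steps take the most time.
    mvn verify sonar:sonar \
      -Dsonar.host.url=https://sonarqube.example.com \
      -Dsonar.token=$SONAR_TOKEN \
      -Dsonar.verbose=true

Once you know which step dominates (the Java analysis itself, SCM data, report generation/upload), the tuning advice can be much more targeted.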
Regarding caching, you asked:

What can we tweak to make the analysis run faster, and what do we need to do so that the server actually uses the cache?

In most situations, the server should use the cache automatically, as long as a valid cache exists. According to the documentation (see the sketch after the quoted steps):

  1. Before an analysis, the SonarScanner downloads from the server the corresponding cache:
  • For a branch analysis: the cache of the branch being analyzed.
  • For a pull request analysis: the cache of the target branch.
  • Or, as a fallback, the cache of the main branch.

    Branches that are not scanned for more than seven consecutive days are considered inactive, and the server automatically deletes their cached data to free space in the database.
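One thing worth double-checking is that the scanner knows which branch it is analyzing, so it requests the matching cache. A minimal sketch, assuming you pass the branch explicitly from your pipeline; the URL, token variable and branch name are placeholders:

    # Sketch: pass the branch name explicitly so the server-side cache
    # of that branch is the one the scanner downloads before analysis.
    mvn verify sonar:sonar \
      -Dsonar.host.url=https://sonarqube.example.com \
      -Dsonar.token=$SONAR_TOKEN \
      -Dsonar.branch.name=my-branch

If the branch name is missing or differs from what the server knows, the scanner may fall back to the main branch's cache, or find none at all.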

If you’d like specific advice rather than general suggestions, I’d suggest that you share the full scan logs (ensure any sensitive information is removed first)!

Hey,
thanks for the quick reply and sorry I didn’t answer earlier. For some reason, the email must have slipped through.

Regarding the assumption about the scanner:
We actually use the Java scanner 99% of the time. The rest would only be Node/JS/TS, with a share of less than 1%.

Regarding the cache:
So the cleanup does make sense, but our “Sprint” branch (where all subtasks end up) is pushed to almost daily, usually with less than 7 days between pushes, and every push triggers a scan. Even the scans on this branch don’t use a cache.

Regarding the logs:
Uh, I’ll have to get back to you on that and, above all, filter the log. At the moment we’re running at debug level, and the log is getting a bit messy ^^
I’ll attach it here as soon as I have feedback from my leads.

Many thanks

Ahh,
timings attached

output.txt (109.2 KB)

I’m currently going through the debug log, and I notice that Sonar spends about 4 minutes going through commits; as of now, that’s around 118k commits. Why is Sonar doing this, and is there a way to prevent it? For example, by telling it to only look at commits that have happened since the last scan?
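To rule the SCM step in or out on our side, I was thinking of doing one comparison run with SCM data collection turned off (assuming sonar.scm.disabled is still the right property for that; I know blame/new-code info would be missing for that run), something like:

    # Sketch of a one-off comparison run: skip SCM (git blame/commit) data
    # collection to see how much of the ~4 minutes it accounts for.
    mvn verify sonar:sonar \
      -Dsonar.host.url=https://sonarqube.example.com \
      -Dsonar.token=$SONAR_TOKEN \
      -Dsonar.branch.name=Sprint \
      -Dsonar.scm.disabled=true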

And I see a lot of:
"Abandoning path due to exception in callee ….."
"Skipping dynamic dispatch because the number of candidates (5) is too high."

Sorry for the spam, I’m trying to write down everything I see in the log ^^