Performance guide for large project analysis

Nice thread. At this stage, though, I’d like to point out that any comparison restricted to LOC versus analysis time is likely to be unreliable and won’t yield any interesting outcome. A couple of reasons why I’m pointing that out, all around the fact that a SonarQube analysis of a project involves many components/factors:

  • obviously the versions of SonarQube and of analyzers, as performance is continuously improved
  • the actual server-side configuration: which rules are activated in the Quality Profile? Is duplication detection enabled? etc.
  • the number of extensions installed in the environment: custom plugins? Coverage import? etc.

All of those factors contribute to the overall analysis and can make a difference in timing. So when looking at performance aspects, I would suggest taking a pragmatic approach.

Understand what is taking time

The analysis is made of the client-side scanner run and the server-side Background Task. Narrowing down a long execution depends on which part is taking the time:

  • client-side scanner run: enable debug logs (sonar.verbose=true) with timestamps, and nail down the piece that takes time. If it’s the actual code analyzer doing its job, then that part can indeed grow with the volume of the codebase
  • server-side background task: check the state of resources (CPU/RAM/IO), see whether database interactions are slow for some reason, etc. Verbose logs can also help narrow down the lengthy part.
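As a rough sketch of the "nail down the slow piece" step: once you have timestamped debug logs, you can compute the gap between consecutive log lines and flag the largest one. The `HH:MM:SS` prefix and the sample log lines below are assumptions for illustration; adjust the parsing to whatever format your scanner version actually emits.

```shell
# Print the largest time gap between consecutive timestamped log lines.
# Assumes each line starts with an HH:MM:SS timestamp (hypothetical format).
find_slowest_step() {
  awk '
    {
      split($1, t, ":")
      secs = t[1] * 3600 + t[2] * 60 + t[3]
      if (NR > 1 && secs - prev > max) { max = secs - prev; line = $0 }
      prev = secs
    }
    END { printf "slowest step (%ds): %s\n", max, line }
  '
}

# Hypothetical sample log; with a real run you would pipe the scanner output:
#   sonar-scanner -Dsonar.verbose=true | tee scan.log; find_slowest_step < scan.log
find_slowest_step <<'EOF'
10:00:01 INFO  Sensor JavaSensor started
10:00:03 INFO  Sensor JavaSensor finished
10:05:03 INFO  Sensor XmlSensor started
10:05:04 INFO  Sensor XmlSensor finished
EOF
```

On the sample above this reports a 300-second gap before the `XmlSensor started` line, i.e. the preceding step is the one to investigate first.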

Monitor

Whatever the context, the minute one starts to look into performance, monitoring goes hand in hand with it. There’s a good initial guide in the documentation. Ultimately these are pure monitoring/operational considerations for a Java application, i.e. first understand whether your perceived performance issue relates to system performance (CPU/RAM/IO) or application performance (the product itself, but also its interaction with other components like the database).
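Purely as a sketch of that "system vs application" split (standard Unix tools only; the `pgrep` pattern in the usage comment is an assumption, substitute the actual SonarQube/scanner JVM pid):

```shell
# First-pass snapshot: per-process load vs overall machine load,
# to separate application pressure from system pressure.
snapshot() {
  pid="$1"
  # Application view: is this JVM itself busy (CPU/RAM)?
  ps -o pid=,pcpu=,pmem= -p "$pid"
  # System view: is the machine as a whole under load?
  uptime
}

# Hypothetical usage: snapshot "$(pgrep -f sonar)"
snapshot $$
```

If the process figures are low while the machine load is high, look at the system and neighbouring processes; if the JVM itself is hot, dig into the application side (rules, extensions, database interactions).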
