Clarification on Pull Request / Branch Analysis speed

Hello,
I have a project on the latest SonarQube Community Edition doing post-merge analysis on our master branch, and we are considering upgrading to a paid edition to run analysis on pull requests in GitHub Enterprise and to prevent merging when new issues are found or coverage falls below a threshold. What doesn’t seem clear to me is the depth and speed of pull request / branch analysis.

Incremental Analysis seems to suggest that the paid editions still run the full analysis to decorate pull requests.
How to configure SonarQube (preview mode) to comment less verbose, however, seems to suggest that the branching feature only analyzes the modified parts (just files? just lines?).

Can you please clarify which of these it is, and what performance to expect for pull request decoration relative to the full analysis done by the Community Edition?

(I realize that I can just get a trial and find out myself, but I figure having this posted here will help others and save time)

Hi,

Welcome to the community!

The first thread you link to is generally correct: even in the paid editions, pull request analysis still runs on the full code base.

However, it is worth noting that in the intervening two years we have added analysis caching, though only for C, C++, and Objective-C.

The second post you link to is about a feature that was dropped before the current LTS. Even with that feature, though, IIRC analysis covered the entire code base.

Regarding the speed of analysis, that’s going to depend entirely on the size of your project and the resources available for analysis.

I have seen some people use scripting tricks to limit the sonar.sources value for a PR to only the files changed in it. Aside from being fiddly, doing it this way runs the risk of missing some issues, so we don’t recommend it.
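For concreteness, the trick described here usually looks something like the sketch below. This is an assumption about how people wire it up in CI, not a recommended or official setup; the base branch name and scanner invocation are placeholders:

```bash
#!/usr/bin/env bash
# Hypothetical CI step: restrict analysis to the files changed in a PR.
# Assumes the PR targets origin/master and sonar-scanner is on the PATH.
set -euo pipefail

# Files changed relative to the merge base, excluding deleted files,
# joined into the comma-separated form sonar.sources expects.
CHANGED=$(git diff --name-only --diff-filter=d origin/master...HEAD | paste -sd, -)

if [ -n "$CHANGED" ]; then
  # Override sonar.sources for this run only. As noted above, issues
  # whose detection requires the unchanged files will be missed.
  sonar-scanner -Dsonar.sources="$CHANGED"
fi
```

On shallow CI clones you may also need to fetch the base branch first so the merge base exists locally.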

HTH,
Ann


I think that answers most of my questions.

To clarify, I wasn’t asking for the actual speed, just the relative speed compared to how long our current full analysis takes on the Community Edition.

Unfortunately that time is currently too long to make it a mandatory check on each pull request, so that will likely not be an option. I see three options (not mutually exclusive):

  1. Do scripting tricks with sonar.sources on pull requests. Accept the risk and mitigate it in other ways:
    e.g. have an additional full analysis done on the branch after each successful merge, or periodically, and open tickets to address any issues found that weren’t caught by the reduced-scope pull request analysis (see the sketch after this list).

  2. Improve hardware / infrastructure to speed up builds such that the time for a full analysis on each pull request becomes reasonable.

  3. Refactor code to improve build / analysis speed (e.g. break the code apart into smaller, individually buildable modules).
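As a sketch of the mitigation in option 1, assuming a scheduled or post-merge CI job (the branch name and invocation are placeholders):

```bash
# Hypothetical post-merge / nightly CI step: run the unrestricted scan on
# master so anything missed by the reduced PR scope still surfaces.
git checkout master && git pull
sonar-scanner   # no sonar.sources override: full project scope
```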

Let me know if there are other options I’m overlooking. Thanks!

FYI: My company has several projects we are looking to do this on, but the main two are a .NET project (~25 minutes for over 750,000 LOC) and a Java project (~15 minutes for over 250,000 LOC).

Hi,

You’ve covered most of the options. There is one thing that might help a little. You’ve probably already done it, but it’s worth mentioning: is there any code you can eliminate from analysis, e.g. libraries & generated code?
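For later readers: such exclusions are typically set with the sonar.exclusions property. A minimal sketch, where the glob patterns are made-up placeholders for your own layout:

```bash
# Hypothetical scanner invocation excluding third-party and generated code
# from analysis. Adjust the (placeholder) glob patterns to your project.
sonar-scanner \
  -Dsonar.exclusions="**/generated/**,**/third_party/**,**/*.min.js"
```

The same patterns can also live in sonar-project.properties instead of on the command line.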

Ann

I believe we’ve already done that, but it’s good for us to double-check. Thanks.
