Best practices to reduce analysis time

Hi,

Our team recently subscribed to SonarCloud, and we were wondering about the best practice for handling an issue we are currently facing in our process.

We have successfully integrated the SonarCloud analysis to run in our Azure DevOps CI pipeline:

  • According to SonarCloud, we have about 850,000 lines of code considered in the analysis
  • Our pipeline takes an average of 5 minutes without the SonarCloud code analysis, and around 30 minutes with it

As you can see, the code analysis significantly increases the execution time of our pipeline, and we are facing the following issue:

  • 3 developers check in code at the same time
  • Each developer’s work is queued, and only after the pipeline has run to completion for one developer’s changes (30 minutes each) does it run again for the next developer, including a brand-new SonarCloud analysis each time
  • Total time taken for all the aforementioned work to be reflected in the deployed site would be about 1.5 hours

Upon more digging, we found that the “incremental analysis” feature is just what we need, but unfortunately it is no longer supported or recommended by the SonarCloud team. So with regard to the process explained above, we would like to know the best practices to implement to reduce analysis time and redundancy in our pipeline.

Looking forward to any help. Thank you.


Hi @Bryan_Tan and welcome to the community.

Which language are you analyzing? Is it C#?

Given the amount of LOC you have, an increase in build time is expected (and we apologize for that), but as you are on SonarCloud and we’re continuously improving our analyzers, I have no doubt this duration will decrease over time.

Concerning your scenario, I would have a few questions before thinking about a solution:

  • Is there any specific need to analyze the whole codebase each time a dev pushes their changes? Couldn’t there be multiple pipelines with proper folder filters that analyze just the changes?
  • Are the devs working on the same branch?
  • Which metrics are you most interested in for those analyses?
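To illustrate the folder-filter idea above: in Azure Pipelines you can scope a pipeline's trigger to a path, so a push only queues the pipelines whose paths it touches. A minimal sketch (the branch and folder names are just placeholders, not from your setup):

```yaml
# Hypothetical pipeline for one module: it only triggers when
# files under src/ModuleA change on main.
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - src/ModuleA
```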

May I ask where you heard/read that?

Thanks.
Mickaël

Hi @mickaelcaro , thanks for reaching out.

The project we are analyzing is built on the .NET Framework, so yes, mostly C#, JS, and HTML code.

  • Not everything needs to be analyzed on each pushed change. However, because the components in our system are interrelated, we cannot be certain which files are affected when a developer pushes code for a certain feature; that’s why we were wondering if some sort of incremental analysis is available.
  • Yes, currently all our developers work on the same branch. For bigger features, we sometimes branch out and merge back later.
  • We are interested in all of the code metrics SonarCloud provides :grinning:, but if we had to choose, Reliability, Security, and Maintainability would be of greater importance.

No problem, and apologies: what I read said it’s available only for SonarQube, not SonarCloud. Thanks for pointing it out.

Appreciate the help.

Hi @Bryan_Tan

Thanks for the insights here.

Well, to be honest, I don’t think there’s a single best solution here, but rather a set of improvements to consider:

  • You can use SonarLint in your IDE (if you aren’t already) to detect many bugs and issues as early as possible; you can also connect it to your SonarCloud project to have the proper Quality Profile activated.
  • As I mentioned in my previous post, a good approach might be to split your SonarCloud analysis into multiple SonarCloud projects, with proper path filters at the pipeline level so that only the relevant (changed) parts of your code are analyzed.
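As a sketch of the second point, one of those smaller, per-module pipelines could look roughly like this in Azure Pipelines. Note that the project key, module path, and service-connection name below are placeholders I'm assuming for illustration, and that `batch: true` also helps with your queuing scenario by collapsing pushes that arrive while a run is in progress into a single later run:

```yaml
# Hypothetical per-module pipeline, scoped to one sub-project.
trigger:
  batch: true           # commits queued during a run are analyzed together in one later run
  branches:
    include:
      - main
  paths:
    include:
      - src/WebApp      # placeholder module path

steps:
  - task: SonarCloudPrepare@1
    inputs:
      SonarCloud: 'MySonarCloudConnection'   # placeholder service-connection name
      organization: 'your-org'               # placeholder
      scannerMode: 'MSBuild'
      projectKey: 'your-org_webapp'          # one SonarCloud project per module (placeholder key)
      projectName: 'WebApp'
  # ... your usual build and test steps go here ...
  - task: SonarCloudAnalyze@1
  - task: SonarCloudPublish@1
    inputs:
      pollingTimeoutSec: '300'
```

With this split, three devs pushing to three different modules would run three smaller analyses in parallel pipelines instead of queuing three full 30-minute runs.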

Another question I’d have: how do your devs “recognize” their own analysis among the analyses happening on that same branch? Do they refer to the commit SHA somehow? Or some other way?

Thanks and HTH,
Mickaël