Advice on Analysis Strategy for working with ‘clean as you code’ in Azure DevOps

  • ALM: Azure DevOps

  • CI: Azure DevOps YAML pipelines

  • Languages: C#, VB.NET

I’m setting up SonarCloud for the first time on a 500k-line, 100-project .NET codebase. I’ve read through the documentation but am still unclear about the best way to set things up, and I would be very grateful for advice on which of our Azure DevOps pipelines should run the SonarCloud tasks to enable us to work with ‘clean as you code’. Our Git branching and build (YAML) pipeline setup is as follows:

  • A development branch protected by a branch policy from which we create topic branches.

  • Topic branches merged back to development via Pull Requests.

  • A ‘push’ pipeline that runs every time code is pushed to a topic branch (build and unit tests).

  • A ‘pull request’ pipeline that runs as part of the PR process to evaluate both the proposed merge and the actual merge (build, unit tests and integration tests).

  • A ‘nightly’ pipeline that runs overnight on the development branch consisting of build, unit tests, integration tests and end-to-end tests.

  • From time to time, a need for long-lived branches for bigger pieces of work; PR and pipeline strategy as yet undecided.
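For context, here is a minimal sketch of how the trigger sections of two such pipelines might be wired up (file and branch names are illustrative, not our actual ones):

```yaml
# push-pipeline.yml (hypothetical name): build + unit tests on every push to a topic branch
trigger:
  branches:
    include:
      - topic/*

---
# pr-pipeline.yml (hypothetical name): in Azure Repos Git, PR validation builds are
# triggered by the branch policy configured on 'development', not by a YAML 'pr:'
# section, so the CI trigger is disabled here.
trigger: none
```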

As well as any general advice you can give me I have these questions:

  • I realise the ‘pull request’ pipeline needs to run SC, but does the proposed merge and actual merge process cause any issues for ‘clean as you code’? Or does that give you the ‘stuff to fix’ following the proposed merge followed by a report of what wasn’t fixed after the actual merge?

  • Does SC need to run in another pipeline to get the full analysis of the development branch?

  • Can I use existing pipelines with a long-lived branch and PRs or will that mess things up for the development branch?

Many thanks for your help!


Hi @Graham and welcome to the community!

Generally speaking, SonarCloud analysis works like this:

  • You make changes on a feature branch, push it and create a Pull Request; a build is triggered with a SonarCloud analysis plugged in.
  • The PR is approved and merged into the target branch: another SonarCloud analysis is done there, to update the branch metrics to reflect the merge.
  • If your main (or master) branch remains stale, it really depends on your needs, but a new analysis from time to time doesn’t hurt.

So in your case, for a quick feedback loop, I would:

  • Put a SonarCloud analysis on your push pipeline (though this can be avoided by using SonarLint in your IDE, if it’s supported).
  • Put a SonarCloud analysis on your PR pipeline.
  • Put a SonarCloud analysis on your nightly pipeline, but run it only if a PR has been merged to the development branch since the last run.
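As a rough sketch, the SonarCloud steps in the PR pipeline would look something like this, assuming the SonarCloud Azure DevOps extension is installed; the service connection, organization and project key values are placeholders you would replace with your own:

```yaml
steps:
  # Prepare must run before the build so the scanner can hook into MSBuild
  - task: SonarCloudPrepare@1
    inputs:
      SonarCloud: 'MySonarCloudConnection'   # placeholder service connection name
      organization: 'my-org'                 # placeholder
      scannerMode: 'MSBuild'                 # the mode for .NET / MSBuild projects
      projectKey: 'my-org_my-project'        # placeholder

  - task: DotNetCoreCLI@2
    inputs:
      command: 'build'

  # Run the analysis on the build output, then publish the Quality Gate result to the PR
  - task: SonarCloudAnalyze@1
  - task: SonarCloudPublish@1
    inputs:
      pollingTimeoutSec: '300'
```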

One important question, however: is your entire codebase built and tested on each pipeline run, or do you have path filters depending on the changes?
Depending on the number of rules and other parameters, we acknowledge that the full pipeline time can increase by a non-negligible factor if the analysis covers the whole codebase each time, and the devs will not be happy.

Let me know whether this advice suits your needs.

Answering your questions now:

If I understood correctly, you are doing two checkouts in the same pipeline, and building/testing both of them? The SonarCloud pull request analysis is done on the commit hash that has been checked out, so whichever one is on your file system at analysis time will be “the one” analysed.

Depending on your answer above, I’ll come back to you on this. Normally, there’s no need for another pipeline.

It depends on how you set things up in Azure DevOps (whether you use the same pipeline for PR triggers and branch triggers). We rely on variables set by the build agent, so it should be OK.
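If you want to check which mode the tasks will pick on a given run, you can echo the agent-set variables in a diagnostic step; Build.Reason is set to PullRequest for PR validation builds:

```yaml
steps:
  # Diagnostic step: show the variables SonarCloud uses to detect PR vs. branch builds
  - script: |
      echo "Build.Reason: $(Build.Reason)"
      echo "Source branch: $(Build.SourceBranch)"
    displayName: 'Show build trigger context'
```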

HTH,
Mickaël


Many thanks for all that great advice Mickaël! Rather than pose more questions now I’ll get going on implementing your suggestions and see what happens. I’ll update the thread with results. Cheers!
