Need help with eliminating a second (likely redundant) build+test step in SonarCloud/Azure DevOps

Hi all! We could use some advice on how to implement a fast, lean and mean Azure DevOps pipeline that uses SonarCloud as its quality gate.

Template:

  • ALM used: Azure DevOps
  • CI system used: Azure DevOps
  • Scanner command used: See excerpt below (but basically all of SonarCloudPrepare, SonarCloudAnalyze, SonarCloudPublish)
  • Languages of the repository: ASP.NET Core / C#, and of course YAML for Azure DevOps
  • Other template items are not applicable

We've been using SonarCloud for some time now, and we enforce a set of standard quality gates in the pipeline at pull request level: it's not possible to merge to master without meeting a certain test coverage threshold, without introducing code smells, and so on. This all works great.

The thing is that we want to reduce the pipeline's lead time, and a relatively large share of it is spent on compiling and running tests (the entire pipeline only takes 4 minutes or so to complete, of which compiling/testing takes up about a minute, but it does so twice). Also see the following representation of the pipeline, which hopefully doesn't come as a surprise:

In our attempts to reduce lead time we came to the conclusion that compiling and testing twice, first at PR level and then again on the master branch, is effectively waste, and eliminating it could reduce our lead time by a whopping 25%. However, we do not see how to make this work smoothly with SonarCloud, since the PR compile+test produces reports that are compared against those of the master compile+test, and obviously we want master to keep producing the 'baselines' that PRs are measured against.
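For context, the duplication comes from the same pipeline being queued twice: once as the pull request validation build and once by the CI trigger after the merge to master. Below is a simplified, illustrative sketch of the trigger side of the YAML; whether the PR run comes from a pr: trigger or from a branch policy build validation depends on where the repository is hosted.

trigger:
  branches:
    include:
      - master            # CI run after the merge: produces the new SonarCloud baseline

# Only applies where YAML PR triggers are honoured (e.g. GitHub-hosted repos);
# with Azure Repos Git the PR validation build is configured via a branch policy instead.
pr:
  branches:
    include:
      - master            # PR run: produces the pull request analysis / quality gate

jobs:
  - job: BuildandAnalyseSonarCloud
    # ... same build, test and SonarCloud steps as in the excerpt further below ...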

Is it possible for SonarCloud not to do the whole analysis twice, but instead store the results and offer a separate command or similar to finalize the analysis later? Given the pipeline flow above, could the PR CI generate and analyze the results as usual, and the master CI then reuse the analysis that has already been done and simply mark it as SonarCloud's new baseline to measure against, removing the need to compile and test the code twice?

Or otherwise in some better defined criteria:

  • We need to run Sonar at PR level at the latest, because we want to fail fast on static code analysis and not introduce unchecked items onto master.
  • We need individual PRs to be compared against the master branch, and the master branch only.
  • We need to build and test only once within the entire pipeline, and not redo this time-consuming step.

Also, below is a simple YAML excerpt from our pipeline, with some irrelevant PowerShell steps removed.

Any help/insight would be greatly appreciated!

jobs:
  - job: BuildandAnalyseSonarCloud
    displayName: Build and Analyse in SonarCloud
    pool:
      vmImage: ubuntu-latest
    steps:
      - task: SonarCloudPrepare@1
        displayName: 'Prepare Analysis Configuration'
        inputs:
          SonarCloud: '…'
          organization: '…'
          scannerMode: 'MSBuild'
          projectKey: '…'
          projectName: '…'
          projectVersion: '$(Build.BuildNumber)'
          extraProperties: |
            sonar.coverageReportPaths=$(System.DefaultWorkingDirectory)/TestResults/SonarQube.xml
      - task: DotNetCoreCLI@2
        displayName: 'Dotnet test'
        inputs:
          command: 'test'
      - task: SonarCloudAnalyze@1
      - task: SonarCloudPublish@1

Hi @Jelle, and welcome to the community!

No, we don't have this feature currently.

Is the SonarCloud analysis on master really needed on your side as a gateway before deploying your changes? Otherwise, I would think about a nightly build, decoupled from your CI/CD, which only does the build/testing and the SonarCloud analysis, just to keep your master as a good baseline. Is that something you have already thought about?
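Roughly something like this (just a sketch; the schedule, pool, job name and steps are placeholders you would adapt on your side):

schedules:
  - cron: "0 2 * * *"            # every night at 02:00 UTC
    displayName: Nightly master analysis
    branches:
      include:
        - master
    always: true                 # run even if master did not change

trigger: none                    # keep it out of the regular CI/CD flow
pr: none

jobs:
  - job: NightlySonarCloudBaseline
    displayName: Nightly build, test and SonarCloud analysis of master
    pool:
      vmImage: ubuntu-latest
    steps:
      - task: SonarCloudPrepare@1
        inputs:
          SonarCloud: '…'
          organization: '…'
          scannerMode: 'MSBuild'
          projectKey: '…'
          projectName: '…'
      - task: DotNetCoreCLI@2
        displayName: 'Dotnet test'
        inputs:
          command: 'test'
      - task: SonarCloudAnalyze@1
      - task: SonarCloudPublish@1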

HTH,
Mickaël

Hi Mickaël, thanks for your response!

We are aiming for SOC 2 compliance, and everything we can enforce by design/automation is what we go for. One of those things is having proper testing guidelines, measures, etc., and we need to make sure that these thresholds are enforced before going to production. That means that:

  • We need to do a Sonar run on the PR and have that as a gate to finalize the PR (some more context as to why below).
  • We need to finalize the results on master so we can compare the work-in-progress on PRs against master. Right now we do this by building, testing, and analysing both the PR and master. It works, and works well, but it can be better :wink:

As for the nightly build, we have thought about this, but we think it's less desirable than building, testing and analyzing twice:
Instead of opting for checks/gates later in the process, we want to run them as early as possible. The easiest way, in our opinion, is to only allow quality-approved code onto master and then simply push it forward; the sooner you build quality in, the cheaper it eventually is. So we choose to enforce quality the moment code goes to master, instead of having a parallel nightly process that checks the code and gives feedback, in our opinion, too late (by then the code may even already be in production, developers are working on something else so it introduces context switching to fix these 'afterthoughts', the definition of done gets muddled, etc.).

Regardless, it would be nice to be able to create an analysis and reuse it to finalize later, as described above. Until then we would rather redo the build and test for both master and a PR, as it only adds a minute or two, and the upside is that we get immediate feedback on code quality rather than from a parallel nightly build, and we enforce quality on master rather than fixing it some time later during development.

I think the problem with this kind of feature is that you cannot be sure that the analysis you would reuse is still consistent with what is on master.
Simple example: you have 2 Pull Requests, each of which produces its own analysis report. Once they are merged into master, apart from making sure that the master analysis respects the order of merge (and even that is not the full truth), if the latest build takes more time than the first one, you will not be able to correlate your reports, and you could potentially get weird results.

With the Azure DevOps extension, we also provide a pre-release gate, which checks the status of the Quality Gate related to the artifact being released (which is tied to a specific pipeline execution), but that is more of a complement than a full replacement for a proper analysis of your master branch.

I don't see any big improvement over what you are doing right now; I believe this is the way to go.

Mickaël