SonarCloud for monorepo - best practice


We are using SonarCloud with Azure DevOps. Our project is a .NET monorepo (one repository). I'm trying to find the best way to implement SonarCloud for our scenario. Currently, the CI runs in parallel using separate project files, or targeting a few project files inside the solution. This way we have 6 separate builds.
Another important point: we rely on custom Roslyn analyzers, so we need to use SonarCloud with the Scanner for MSBuild.

So, to implement SonarCloud and get all its benefits (PR decoration, etc.), I see two possible approaches:

  1. Use SonarCloud with the monorepo scenario and create a SonarCloud project for each build.
    1.1 Big disadvantage: shared libraries/code are not gathered in one place, so we end up with almost twice as many LOC as we should have.
    1.2 Advantage: we could build only the changed projects, so builds would be faster.
  2. SonarCloud with one project for one big solution that contains all the projects.
    2.1 Disadvantage: the build will be extremely long, unless we can target only the changed projects in MSBuild; I need to check whether that works. Is it possible, or do we need to build the whole solution each time?
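For context, the Scanner for .NET wraps the MSBuild step between a `begin` and an `end` command, and only what MSBuild actually compiles gets analyzed. That is exactly why option 2's "build only the changed project" idea is uncertain: restricting the build also restricts the analysis scope. A minimal sketch (the project key, organization, and paths below are hypothetical):

```shell
# Begin step: hypothetical project key and organization;
# PR decoration parameters are normally injected by the Azure DevOps extension
dotnet sonarscanner begin /k:"my-org_my-solution" /o:"my-org" \
  /d:sonar.login="$SONAR_TOKEN" /d:sonar.host.url="https://sonarcloud.io"

# SonarCloud analyzes only what this build step compiles, so building
# a single project here would also limit the analysis to that project:
dotnet build MySolution.sln --no-incremental
# vs. dotnet build src/ChangedProject/ChangedProject.csproj

# End step: uploads the analysis results to SonarCloud
dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN"
```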

And another generic question: I would like to know about strategies for defining and working with the baseline in Git flow. If we have, let's say, a develop branch as the baseline, and a lot of PRs coming in, should we run the SonarCloud steps after each merge, or on a regular schedule? What is the best practice here?

Thanks for any answers!

Hi @roofiq

I think 1 remains the best option. To avoid duplicate analysis of the shared libraries, you can have one "master" pipeline/project that includes all of them, and exclude them from the others. It requires fine-grained tuning, but I think it's worth the work.
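A sketch of that exclusion setup (project keys and paths are hypothetical): each per-build SonarCloud project excludes the shared code via `sonar.exclusions`, so its LOC are counted only once, in the "master" project.

```shell
# Per-build pipeline: this SonarCloud project skips the shared libraries
dotnet sonarscanner begin /k:"my-org_service-a" /o:"my-org" \
  /d:sonar.login="$SONAR_TOKEN" \
  /d:sonar.exclusions="src/Shared/**,src/Common/**"

# "Master" pipeline: no exclusions, so the shared code is analyzed
# (and its LOC counted) only here
dotnet sonarscanner begin /k:"my-org_shared-master" /o:"my-org" \
  /d:sonar.login="$SONAR_TOKEN"
```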

For 2, I don't see how you could achieve this: since the SonarCloud analysis is based on the solution in order to encapsulate every project, they will all be analyzed anyway.

I would say that it depends on the way you are delivering / deploying the content of your develop branch.
Doing it after every merge will ensure that your baseline stays safe and properly green, ready to go to production at any time, which also enables continuous deployment if that is not already set up.
Doing it on a regular basis will probably save you a bit of computation time, but the drawback is that the feedback loop will be slower.
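If you go with the per-merge approach, the analysis pipeline just needs a CI trigger on the baseline branch so every completed merge produces a fresh analysis (branch name assumed to be `develop`):

```yaml
# azure-pipelines.yml: run this analysis pipeline on every push
# (i.e. every merged PR) to the develop baseline branch
trigger:
  branches:
    include:
      - develop
```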