We have been aching to implement something like SonarQube / SonarCloud, and we were finally able to start this project. I have my first two projects up and running, but I’m looking for an external opinion on what the correct setup would be for our company.
We have built a large platform that is composed of multiple domain services. We also have quite a big “shared” piece of code between our services. Everything is maintained in a single C# solution, but we have multiple buildable projects. Each project has its own build pipeline, in which we build the project into a Docker image and deploy it to Azure.
The reason we did it this way is that our team is too small for separate solutions, given the overhead they would require. So for now we took this hybrid approach.
root/src/Data/     <== Logic for main DB
root/src/Domains/  <== Specific domain projects that can be used in multiple web endpoints
root/src/Services/ <== Shared services like CSV import, GPS conversion, etc.
root/src/Shared/   <== Shared infrastructure like the service bus and abstract classes
root/src/Web/      <== Actual endpoints for our platform that are converted into Docker images
How would you setup SonarCloud in our situation?
We now have two of our projects in SonarCloud using a monorepo strategy (Monorepo Support | SonarCloud Docs), but I see that it still analyses all the duplicated shared code in each project. With over 20 endpoints, this would increase our pricing dramatically.
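To illustrate what I mean by per-endpoint analysis: each endpoint pipeline currently runs the scanner over everything it builds, shared code included. A rough sketch of how the shared folders could be excluded per endpoint with the dotnet SonarScanner (the project key, organization, and paths below are placeholders, not our real values):

```shell
# Sketch: per-endpoint analysis that excludes the shared folders so only
# the endpoint's own code is counted. Keys, org, and paths are placeholders.
dotnet sonarscanner begin \
  /k:"my-org_endpoint-a" \
  /o:"my-org" \
  /d:sonar.token="$SONAR_TOKEN" \
  /d:sonar.exclusions="src/Shared/**,src/Services/**,src/Data/**"
dotnet build src/Web/EndpointA/EndpointA.csproj
dotnet sonarscanner end /d:sonar.token="$SONAR_TOKEN"
```

The downside of this is that the shared code would then not be analysed by any endpoint project at all, unless it gets a project of its own.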
I have thought of creating a separate analysis pipeline that builds the entire solution at once, so all the code would only be analysed once. But this means the analysis would be global instead of per project, so issues could no longer be routed to the responsible developer.
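For reference, that solution-wide variant would roughly look like the following (solution name and project key are placeholders); shared code would be scanned exactly once, at the cost of the per-project split:

```shell
# Sketch: one analysis pass over the whole solution, so every project and
# the shared code are scanned in a single SonarCloud project. Names are placeholders.
dotnet sonarscanner begin /k:"my-org_platform" /o:"my-org" /d:sonar.token="$SONAR_TOKEN"
dotnet build Platform.sln
dotnet sonarscanner end /d:sonar.token="$SONAR_TOKEN"
```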
- ALM used: Azure DevOps
- CI system used: Azure DevOps