using mcr.microsoft.com/dotnet/sdk:5.0-alpine3.12 as the build image base
Azure DevOps Pipelines to execute the build
SonarQubePrepare/Analyze/Publish tasks are available
docker build to compile the app inside the container
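For context, the build step itself is roughly the following (the image tag and build context are placeholders, not names from the original setup):

```shell
# Compile the app inside the container; the Dockerfile's build stage
# is based on mcr.microsoft.com/dotnet/sdk:5.0-alpine3.12.
# "myapp" and "." are placeholder values.
docker build -t myapp:latest .
```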
Since the build is done inside the Docker container, using a specific environment and SDK version, I would like to ensure that exactly the same environment and SDK are used during the SonarQube scan.
I’m just afraid that if I execute the scan directly on a build agent, it could give me false positives or false negatives simply because a different SDK and system were used.
So far I couldn’t find any way to run the SonarQube scan on my sources inside the build container, and I’m wondering if it’s even needed…
3. What have you tried so far to achieve this?
My very first approach was to just check out the sources to the build agent and execute the scan on them directly in the pipeline - but this is exactly what I want to avoid (or maybe not? see the last question in this message).
Then I tried to copy the sources and artifacts from the container to the outside, but this didn’t work at all, as the SonarScanner needs to be plugged in at MSBuild compilation time.
Searching the web for SonarQube and Docker, the results refer to running the scanner as a Docker container, rather than scanning code that is being built inside a container.
Is there any way of achieving my goal that I’m missing?
Or should I even care?
I learned that SonarQube for .NET plugs into msbuild.exe, which is in fact taken from the SDK folder. But still: if the source code is the same, can there be different findings or reports when a different SDK version is used to compile the app?
It is possible to run SonarScanner for .NET directly in your Docker container during the build; just use the commands described here in your Dockerfile. I don’t think the SonarQubePrepare/Analyze/Publish tasks are needed in this situation. The downside of this approach is that you will also need to install the Java prerequisite in your container, which you may find undesirable! I am currently exploring ways to improve this experience.
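The in-container scan described above boils down to wrapping the build with the SonarScanner for .NET begin/end commands. A minimal sketch (the project key, server URL, solution file, and the SONAR_TOKEN variable are all placeholders you would replace with your own values):

```shell
# Install SonarScanner for .NET as a global dotnet tool inside the container.
dotnet tool install --global dotnet-sonarscanner
export PATH="$PATH:$HOME/.dotnet/tools"

# Begin the analysis, then build, then end - the scanner hooks into MSBuild
# between "begin" and "end". All /d: values here are placeholders.
dotnet sonarscanner begin \
  /k:"my-project-key" \
  /d:sonar.host.url="https://my-sonarqube.example.com" \
  /d:sonar.login="$SONAR_TOKEN"
dotnet build MySolution.sln
dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN"
```

Note that the container also needs a Java runtime installed for the `end` step, which is the downside mentioned above.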
Should you even care? This is an interesting question - I think it is very unlikely that you will see any differences in the results as long as the build parameters are the same. Although you are effectively building twice, your original approach has the advantage that you can run the creation of the Docker container and the analysis in parallel, which may reduce your overall build time.
I hope that is helpful; please let me know if you have any further questions.
Hi Tom!
This is in fact what I’m doing currently (great minds think alike!)
I can say a bit more about my integration with Azure Pipelines with this approach:
First I execute SonarQubePrepare on the agent, which sets the SONARQUBE_SCANNER_PARAMS variable with all the required params (branch name, pull request variables, authorization from the Azure Service Connection, etc.).
I create an ‘intermediate’ Docker image with all the SonarQube tooling (including Java 11).
With the SonarQube scan execution stored as a script, I pass SONARQUBE_SCANNER_PARAMS to docker run under the exact same name, so the dotnet sonarscanner begin and dotnet sonarscanner end processes pick up most of the params automatically from this environment variable.
I capture report-task.txt and copy it from the container to the location on the build agent where SONARQUBE_SCANNER_PARAMS expects the file to be; thanks to that…
… SonarQubePublish picks it up and publishes the result, so I get the Quality Gate integration with the pipeline and the pull request.
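The middle steps above can be sketched as a pipeline script roughly like this (the image name, container name, scan script, and report paths are all placeholder assumptions, not the exact values from the original setup):

```shell
# SONARQUBE_SCANNER_PARAMS was already set by the SonarQubePrepare task.
# Pass it into the container under the exact same name, so the
# dotnet sonarscanner begin/end commands in the scan script pick it up.
docker run --name sonar-scan \
  -e SONARQUBE_SCANNER_PARAMS="$SONARQUBE_SCANNER_PARAMS" \
  myapp-sonar:latest ./run-sonar-scan.sh

# Copy report-task.txt out of the container to the path on the agent
# where the later SonarQubePublish task expects to find it
# (both paths below are placeholders for your actual locations).
docker cp \
  sonar-scan:/src/.sonarqube/out/.sonar/report-task.txt \
  "$AGENT_BUILDDIRECTORY/.sonarqube/out/.sonar/report-task.txt"
```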
This is the most elegant solution I could find so far.
More than performance/build time, what I care about is the trustworthiness of the scan result. As I said, I was mostly afraid that a different SDK and a different MSBuild could be used for the original build and for the scan.
Hope it will help somebody in the future!
I’m also open to improving the way I configured this.