We are experimenting with integrating SonarCloud into our build pipeline on Azure DevOps. We are using private agents, and SonarCloud analysis is now enabled for some jobs in our build process. However, it seems that the registration of the MSBuild integration for SonarCloud is not always cleaned up properly. We now have random build jobs failing because they run on an agent that previously ran a job that enabled the SonarCloud integration for MSBuild. We are NOT running parallel builds on agents, so that is not the issue.
This is a real showstopper for us for using SonarCloud. We cannot simply use manual scanning instead, because that would cause us to lose the ability to scan all projects that are actually built in the running job. Using inclusions and exclusions would be very tedious because of the number of projects and their locations.
Is there maybe something we can do manually, as an extra step in our build job, to remove the global registration and make sure it is actually gone?
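For example, I could imagine a script step along these lines (a sketch only; the per-user MSBuild ImportBefore folder layout is an assumption on my side and would need to be verified on our agents):

```shell
#!/bin/sh
# Sketch of a manual clean-up step: remove any leftover Sonar ImportBefore
# targets under a given MSBuild root. The folder layout assumed here is the
# per-user MSBuild import folder structure on a Windows agent.
clean_sonar_targets() {
  root="$1"
  for dir in "$root"/*/Microsoft.Common.targets/ImportBefore; do
    f="$dir/SonarQube.Integration.ImportBefore.targets"
    if [ -f "$f" ]; then
      rm -f "$f"
      echo "Removed leftover targets file: $f"
    fi
  done
}

# On a Windows agent the per-user root would typically be:
clean_sonar_targets "$LOCALAPPDATA/Microsoft/MSBuild"
```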
Besides this, I think it is really bad practice to make changes to the global agent configuration in a build job, exactly because of the unwanted side effects it can have on subsequent jobs on the agent.
I can try, but this is a bit difficult. Even if it solves the problem, it would mean we have to add the clean option to ALL pipelines in our organization, because the agent running any job might have run a job using SonarCloud before. So it doesn't really sound like a feasible solution.
Fair enough. Once a pipeline has been executed, can you check whether or not there is a SonarQube.Integration.ImportBefore.targets file somewhere on the agent filesystem? (If it's a Windows machine, it could also be installed in either of the MSBuild directories.)
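A quick sketch of how that check could be scripted (the search roots are assumptions and may need adjusting per agent; on Windows you would also point it at the per-user MSBuild folders, e.g. "$LOCALAPPDATA/Microsoft/MSBuild"):

```shell
#!/bin/sh
# List any Sonar integration targets files left under the given directories.
find_sonar_targets() {
  find "$@" -name 'SonarQube.Integration*.targets' -print 2>/dev/null
}

# On an agent you would point this at the agent work folder (the environment
# variable name is an assumption; adjust for your agents):
find_sonar_targets "${AGENT_WORKFOLDER:-.}" || true
```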
I’ll re-enable the SonarCloud integration, but I have to warn the rest of our department about this so they know what is happening if they encounter failing builds. I’ll see if I have time to check this tomorrow. Once I have results, I’ll report back here.
Just an update to keep this warm. I’ve re-enabled the analysis and am now waiting for the issue to occur again, but of course, when you want it to happen it doesn’t, so it might take a while before I can answer your question.
So it seems that cleaning up these targets sometimes fails. But I must stress again: adding these files to MSBuild outside of the agent work directory is very bad practice and can lead to exactly this situation. What can we do to solve this? Because this is blocking us from actually using SonarCloud in our pipelines without breaking random builds.
I’ve disabled the SonarCloud integration altogether for our projects, because this issue keeps happening and random builds keep failing. Hopefully you have some input for us; otherwise we are going to move away from SonarCloud, since it is not worth all these failing builds.
The issue is that the leftover files cause SonarCloud analysis to be executed during builds of unrelated build jobs. Any MSBuild job running on the same agent after these leftover files fail to be removed generates a lot of SonarCloud warnings. This pollutes other builds, which wouldn’t be a major issue, were it not that most of our projects have “treat warnings as errors” enabled. That simply causes random builds to fail, because the SonarCloud analysis generates all kinds of warnings.
Could you tell us whether you run the end step unconditionally, and whether this behaviour only starts after a failed build of a project that should have been analyzed? The end step should clean up some other targets and thereby not interfere with other builds.
Also, could you please share with us the working directory you run the commands from? Is it shared between the different projects you mention?
We use the SonarCloudPrepare step in MSBuild mode. This step seems to add the extra targets.
There is no mention of an end step here. I’ve checked before, and there doesn’t seem to be any failure in the pipeline job that ran before the jobs that start failing.
The working directory of the builds is determined by the pipeline being run. Each pipeline has its own workspace. The failing jobs are mostly running in the same workspace, because they are different jobs of the same pipeline. But this is not something we have influence on; it is defined by the Azure DevOps build process.
Build the project (which is analysed because of the MSBuild integration)
Publish the results
Then the job finishes, and the agent is available for other, unrelated build jobs
Then a random, unrelated build job runs on the same agent. Every once in a while it turns out that the .targets files were not removed from the MSBuild folders during the job that used the SonarCloud analysis, and the job fails because it starts a SonarCloud analysis during the MSBuild of the completely unrelated build job
The SonarQube.Integration.ImportBefore.targets file you mentioned is harmless. The main thing it does is tell MSBuild to import the SonarQube.Integration.targets file from the .sonarqube\bin\targets\ folder. That is the targets file that causes the analysis to happen and, in your case, makes the build fail.
In the “publish the results” step, the SonarQube.Integration.targets file should be removed from the .sonarqube\bin\targets\ folder, so that no subsequent build is analyzed.
It might be that for some reason that file is not removed for you, causing the issue you are seeing.
Could you please run the Publish the results step with debug logs enabled, and check what it contains when you are having the problem?
The debug logs should contain the
Does this mean that when the “publish” step isn’t run, the targets files are not removed? Because this will definitely happen if a build of the project fails. That might be the issue, but then I don’t understand why other people are not running into this. I don’t remember reading anywhere in the documentation that I need to make sure the publish step ALWAYS runs, even when the build fails. Is it even possible to publish the results if the build fails? I mean, what results would it then publish?
FYI, the SonarCloud/SonarQube Azure DevOps tasks are wrappers around the lower-level Sonar Scanner for .NET. The mapping between the Azure DevOps YAML task names and the scanner equivalents is as follows:
SonarQubePrepare/SonarCloudPrepare → begin
SonarQubeAnalyze/SonarCloudAnalyze → end
SonarQubePublish/SonarCloudPublish → no equivalent. This task waits until the SonarQube/SonarCloud server-side part of the processing has finished so it can post the results to the Azure DevOps build summary.
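For reference, a minimal YAML sketch of the three tasks in order (the service connection, organization, project key, and build task are placeholders; the condition on the analyze step is one way to make sure the “end” step still runs and cleans up its targets when the build fails):

```yaml
steps:
  - task: SonarCloudPrepare@1          # scanner "begin" step
    inputs:
      SonarCloud: 'my-sonarcloud-connection'   # service connection name (placeholder)
      organization: 'my-org'
      scannerMode: 'MSBuild'
      projectKey: 'my-project-key'

  - task: VSBuild@1                    # the build that gets analysed
    inputs:
      solution: '**/*.sln'

  # Run the "end" step even if the build failed, so the integration targets
  # are cleaned up and do not leak into later jobs on the same agent.
  - task: SonarCloudAnalyze@1          # scanner "end" step
    condition: succeededOrFailed()

  - task: SonarCloudPublish@1          # waits for server-side processing
    inputs:
      pollingTimeoutSec: '300'
```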
Almost - if the SonarCloudAnalyze step isn’t run, then the second Sonar integration targets file, which is in a pipeline-specific location, won’t be deleted (there is more info in this old thread about what the two targets files are for).
If you have Sonar verbose logging turned on you should see something like the following in the SonarCloudAnalyze logs:
So if a build step fails and the SonarCloudAnalyze step is skipped, but the pipeline continues regardless and builds other MSBuild projects/solutions, then those builds will pick up the targets and be analysed too.
Similarly, if you are using self-hosted build agents, future builds of the same pipeline will pick up the targets.
There are a number of potential workarounds:
use one of the Azure DevOps clean options to clean up the agent machine, as @mickaelcaro suggested above (you would only need to set the clean options for the pipeline running the analysis, not for every pipeline).
add a script step at the start of the pipeline to delete the .sonarqube folder and everything under it
There are a couple of other hackier workarounds involving the SonarQubeTargetsImported build property that are described in some forum posts, but I’d try the other two workarounds first.
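The second workaround above could be sketched as a script step like the following (assuming the .sonarqube folder ends up directly under the pipeline’s default working directory; the environment variable name is an assumption to verify against your agent):

```shell
#!/bin/sh
# Sketch of workaround 2: delete any leftover .sonarqube folder at the start
# of the pipeline, so stale integration targets from a previous run on the
# same self-hosted agent cannot leak into this build.
clean_sonarqube_folder() {
  workdir="$1"
  if [ -d "$workdir/.sonarqube" ]; then
    rm -rf "$workdir/.sonarqube"
    echo "Deleted stale $workdir/.sonarqube"
  fi
}

# In Azure DevOps this would typically run against the default working
# directory of the job:
clean_sonarqube_folder "${SYSTEM_DEFAULTWORKINGDIRECTORY:-.}"
```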