We are currently using SonarCloud from Azure DevOps in a couple of pipelines: it's wired up to our pull request pipeline and to a nightly pipeline against our main branch.
Due to the length of time the taint analysis takes in our pull request pipeline, we are looking to disable the following rules in that pipeline and only run them overnight in our nightly pipeline against the main branch.
We have configured a separate quality profile with those rules disabled; however, I can't figure out whether there's any way to configure the SonarCloudPrepare@3 task to pass in an extra property that would either:
- run that specific quality profile, or alternatively
- ignore those specific rules from the pipeline task (something like the sketch below).
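For reference, this is roughly what I was picturing for the second option. I haven't verified that SonarCloud accepts the issue-exclusion properties when they're passed from the scanner side, and I suspect an exclusion would only hide the findings rather than skip the taint computation, so it may not buy us any time anyway. The service connection, organization, project key, and rule keys below are all placeholders:

```yaml
- task: SonarCloudPrepare@3
  inputs:
    SonarCloud: 'MySonarCloudConnection'   # placeholder service connection
    organization: 'my-org'                 # placeholder
    scannerMode: 'dotnet'
    projectKey: 'my-project'               # placeholder
    extraProperties: |
      sonar.issue.ignore.multicriteria=e1,e2
      sonar.issue.ignore.multicriteria.e1.ruleKey=roslyn.sonaranalyzer.security.cs:S3649
      sonar.issue.ignore.multicriteria.e1.resourceKey=**/*
      sonar.issue.ignore.multicriteria.e2.ruleKey=roslyn.sonaranalyzer.security.cs:S2083
      sonar.issue.ignore.multicriteria.e2.resourceKey=**/*
```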
Failing that, I'm thinking we could potentially use the API to switch which quality profile is active on the project: during the day we'd make the profile without those rules active, then just before the evening run kicks off we'd switch to the profile with the complete ruleset.
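Something like this is what I had in mind, using the api/qualityprofiles/add_project web API endpoint as the first step of the nightly pipeline (the token variable, project key, and profile name are placeholders, and I haven't tested whether swapping profiles has side effects on PRs that are open at the time):

```yaml
# First step of the nightly pipeline: make the full profile active.
# A mirror-image step (or a morning job) would switch the project back
# to the lighter profile before PR analyses start up again.
- script: |
    curl --silent --fail -X POST \
      -u "$(SONAR_TOKEN):" \
      --data-urlencode "project=my-project" \
      --data-urlencode "language=cs" \
      --data-urlencode "qualityProfile=Full profile with taint rules" \
      "https://sonarcloud.io/api/qualityprofiles/add_project"
  displayName: 'Switch project to the full quality profile'
```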
That said, it's sad to hear that the performance isn't up to snuff! I'd like to know what language(s) you're analyzing, and how bad the performance we're talking about here is (5 minutes? an hour?).
Thanks for getting back to me. It's a C#/.NET analysis that seems to take a long time. We've found that with those taint analysis rules enabled, our PR scans take a minimum of an hour, but will occasionally jump up to 3 hours for a week or two at a time.
For example, recently on Feb 6th all our SonarCloudAnalyze@3 tasks in the pull request pipeline jumped up to the 3+ hour mark, and then on Feb 12th they dropped back down to the 60 minute mark, which seems to be the average. Not sure if those dates coincide with any updates to SonarCloud, but it's not the first time it has jumped up to unusable times for a week or so. If we remove those rules we get an analysis time of around 3 or 4 minutes.

My instinct is that it's potentially a memory issue that's slowing it down. During the day we run our pull request pipelines on Microsoft-hosted agents (2-core CPU, 7 GB of RAM), but in the evenings, for our run on main, we can spin up a more powerful VM agent and run the scan once a night there.
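One thing we may try on the hosted agents is giving the scanner's JVM more headroom, on the (unconfirmed) assumption that the Java-based analysis step picks up SONAR_SCANNER_OPTS:

```yaml
- task: SonarCloudAnalyze@3
  env:
    # Assumption: the Java-based analysis honours SONAR_SCANNER_OPTS.
    # 4 GB of heap is about the most we can spare on a 7 GB hosted agent.
    SONAR_SCANNER_OPTS: '-Xmx4096m'
```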
If you could open up a private message, I'd be happy to share the logs with you privately.
Unfortunately, the 3+ hour analysis isn't currently reproducible, as it resolved itself on Feb 12th and we didn't have verbose logging enabled at the time. However, I have some verbose logs from recent pull requests and non-verbose logs from a 3-hour run that I can share.
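For what it's worth, this is how we've enabled the verbose logs going forward (same placeholder values as before), so we should capture full output if the slowdown comes back:

```yaml
- task: SonarCloudPrepare@3
  inputs:
    SonarCloud: 'MySonarCloudConnection'   # placeholder service connection
    organization: 'my-org'                 # placeholder
    scannerMode: 'dotnet'
    projectKey: 'my-project'               # placeholder
    extraProperties: |
      sonar.verbose=true
```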
Thank you for sharing the logs with us. The behavior indeed looks unexpected. I'll contact you in a private message too, to ask for some additional information that will hopefully help us debug this further.