Slow execution times

In the past week or so we’ve seen the execution time (executionTimeMs) increase significantly across our organization.

The retry logic in our Jenkinsfiles never picks up the 'OK' status. I had thought that waitForQualityGate() used a webhook, so retry logic wasn't needed?
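
For context, this is roughly the shape of what we have today (simplified, with placeholder stage name, timeout, and retry count, not our exact pipeline):

```groovy
// Simplified illustration of our current approach, not the exact Jenkinsfile.
stage('Wait for SonarCloud Quality Gate') {
    retry(3) {
        timeout(time: 5, unit: 'MINUTES') {
            def qg = waitForQualityGate()   // from the SonarQube Scanner for Jenkins plugin
            if (qg.status != 'OK') {
                error "Quality gate returned ${qg.status}"
            }
        }
    }
}
```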

So, I guess I have two questions:

1. Did something change on Sonar's end that increased executionTimeMs?
2. What's the recommended way to implement waiting for Sonar results in a Jenkinsfile?

Thanks!

Bump?

We're not aware of slower execution times on the SonarCloud side. Quite the contrary: over recent weeks we've rolled out improvements that reduce our average processing times. We did have an incident the day before your post, which caused a brief outage.

Another part of the overall execution time is spent in the scanners, which run in your CI environment, not on SonarCloud. Their duration depends on the language analyzers, which are upgraded quite often and could also contribute to variation in the total time.

You can find the execution time per language analyzer in the scanner output, on lines following the format:

Sensor XYZSensor [XYZ] (done) | time=Xms
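
If it helps the investigation, something like this (illustrative only; the sonar-scanner invocation, server name, and log file name are just examples) keeps a copy of the scanner output and prints those per-sensor lines:

```groovy
node {
    stage('SonarCloud analysis') {
        withSonarQubeEnv('SonarCloud') {
            // Keep a copy of the scanner output so the timings can be inspected later.
            // Note: tee masks the scanner's exit code; acceptable for a quick investigation.
            sh 'sonar-scanner | tee scanner.log'
        }
    }
    stage('Per-sensor timings') {
        // Print only the per-sensor timing lines described above
        sh 'grep "Sensor .* (done) | time=" scanner.log || true'
    }
}
```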

What is the executionTimeMs you're referring to? I'd like to understand whether it covers the overall processing from the start of the scanner to the completed report, only the SonarCloud side, or only the scanner side. Also, please let us know the rough breakdown of languages in a project where you experience the slowness.

This documentation looks good to me (and it seems you’re already following it): SonarQube Scanner for Jenkins
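
For reference, the webhook-based pattern from that documentation looks roughly like this (scripted pipeline sketch; the 'SonarCloud' server name, Maven goal, and timeout are placeholders to adapt):

```groovy
node {
    stage('SonarCloud analysis') {
        // 'SonarCloud' must match the server name configured under
        // Manage Jenkins > Configure System (SonarQube servers).
        withSonarQubeEnv('SonarCloud') {
            sh 'mvn verify sonar:sonar'   // or sonar-scanner / gradle, per your build
        }
    }
}
// The quality gate wait does not need to hold an executor.
stage('Quality Gate') {
    timeout(time: 30, unit: 'MINUTES') {
        // The step suspends the pipeline and resumes when the webhook configured
        // on the SonarCloud side reports the background task as finished,
        // so no polling or retry loop should be required.
        def qg = waitForQualityGate()
        if (qg.status != 'OK') {
            error "Pipeline aborted due to quality gate failure: ${qg.status}"
        }
    }
}
```

Because the step is resumed by the webhook, wrapping it in retry logic shouldn't be necessary; if the status never comes back as OK, the webhook configuration pointing at your Jenkins instance is worth double-checking.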