We are currently running version 10.7 (I know it is no longer an active version; we are waiting for our IT service to update it).
We have a series of pipelines that execute several stages in Jenkins. In one of these stages, we run two Sonar analyses: a global one for the develop branch and one for the pull request. For the latter, we make curl --request GET calls to the Sonar Web API to retrieve the issues and hotspots. When issues are detected, the developers fix them, and their commit re-triggers the pipeline.
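For reference, the pull-request queries are roughly the following (the server URL, project key, PR number, and token are placeholders for our actual values; depending on the SonarQube version the issues parameter may be `components` instead of `componentKeys`):

```shell
# Placeholder values -- replace with your own server, project, PR, and token.
SONAR_URL="https://sonar.example.com"
PROJECT_KEY="my-project"
PR_KEY="42"

# Open issues on the pull request
curl --silent --fail --user "${SONAR_TOKEN}:" --request GET \
  "${SONAR_URL}/api/issues/search?componentKeys=${PROJECT_KEY}&pullRequest=${PR_KEY}&resolved=false"

# Security hotspots on the pull request
curl --silent --fail --user "${SONAR_TOKEN}:" --request GET \
  "${SONAR_URL}/api/hotspots/search?projectKey=${PROJECT_KEY}&pullRequest=${PR_KEY}"
```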
When the pipeline is relaunched, the analyses run again so that we obtain up-to-date results. Between the analyses and the API calls there is a pipeline pause of around 2 seconds to allow the analysis to settle, and beforehand an API call is made to check that the token is valid. Afterwards, we perform the GET requests for issues and hotspots.
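The token check we do beforehand is essentially a call like this (URL and token are placeholders; the endpoint returns `{"valid":true}` when the token is accepted):

```shell
# Validate the token before querying for issues and hotspots.
curl --silent --user "${SONAR_TOKEN}:" \
  "${SONAR_URL}/api/authentication/validate"
```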
The problem arises when these issues have already been fixed on the SonarQube server, but the results returned by the API calls do not reflect the corrections. What could be causing this?
2 seconds seems like an awfully short time. You can check how long your project's background tasks typically take to process on the Background Tasks page in SonarQube.
You can also add sonar.qualitygate.wait=true as an analysis parameter so that the scan process does not finish until the background task processing is complete, but note that this will also fail your pipelines if your Quality Gate fails.
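For a sonar-scanner invocation, that would look something like the following (sonar.qualitygate.timeout controls how long the scanner waits for the background task; 300 seconds is its default):

```shell
sonar-scanner \
  -Dsonar.qualitygate.wait=true \
  -Dsonar.qualitygate.timeout=300
```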
Thank you for the suggestion.
What advantages and disadvantages do you see between using sonar.qualitygate.wait=true versus manually querying with the ceTaskId and then checking the Quality Gate status separately?
I am interested in understanding if there are scenarios where one option is more robust or flexible than the other.
Using sonar.qualitygate.wait will always cause the build to fail if the Quality Gate does not pass. However, by querying the SonarQube API directly, you can implement more sophisticated logic to determine how to handle Quality Gate failures in your build!
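As a rough sketch of the manual approach (assuming the scanner ran in the current directory so `.scannerwork/report-task.txt` exists, `jq` is available, and `SONAR_URL`/`SONAR_TOKEN` are placeholders for your values):

```shell
#!/usr/bin/env sh
# Read the compute-engine task id that the scanner wrote to report-task.txt.
CE_TASK_ID=$(sed -n 's/^ceTaskId=//p' .scannerwork/report-task.txt)

# Poll /api/ce/task until the background task leaves the queue.
while :; do
  STATUS=$(curl --silent --user "${SONAR_TOKEN}:" \
    "${SONAR_URL}/api/ce/task?id=${CE_TASK_ID}" | jq -r '.task.status')
  [ "$STATUS" != "PENDING" ] && [ "$STATUS" != "IN_PROGRESS" ] && break
  sleep 5
done

# Once processed, fetch the analysis id and the Quality Gate status.
ANALYSIS_ID=$(curl --silent --user "${SONAR_TOKEN}:" \
  "${SONAR_URL}/api/ce/task?id=${CE_TASK_ID}" | jq -r '.task.analysisId')
QG_STATUS=$(curl --silent --user "${SONAR_TOKEN}:" \
  "${SONAR_URL}/api/qualitygates/project_status?analysisId=${ANALYSIS_ID}" \
  | jq -r '.projectStatus.status')

echo "Quality Gate: ${QG_STATUS}"
# Decide for yourself how to react -- e.g. only fail the build on ERROR:
[ "$QG_STATUS" = "ERROR" ] && exit 1 || exit 0
```

This is exactly where the flexibility comes from: the script owns the decision, so you can log the gate status, fail only on certain conditions, or notify the team instead of breaking the build.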