Azure DevOps intermittent issues reaching Sonarcloud.io

Hello, we’ve seen similar threads in the community regarding intermittent issues using the SonarCloudPrepare and SonarCloudPublish tasks on Azure DevOps runners.

Our experience is that since October 2025 the tasks have been unstable and almost unusable due to a high failure rate.

SonarCloudPrepare Task:

##[error][ERROR] SonarQube Cloud: Error while executing task Prepare: API GET '/api/server/version' failed. Error message: Error.
##[debug]Processed: ##vso[task.issue type=error;source=TaskInternal;][ERROR] SonarQube Cloud: Error while executing task Prepare: API GET '/api/server/version' failed. Error message: Error.
##[debug]task result: Failed
##[error]API GET '/api/server/version' failed. Error message: Error.

SonarCloudPublish Task:

##[error][ERROR] SonarQube Cloud: API GET '/api/metrics/search' failed. Error message: .
##[debug]Processed: ##vso[task.issue type=error;source=TaskInternal;][ERROR] SonarQube Cloud: API GET '/api/metrics/search' failed. Error message: .
##[error][ERROR] SonarQube Cloud: Error while executing task Publish: Could not fetch metrics
##[debug]Processed: ##vso[task.issue type=error;source=TaskInternal;][ERROR] SonarQube Cloud: Error while executing task Publish: Could not fetch metrics

Mitigation - We enabled up to 10 retries over 7 minutes to allow the task to reach sonarcloud.io and fetch the API version.
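As a sketch of what that mitigation looks like (the `retry` helper, the delay, and the plain-curl probe are illustrative, not the task’s actual retry settings):

```shell
#!/bin/sh
# Retry a command up to N times with a fixed delay between attempts.
# 10 retries spread over ~7 minutes works out to roughly 42s apart.
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    echo "attempt $i/$attempts failed; retrying in ${delay}s" >&2
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# In the pipeline this would wrap a probe of the version endpoint, e.g.:
#   retry 10 42 curl -fsS https://sonarcloud.io/api/server/version
```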

Investigation - We added logging to our DNS server in case we were failing to resolve your domain internally. It resolved successfully every time.

Additionally, it was suggested in an earlier thread to curl the endpoint outside of the task but within the same pipeline runner. We added that last month and it has never failed.

Curling - /api/server/version

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    11  100    11    0     0     10      0  0:00:01  0:00:01 --:--:--    10
100    11  100    11    0     0     10      0  0:00:01  0:00:01 --:--:--    10
8.0.0.78543

Curling - /api/metrics/search (the response is much larger, but the following excerpt gets the idea across):

 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0{"metrics":[{"id":"419","key":"accepted_issues","type":"INT","name":"Accepted Issues","description":"Accepted issues","domain":"Issues","direction":-1,"qualitative":false,"hidden":false},{"id":"289","key":"new_technical_debt","type":"WORK_DUR","name":"Added Technical Debt","description":"Added technical debt","domain":"Maintainability","direction":-1,"qualitative":true,"hidden":false}
100 20125    0 20125    0     0  18431      0 --:--:--  0:00:01 --:--:-- 18446
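For anyone wanting to reproduce this out-of-band check, a minimal sketch of the probe we run in a script step (the `probe` helper name and the `--max-time` value are our own additions; the endpoints are the ones shown above):

```shell
#!/bin/sh
# Fetch an endpoint with curl, failing loudly so the pipeline log records
# exactly which probe broke. The 30s timeout is an assumption; tune it.
probe() {
  url=$1
  if body=$(curl -fsS --max-time 30 "$url"); then
    echo "probe ok: $url -> $body"
  else
    echo "probe FAILED: $url" >&2
    return 1
  fi
}

# In a pipeline script step, e.g.:
#   probe https://sonarcloud.io/api/server/version
#   probe https://sonarcloud.io/api/metrics/search > /dev/null
```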

We’ve also tried rolling back to earlier versions of your Azure DevOps extension and we’re left with the same issues.

At the moment, we’ve disabled SonarCloud until we can get the situation resolved. We’ve done everything we can to rule out configuration issues on our side.

We are happy to provide any more information or try any further suggestions. Please let us know.

Thank you,
Have a good day.

We are experiencing the same issue as well; we even added a health-check step using curl, which always succeeds, but the Prepare task still fails.

This is hugely impacting us, and we have had to skip the Sonar scan for our projects.

Hi,

We’ve just released a new version of the extension with more logs and an upgraded version of the Axios library. Can you make sure you’re using v4.0.3 of the extension/task and see if you still get an error?

If so, please enable debug logging and give us the logs.

 
Thx,
Ann

Hey, excellent! I did have a reply, but the post below has more information!

Thanks!

Hi Ann,

I updated to the latest version this morning, but it still fails in the Publish stage. Detailed log attached.

log.txt (6.3 KB)


Sorry to hear that. About the link you shared, I’m seeing this:

<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>R711QABVH979BZ8Y</RequestId>
  <HostId>Mt55v/cQTZ+wSf+lUKB8HMXItKIedRs9ukr313LAacHyh6RitZCUI3TE0qwkriu4bWigOdkkL7FMgI70+caPp7+bwvmix93+</HostId>
</Error>

Hi,

We’ve added yet more logging in version 4.0.4. :sweat_smile:

Could you all make sure you’re on that and give us the logs?

 
Thx,
Ann

Hello,

Please find the following logs attached… Let us know if you want anything further.

Included are a failure and a successful request; the tasks are all using 4.0.4.

logs.txt (31.7 KB)

Thank you,
Have a good day


@Elmo,

Thank you so much for these logs on the new version. It looks to me like the TLS handshake sometimes fails. But I’m not a dev – I’ll convey the file to the devs.

Best, Wayne


@Elmo,

Our current hypothesis is that the agent is trying – and intermittently failing – to connect to our Frankfurt endpoint from a network that might be:

  1. High-latency;
  2. Geographically distant; e.g., Sydney.

As such, we’d now like to verify if the following Node DNS options are helpful:

# Prefer IPv4 addresses

--dns-result-order=ipv4first

Ref: --dns-result-order=order

# Double the default [network family autoselection] timeout
 
--network-family-autoselection-attempt-timeout=500

Ref: --network-family-autoselection-attempt-timeout

In an Azure Pipeline, NODE_OPTIONS can be defined like so:

variables:
  NODE_OPTIONS: '--dns-result-order=ipv4first --network-family-autoselection-attempt-timeout=500'
  ..

trigger:
  ..

steps:
  ..

Would you like to try these out in one of your pipelines? If they prove helpful, I think it could be useful to define them at the OS level itself, and so address the issue for all jobs that the agent picks up.
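On a self-hosted Linux agent, one way to sketch that OS-level variant is via the `.env` file in the agent’s root directory, which the agent reads at startup (the install path here is an assumption; adjust it to your setup):

```shell
#!/bin/sh
# Persist NODE_OPTIONS for a self-hosted Azure DevOps agent by appending
# to the .env file in the agent's root directory, so every job inherits it.
set_agent_node_options() {
  agent_dir=${1:-/opt/azagent}
  echo 'NODE_OPTIONS=--dns-result-order=ipv4first --network-family-autoselection-attempt-timeout=500' \
    >> "$agent_dir/.env"
}

# e.g.  set_agent_node_options /opt/azagent
# Then restart the agent service so it re-reads the environment:
#   sudo /opt/azagent/svc.sh stop && sudo /opt/azagent/svc.sh start
```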

What do you think? Best, Wayne


@Elmo,

Further, we want to ensure that your network permits outbound 443 for both IPv4 and IPv6; e.g., mine:

$ curl -v -6 https://sonarcloud.io/api/server/version
* Host sonarcloud.io:443 was resolved.
* IPv6: ::ffff:3.170.229.111, ::ffff:3.170.229.113, ::ffff:3.170.229.2, ::ffff:3.170.229.109
..
* Connected to sonarcloud.io (::ffff:3.170.229.111) port 443
8.0.0.79730

If you do not see the Cloud version number, then there is something [else] to check/fix on your end.
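A small helper like this could exercise both address families from the agent in one go (a sketch; the `check_family` name and the timeout are assumptions on my part):

```shell
#!/bin/sh
# Probe an endpoint over a given address family (-4 or -6) and report
# the result; the 30s timeout is an assumption.
check_family() {
  flag=$1; url=$2
  if ver=$(curl -fsS --max-time 30 "$flag" "$url"); then
    printf '%s ok: %s\n' "$flag" "$ver"
  else
    printf '%s FAILED for %s\n' "$flag" "$url" >&2
    return 1
  fi
}

# e.g., on the agent:
#   check_family -4 https://sonarcloud.io/api/server/version
#   check_family -6 https://sonarcloud.io/api/server/version
```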

Best, Wayne

Hey Wayne!

That’ll do it. We followed all recommendations and can confirm it is working as expected.

Thank you very much for the assist!
Have a fantastic day!


Hello @Elmo,

I’m very glad to read this; I’ll convey your regards to dev as well! :slight_smile:

Best, Wayne
