Intermittent SonarCloud Scan Failures in GitHub Actions - Random Errors & Unstable PR Decoration

ALM used: GitHub
CI system used: GitHub Actions
Scanner: SonarCloud GitHub Action (sonarsource/sonarcloud-github-action)
Languages: .NET, React (TypeScript)
Project visibility: Private

Issue Summary

Since Friday evening, we have been experiencing intermittent and unpredictable failures in SonarCloud analysis across multiple repositories. The failures occur during GitHub Actions runs and appear to be unrelated to code changes, configuration updates, or PR metadata.

The same workflow may fail once, then succeed on retry without any modifications. In some cases, simply closing and reopening the PR temporarily resolves the issue, but not consistently.

This behavior is affecting every repository in our organization that uses SonarCloud, and we have not identified any consistent pattern.

Errors Observed

Below are the recurring errors we see in the logs:

ERROR: Something went wrong while trying to get the pullrequest with key '24'
The scanner engine did not complete successfully
Post-processing failed. Exit code: 1
Error: Process completed with exit code 1.

12:22:38.332 ERROR Something went wrong while trying to get the pullrequest with key '310'
12:22:38.659 INFO EXECUTION FAILURE
12:22:38.660 INFO Total time: 12.289s
Error: Action failed: The process '/opt/hostedtoolcache/sonar-scanner-cli/7.2.0.5079/linux-x64/bin/sonar-scanner' failed with exit code 3

And in GitHub UI:

SonarCloud Code Analysis: Expected — Waiting for status to be reported

These failures appear randomly — sometimes the analysis completes successfully, other times it fails with the above messages.

Steps to Reproduce

Unfortunately, the issue is not reliably reproducible. However, the following conditions seem to trigger it intermittently:

  1. Open a pull request against the default branch.

  2. Allow GitHub Actions to run the SonarCloud analysis workflow.

  3. Observe that the scan may:

    • Fail with the above errors, or

    • Succeed without any changes on a subsequent retry.

  4. Closing and reopening the PR sometimes triggers a successful scan, but not consistently.

This behavior is identical across multiple repositories and workflows.

Potential Workarounds Tried

  • Re-running the failed GitHub Actions job → sometimes works, sometimes not.

  • Closing and reopening the PR → inconsistent results.

  • No changes made to workflow configuration or SonarCloud settings.

  • No changes made to repository permissions or GitHub organization settings.

At this point, we do not have a reliable workaround.
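Until a root cause is found, one interim measure that cuts down on manual re-runs is wrapping the flaky step in a simple retry loop. This is a sketch, not an official recommendation; the `retry` helper is hypothetical, and the final `true` is a stand-in for your real analysis command:

```shell
#!/bin/sh
# Sketch of a retry wrapper for a flaky CI step (hypothetical helper).
# Substitute your real analysis command (e.g. sonar-scanner ...) for `true`.
retry() {
  max=$1; shift
  n=1
  while ! "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "failed after $n attempts" >&2
      return 1
    fi
    n=$((n + 1))
    echo "retrying ($n/$max)..." >&2
    sleep 1
  done
  return 0
}

# Demo invocation; replace `true` with the actual scan command.
retry 3 true
```

This only papers over the intermittency, of course; a consistently failing run will still fail after the final attempt.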

Request for Assistance

Given the randomness, the cross‑repository impact, and the timing (starting Friday evening), this appears to be an upstream or service‑level issue rather than a configuration problem on our side.

We would appreciate guidance on:

  • Whether this is a known SonarCloud service instability

  • Any recommended diagnostic steps

  • Whether additional logging can be enabled

  • How to ensure PR decoration and analysis remain stable

Thank you in advance for your support.

Hi,

Welcome to the community and thanks for this report!

We’re not currently aware of anything on our side. You’re using GitHub Actions. Are your runners self-hosted or hosted by GH?

 
Ann

Hi Ann,

We are using self-hosted runners.

Looks like we are not alone.

We are experiencing the same issue! We are also using self-hosted runners, and scans are failing completely at random.
INFO Auto-configuring pull request 3508
ERROR Something went wrong while trying to get the pullrequest with key '3508'

Hi @vg-anirudh-vasudevan and @Mikko_Kupsu,

Can you both share your organization Ids to help with the log-diving, please?

 
Thx,
Ann

Yesterday we started seeing errors in some of our GitHub Actions CI pipelines when performing Sonar checks.

The error in the Actions log is always something like ERROR Something went wrong while trying to get the pullrequest with key '[PR number]'. It’s intermittent: some runs eventually succeed after a few retries, but others fail consistently. There was no change to the pipelines that could explain this error.

We have IP allowlist enabled in our GitHub organisation and the org is bound to the Sonar org through a GH App. The app has access to all repositories and we’re allowing the IPs to be managed by the App.

We found out yesterday that the IP range 3.253.125.212/30 hasn’t been added by the Sonar GitHub App, so we manually added it, but it didn’t make any difference.

Through the Sonar web UI, we see the error “Failed to create GitHub access token… Got response 403 with message … IP address is not permitted to access this resource”, however all the IPs specified in this doc are already allowlisted in our GH org.

Could it be that a new set of IPs are now being used and it hasn’t been updated in the GH App and in the doc?
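As a quick self-check for the allowlist theory, it can help to verify whether a rejected caller IP actually falls inside a documented range. A minimal POSIX-shell sketch (the addresses below are just the /30 range from this thread, used as an example):

```shell
#!/bin/sh
# Sketch: test whether an IPv4 address falls inside a CIDR block, e.g. to
# compare a rejected caller IP against documented allowlist ranges.

ip_to_int() {
  # Convert a dotted-quad IPv4 address to a 32-bit integer.
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_cidr() {
  # Succeeds (exit 0) if address $1 is inside CIDR block $2.
  net=${2%/*}; bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

# The /30 mentioned above covers .212 through .215:
in_cidr 3.253.125.214 3.253.125.212/30 && echo "in range"
```

This says nothing about which ranges Sonar currently uses, only whether a given IP matches a given block.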

Hi,

Welcome to the community and thanks for this report!

Are you using self-hosted runners?

 
Thx,
Ann

Hi Ann,

Thank you!

No, we are using GitHub-hosted Larger Runners which provide static IP ranges so we can allowlist them in our GH org.

Hi @ebg,

I’ve rolled your topic into this minutes-older one on the same theme for tidiness. Can you share your organization Id to help with the log-diving, please?

 
Thx,
Ann

My organization id is buddyhc

Valgenesis is my Organization ID

Hi all,

Thanks for the org Ids you’ve provided so far.

@vg-anirudh-vasudevan you noticed this starting Friday.
@Mikko_Kupsu you noticed this starting yesterday.
Please contradict me if I’m wrong.

@ebg when did you start noticing this? (And what’s your org Id? :face_blowing_a_kiss:)

I’m trying to gather as much data as possible to focus the investigation. :slight_smile:

 
Thx,
Ann

We started receiving reports of errors yesterday.

Org ID is bitvavo

Yes, Ann. That is correct. We have been relying on pure luck and repeated retries since Friday.

Hi all,

Thanks for your help so far. Can you add -Dsonar.verbose=true to your analysis command line(s) and see if you can get me an analysis log with the error?

Once you’ve got one, please post the log here, starting from the analysis command itself, and redacted as necessary.
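For those running the scan through the GitHub Action, one place the flag can go is the action’s `args` input. The snippet below is a sketch only; the step name, action version, and secret names are placeholders to adapt to your own workflow:

```yaml
# Sketch: pass the debug flag through the action's scanner arguments.
# Step name, action ref, and secret names are placeholders.
- name: SonarCloud Scan
  uses: sonarsource/sonarcloud-github-action@v2
  with:
    args: >
      -Dsonar.verbose=true
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```

Users invoking sonar-scanner directly (as the log paths above suggest some of you are) can simply append -Dsonar.verbose=true to the existing command line.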

 
Thx,
Ann

Hi again all,

Nevermind the request for debug logs. We think we’ve found the problem and should have a fix in place in a few minutes.

 
Ann

That is so refreshing to hear. Thanks Ann! Eagerly waiting for the fix.

Hi again,

Okay, we think we fixed it.

So… can you “intermittently” test? :zany_face:

 
Thx,
Ann

I retried a couple of them that were failing and they passed now.

I’ve asked the people who reported to retry and keep an eye. Will update here tomorrow if there are still issues.

Thank you!
