Helm chart version: 8.0.0
SonarQube is deployed in GKE (Google Kubernetes Engine) behind a proxy (IAP) and communicates with our self-hosted GitLab deployment over an internal HTTP Kubernetes service URL, so it does not need to authenticate with the IAP proxy.
We have configured the GitLab DevOps integration using the Kubernetes service URL (internal to the cluster), and every project in our deployment is bound to a corresponding project in GitLab.
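For reference, here is roughly what our terraform does for the integration, expressed against SonarQube's web API (a hedged sketch: the `gitlab` key, project key, tokens, and repository ID are placeholders/examples, not our exact values):

```shell
SONAR_URL="https://sonarqube-gitlab.freenome.net"

# Global GitLab integration pointing at the in-cluster service URL
curl -u "$SONAR_TOKEN:" -X POST "$SONAR_URL/api/alm_settings/create_gitlab" \
  -d key=gitlab \
  -d url=http://gitlab-webservice-default.gitlab.svc:8080/api/v4 \
  -d personalAccessToken="$GITLAB_PAT"

# Per-project binding to the corresponding GitLab project ID
curl -u "$SONAR_TOKEN:" -X POST "$SONAR_URL/api/alm_settings/set_gitlab_binding" \
  -d almSetting=gitlab \
  -d project=my-project-key \
  -d repository=214
```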
Decorate GitLab MRs with analysis results after running SonarScanner in a GitLab CI pipeline.
Most of the time everything works fine; however, sometimes MR decoration tries to reach GitLab over the public URL for the server. Since it cannot authenticate with the IAP proxy over the public URL, decoration fails.
2023.04.05 17:51:07 INFO ce[AYdSiyhEZl0SwYh0tgbM][c.s.F.D.F.B] GitLab's instance URL from the scanner ('https://gitlab.freenome.net/api/v4') is overridden by the settings ('http://gitlab-webservice-default.gitlab.svc:8080/api/v4')
2023.04.05 17:51:07 INFO ce[AYdSiyhEZl0SwYh0tgbM][c.s.F.D.F.B] GitLab's project ID from the scanner ('214') is overridden by the settings ('214')
2023.04.05 17:51:07 ERROR ce[AYdSiyhEZl0SwYh0tgbM][c.s.F.D.F.A] An exception was thrown during Merge Request decoration : unexpected end of stream on http://gitlab.freenome.net:443/...
2023.04.05 17:51:07 ERROR ce[AYdSiyhEZl0SwYh0tgbM][o.s.c.t.p.a.p.PostProjectAnalysisTasksExecutor] Execution of task class com.sonarsource.F.D.d failed
java.lang.IllegalStateException: unexpected end of stream on http://gitlab.freenome.net:443/...
Caused by: java.io.EOFException: \n not found: limit=0 content=…
... 46 common frames omitted
As the logs show, the settings are overridden and the Kubernetes service URL should be used; the error, however, shows the public URL being used.
What I’ve tried
Removing the GitLab CI environment variables that point to the public GitLab URL
Modifying the GitLab CI environment variables to point to the non-public GitLab URL
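Concretely, the override looked roughly like this in the MR pipeline job, before the scanner runs (a sketch: variable names are GitLab's predefined CI/CD variables, and the in-cluster value is ours):

```shell
# Point the GitLab-provided variables at the in-cluster service URL
# instead of the public (IAP-protected) URL, then run the scan.
export CI_API_V4_URL="http://gitlab-webservice-default.gitlab.svc:8080/api/v4"
export CI_SERVER_URL="http://gitlab-webservice-default.gitlab.svc:8080"
sonar-scanner
```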
All projects in our GitLab instance are managed/created via terraform IaC. We likewise create the corresponding SonarQube projects and configure the GitLab integration for each project with terraform.
On an affected MR, can you look at the background task (Project Settings > Background Tasks), open up the Scanner Context, and see whether you find the unexpected value for sonar.pullrequest.gitlab.instanceUrl (https://gitlab.freenome.net/api/v4)?
It should also give some context about where the values are coming from: scanner, server, project…
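If it's easier than clicking through the UI, the same information should be available from the web API (a sketch; the task id here is taken from your log excerpt, and the grep is just a quick filter over the JSON):

```shell
# Fetch the scanner context recorded for a background (Compute Engine) task
# and pull out any sonar.pullrequest.* properties it contains.
curl -u "$SONAR_TOKEN:" \
  "https://sonarqube-gitlab.freenome.net/api/ce/task?id=AYdSiyhEZl0SwYh0tgbM&additionalFields=scannerContext" \
  | grep -o 'sonar\.pullrequest[^\\"]*'
```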
Hey @Colin I just checked on this and I do not see any sonar.pullrequest.* values anywhere in the context. The only reference to GitLab at all was sonar.core.serverBaseURL (https://sonarqube-gitlab.freenome.net).
@Colin yeah I’m positive. I’ve just double checked on a different task for a different project. The last analysis came from a GitLab MR pipeline. The analysis succeeded, but no MR decoration occurred and there are no sonar.pullrequest properties in the Scanner Context. The analysis in SonarQube has the following warning as well:
Yeah, I already know that CI_API_V4_URL points to the IAP URL. In the “What I’ve Tried” section I mentioned how I overrode all the GitLab CI variables in the pipeline to point to the Kubernetes service URL rather than the IAP URL.
Since then we have upgraded to SonarQube v10.0 (on May 9th). My apologies for not mentioning that earlier; that probably would have been useful for you to know. I haven’t retried my test of rewriting the environment variables in the pipeline, but I could if you think that might be worthwhile?
If you’re on SonarQube v10.0, then we can scratch out anything related to CI_API_V4_URL because this should no longer be used anywhere.
I’m starting to grasp at straws here, but here are a few things to consider:
This feels vaguely suspicious: perhaps another terraform provider (one that is supposed to be targeting a test instance, for example) is somehow adjusting this setting before it is snapped back into place by the “real” one. If you are able to check the integration configuration right after a merge request decoration fails, it might point to this or rule it out.
After that, I’d start to look away from the SonarQube configuration and consider whether something is happening when the request is made that points it at the wrong server (DNS caching, for example). You could try setting the DNS cache to… not cache, and see if it makes a difference.
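Since the server side runs on a JVM, one place DNS caching can hide is the JVM's own resolver cache. A hedged way to rule that out (a sketch; the exact way to pass JVM options depends on your helm chart values, and `sun.net.inetaddr.ttl` is the legacy JVM property for positive-lookup caching):

```shell
# Disable JVM-level DNS caching for the Compute Engine, the process that
# performs MR decoration; a TTL of 0 means "do not cache lookups at all".
export SONAR_CE_JAVAADDITIONALOPTS="-Dsun.net.inetaddr.ttl=0"
```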
This is highly unlikely given how our terraform system is deployed: test and prod are completely isolated from each other (separate k8s clusters, contexts, GitLab/SonarQube user accounts, etc.). It would be nearly impossible for a config to be pointed at the wrong instance.
To be safe, I did walk through this exercise to ensure we hadn’t overlooked anything.
This is interesting… I’ll have to dig more to see whether DNS could be the problem.