SonarQube not starting on Kubernetes after SCM token expired

Hi,

After restarting our SonarQube (v10.2) deployment on K8s, it won't start up again; the following error is logged:

2023.11.02 07:49:03 INFO  web[][o.s.s.e.IndexerStartupTask] Indexing of type [rules/rule/activeRule]...
2023.11.02 07:49:10 INFO  web[][o.s.s.e.IndexerStartupTask] Indexing of type [rules/rule/activeRule] done | time=6351ms
2023.11.02 07:49:10 INFO  web[][o.s.s.e.IndexerStartupTask] Indexing of type [rules/rule]...
2023.11.02 07:49:13 ERROR web[][o.s.a.c.g.GitlabHttpClient] Gitlab API call to [https:/<REDACTED>/api/v4/user] failed with 401 http code. gitlab response content : [{"error":"invalid_token","error_description":"Token is expired. You can either do re-authorization or token refresh."}]
2023.11.02 07:49:19 INFO  app[][o.s.a.SchedulerImpl] Stopping SonarQube
2023.11.02 07:49:19 INFO  app[][o.s.a.SchedulerImpl] Sonarqube has been requested to stop
2023.11.02 07:49:19 INFO  app[][o.s.a.SchedulerImpl] Stopping [Compute Engine] process...
2023.11.02 07:49:19 INFO  app[][o.s.a.SchedulerImpl] Stopping [Web Server] process...
2023.11.02 07:49:19 INFO  web[][o.s.p.ProcessEntryPoint] Gracefully stopping process
2023.11.02 07:49:19 INFO  web[][o.s.s.e.CoreExtensionStopper] Stopping Governance
2023.11.02 07:49:19 INFO  web[][o.s.s.e.CoreExtensionStopper] Stopping Governance (done) | time=0ms
2023.11.02 07:49:19 INFO  web[][o.s.s.n.NotificationDaemon] Notification service stopped

This is a test instance and it seems our GitLab token expired. Since the token is set via the web UI/API (which requires a running instance), how can we get past this error and start SonarQube up again?

Thanks in advance,
Marko

Hey there.

I’ve tried a couple of ways but I cannot reproduce this!

The token is stored (in cleartext) in the alm_settings table, so updating it directly is a pretty safe change to make if you know what you're doing (and since this is a test instance). If you need help, let me know which database provider you use.
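If it helps, here's a rough sketch of what that direct update could look like, assuming a PostgreSQL backend and that the token sits in a column named pat keyed by alm_id. Those column names and connection details are assumptions from memory, so inspect the actual alm_settings schema on your version before running anything:

# Hedged sketch: replace the expired GitLab token directly in the SonarQube schema.
# Assumptions: PostgreSQL backend, token column named "pat", provider column "alm_id".
# Check the table first (SELECT * FROM alm_settings;) and adjust names as needed.
import psycopg2

conn = psycopg2.connect(
    host="sonarqube-postgres",   # hypothetical host/credentials for your test instance
    dbname="sonarqube",
    user="sonarqube",
    password="change-me",
)
try:
    # "with conn" commits the transaction on success and rolls back on error
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE alm_settings SET pat = %s WHERE alm_id = %s",
            ("glpat-new-token-here", "gitlab"),
        )
        print(f"rows updated: {cur.rowcount}")
finally:
    conn.close()

Then restart the pod and see whether it gets past that ERROR.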

I’d like to know if it actually solves the startup issue.

Is it possible that Kubernetes is detecting the ERROR and stopping the pod on its own? It looks like a graceful stop that wasn't triggered by the Web process itself, which is what I would expect to see if the token error were really the cause.
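One way to check that is to look at the pod events in the namespace (kubectl describe pod on the SonarQube pod shows the same information). Here's a small sketch using the official kubernetes Python client; the namespace name is a placeholder:

# Hedged sketch: list recent events to see whether Kubernetes itself
# (eviction, OOM kill, failed probes) is stopping the SonarQube pod.
# Assumes the official "kubernetes" Python client and a namespace named "sonarqube".
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

for event in v1.list_namespaced_event("sonarqube").items:
    obj = event.involved_object
    if obj.kind == "Pod":
        print(f"{event.last_timestamp} {event.type:8} {event.reason:20} {obj.name}: {event.message}")

If Kubernetes is evicting or OOM-killing the pod, it will show up there rather than in the SonarQube logs.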

Hey Colin,

I'll try updating the token in the table to see if that fixes it. I haven't found anything else in the logs pointing to other problems, but I'll double-check after updating the table…

Hey Colin,

Just to update you on the matter, your hunch was correct - the K8s cluster (running on Azure) did not scale properly based on the chart's default requests and limits. I had to bump the default memory request from 2 GB to 3 GB in the chart to actually make it spin up a node and stop evicting the pod ad infinitum.
Sorry for the hassle; the log led me in the wrong direction, especially since the instance ran fine until the token expired and the load hasn't changed in the meantime.


No problem. Thanks for the follow-up. 🙂
