Error observed
Application pod ends up in a crash loop.
Problem
SonarQube does not start up completely, so the Kubernetes livenessProbe fails and Kubernetes kills the process.
Reason
SQ’s “web” process is in “safe mode” and reports status RED to the livenessProbe.
“web” is in “safe mode” because the database needs an upgrade for the new SQ version. This is quite normal: you do it by going to SQ_URL/setup and letting it upgrade. But “web” stays in safe mode until the upgrade is complete.
Kubernetes does not mark the pod as “Running” and “Healthy” because the livenessProbe fails. The pod is soon restarted and the loop goes on.
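For reference, this is roughly the shape of the api/system/health payload while “web” is in safe mode (the same endpoint the probe checks, as discussed further down). Shown as YAML for readability; the API actually returns JSON, and the cause text here is only a hypothetical example:

```yaml
# Illustrative only: the health status "web" reports while waiting for the
# database upgrade. The exact cause message will differ per instance.
health: RED
causes:
  - message: "Database must be upgraded"   # hypothetical wording
```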
Workaround
The livenessProbe is defined in templates/sonarqube-sts.yaml. On line 302 it accepts health: GREEN or health: YELLOW responses. Modify it to also accept health: RED.
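For illustration, the modified check might look roughly like the sketch below. This is not the chart’s verbatim template: it assumes the probe is an exec command that fetches api/system/health on the default web port 9000 and greps the health field, so adapt it to whatever is actually around line 302 of templates/sonarqube-sts.yaml (the real command may differ in port, context path, or authentication).

```yaml
livenessProbe:
  exec:
    command:
      - sh
      - -c
      - |
        # The original check accepts only GREEN or YELLOW; adding RED keeps a
        # safe-mode instance (database upgrade pending) from being killed.
        wget --no-proxy -qO- "http://localhost:9000/api/system/health" \
          | grep -q -e '"health":"GREEN"' -e '"health":"YELLOW"' -e '"health":"RED"'
```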
Install:
```
# The folder sonarqube-lts-1.0.23+179 contains the modified chart.
$ helm3 -n sonarqube install sonarqube sonarqube-lts-1.0.23+179 -f values.yaml
```
Now the probe passes and Kubernetes lets traffic reach the pod. Browse to ../setup, upgrade the database, then redeploy with the unmodified sonarqube/sonarqube-lts chart.
Hello @apa64
Thanks for your feedback, and for sharing a workaround!
As you’ve seen in the LTS Helm chart definition, the pod liveness is directly mapped to the api/system/health SonarQube API endpoint, which provides slightly different information.
That is only the case for the LTS chart, though. With SONAR-15239, a dedicated liveness endpoint was added to the SonarQube 9.1+ API. The 9.x Helm chart relies on this new endpoint and should not suffer from this kind of upgrade problem.
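For comparison, a probe built on that dedicated endpoint could look roughly like this. It is only an illustration under the assumption that the 9.x chart uses an httpGet probe against api/system/liveness; check the actual 9.x chart for the real definition (it may also send an authentication passcode header and use different timings):

```yaml
livenessProbe:
  httpGet:
    path: /api/system/liveness   # dedicated liveness endpoint from SONAR-15239
    port: 9000
  initialDelaySeconds: 60        # example timings only
  periodSeconds: 30
```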
Warning: the last step of the workaround described by Antii is to come back to the original chart. This is important, as the workaround practically disables the livenessProbe.
Alternative: another workaround would be to apply the upgrade before moving to K8S.