Liveness Probing in Kubernetes gives an Insufficient privileges error

Using Helm chart 1.2.3 and app version 9.2.3.

In the chart, the liveness URL uses the context /api/system/liveness.
This one results in {"errors":[{"msg":"Insufficient privileges"}]}

The readiness URL uses the context /api/system/status.
This one works and gives a status {…"status":"UP"}
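
For reference, the rendered probes look roughly like this (a sketch, not verbatim from the chart; the port and exact fields may differ in your chart version):

livenessProbe:
  httpGet:
    path: /api/system/liveness   # fails with 403 "Insufficient privileges"
    port: 9000
readinessProbe:
  httpGet:
    path: /api/system/status     # works, returns "status":"UP"
    port: 9000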

Because of this, SonarQube is in a constant restart loop. Changing the chart and pointing the liveness probe to the same context as the readiness probe would help, but there must be a reason why the chart uses that context.

Is there anything I missed to get this working properly with the context /api/system/liveness?
Many thanks.


Hi! I’m facing the exact same issue.

Kubernetes keeps restarting SonarQube because the liveness probe fails with a 403…


I was experiencing the same issue when using the official Helm chart.
However, I was able to solve it by adding a SONAR_WEB_SYSTEMPASSCODE environment variable to the deployment.yaml file:

env:
  ...
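  # System passcode; authenticates requests to protected monitoring endpoints such as /api/system/liveness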
  - name: SONAR_WEB_SYSTEMPASSCODE
    valueFrom:
      secretKeyRef:
        name: {{ template "sonarqube.fullname" . }}-monitoring-passcode
        key: SONAR_WEB_SYSTEMPASSCODE

This variable was present in the StatefulSet manifest sonarqube-sts.yaml but, for some reason, was left out of deployment.yaml.
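
To double-check the fix, you can call the liveness endpoint with the passcode yourself; SonarQube expects it in the X-Sonar-Passcode header. A quick sketch, assuming a release named my-release (so the secret name below matches the template above), the default port 9000, and curl being available in the image (otherwise use wget with --header):

# Read the passcode from the chart's secret (adjust the name to your release):
PASSCODE=$(kubectl get secret my-release-sonarqube-monitoring-passcode \
  -o jsonpath='{.data.SONAR_WEB_SYSTEMPASSCODE}' | base64 -d)

# Hit the liveness endpoint from inside the pod with the passcode header;
# a 2xx response instead of the 403 confirms the probe will now pass:
kubectl exec deploy/my-release-sonarqube -- \
  curl -si -H "X-Sonar-Passcode: ${PASSCODE}" http://localhost:9000/api/system/liveness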
