TIP: SonarQube Helm-Based Deployments

We ran into an issue while deploying a Helm-based update: the basic Helm deployment itself went fine, but during the /setup portion the container crashed consistently at the same point of the schema update.

At first we suspected some kind of data corruption, but it turned out to be caused by the default liveness probe settings.

Using,

# -p shows the logs of the previous (crashed) container instance
kubectl logs -p -n $namespace $pod_name
# the Events section surfaces the probe failures and restart reasons
kubectl describe pod -n $namespace $pod_name

helped with the troubleshooting. We suspected the liveness probes were causing the restarts, but at first we (mistakenly) changed only the timeoutSeconds shown below, and that didn’t help.

By default, the liveness probe is set to the following values, which are perfectly fine for normal operation.

livenessProbe:
  initialDelaySeconds: 60
  periodSeconds: 30
  failureThreshold: 6
  # Note that timeoutSeconds was not respected before Kubernetes 1.20 for exec probes
  timeoutSeconds: 1
  # If an ingress *path* other than the root (/) is defined, it should be reflected here
  # A trailing "/" must be included
  # deprecated please use sonarWebContext at the value top level
  # sonarWebContext: /

With the defaults, the kubelet restarts the container after roughly three to four minutes of failed probes (initialDelaySeconds plus failureThreshold × periodSeconds), which a long schema migration can easily exceed. What did the trick was to temporarily increase the initialDelaySeconds to 300 and periodSeconds to 180 during the /setup step.

We applied YAML similar to the below to the SonarQube deployment in its namespace (only the changed fields are shown):

    spec:
      containers:
      ...
        livenessProbe:
        ...
          initialDelaySeconds: 300
          periodSeconds: 180
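
If you prefer patching from the command line instead of editing the YAML, something like the following should achieve the same result. The Deployment name and container index are assumptions; substitute whatever your release actually created:

# Assumes the chart created a Deployment named "sonarqube-sonarqube" and that
# SonarQube is the first container in the pod spec - adjust both to your release.
kubectl patch deployment sonarqube-sonarqube -n $namespace --type='json' -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/initialDelaySeconds", "value": 300},
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/periodSeconds", "value": 180}
]'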

Then we ran through the /setup on the new update/installation. Once it completed, we set the pod liveness probe back to:

    spec:
      containers:
      ...
        livenessProbe:
        ...
          initialDelaySeconds: 60
          periodSeconds: 30

for normal operations.
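
Since the chart exposes these probe settings in its values (see the defaults above), the same temporary bump could instead be passed at upgrade time and reverted the same way, so the objects Helm manages never drift from what is actually running. A sketch, with the release and chart names as assumptions:

# values-setup.yaml - temporary overrides used only while /setup runs
livenessProbe:
  initialDelaySeconds: 300
  periodSeconds: 180

# Assumed release "sonarqube" and chart "sonarqube/sonarqube"; substitute your own.
helm upgrade sonarqube sonarqube/sonarqube -n $namespace --reuse-values -f values-setup.yaml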

Hope this helps someone else!!


Dear @kirkpabk,

Great to see you posting troubleshooting guidelines like this! We believe this is indeed valuable for other users 🙂

By default, the liveness probe is set to the following values, which are perfectly fine for normal operation.
[…]
What did the trick was to temporarily increase the initialDelaySeconds to 300 and periodSeconds to 180 during the /setup step.

May I ask you why you increased the livenessProbe settings instead of acting on the startupProbe? I tend to see the latter as the better way to cope with a long /setup phase like yours, especially in order to avoid people forgetting to change the livenessProbe values back, which might have a bigger (negative) impact than forgetting other settings… WDYT?
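
For reference, a startupProbe along these lines would hold off the liveness checks until SonarQube is actually up, without anyone having to remember a revert step afterwards. The handler shown is only a placeholder (in practice it should perform the same check the chart's own probes use), and the numbers are illustrative:

startupProbe:
  httpGet:
    # Placeholder handler - reuse the same check the chart's liveness probe performs
    path: /
    port: 9000
  periodSeconds: 30
  # The kubelet disables liveness/readiness checks until the startup probe succeeds,
  # so 20 x 30s allows roughly ten minutes for a slow /setup or schema migration
  # while the livenessProbe keeps its normal values.
  failureThreshold: 20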
