Must-share information (formatted with Markdown):
- which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension)
- what are you trying to achieve
- what have you tried so far to achieve this
Hello folks - I am trying to install SonarQube Community Edition in a Kubernetes v1.24 cluster using Deploy a SonarQube Cluster on Kubernetes | SonarQube Docs. The only update I made to the default values.yaml was setting `volumePermissions.enabled = true` to allow the sonarqube-postgresql-0 pod access to the PVC mount. When this pod initially starts, it logs up to the lines below, then the pod terminates and restarts.
```
INFO ==> Initializing PostgreSQL database...
INFO ==> pg_hba.conf file not detected. Generating it...
INFO ==> Generating local authentication configuration
```
On restart the pod seems to complete PostgreSQL startup but does not create the sonarUser role, so nothing can log into the DB. I'm looking for any hints on how to get the initialization to create the user so that the DB can be logged into.
I have retried from scratch by deleting the Helm release and the PVC, but I get the same results - any assistance would be appreciated.
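For reference, the only override I applied on top of the chart defaults was the following values fragment (the key sits under the bundled Bitnami postgresql sub-chart):

```yaml
# Values override passed to `helm install sonarqube` -- the only change
# from the chart defaults, as described above.
postgresql:
  volumePermissions:
    # Run the init container that chowns the PVC mount so the
    # postgres user (uid 1001) can write to the NFS-backed volume.
    enabled: true
```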
Thanks for your post, very interesting topic. Given that you are deploying Community Edition, you are probably following Deploy SonarQube on Kubernetes | SonarQube Docs (if not, please follow those steps, as the page you mentioned relates to the Data Center Edition).
To help us replicate the issue: how do you currently mount the volume to sonarqube-postgresql-0? Can you share the k8s manifest files?
Hi Carmine - thanks for the response, and you are correct: I posted the wrong link, and I have been following the steps in the link you provided.
Any suggestions for figuring this out would be greatly appreciated.
The PVC is handled by the IBM Kubernetes Service (IKS) using an NFS mount.
```
kubectl get pvc -A
NAMESPACE   NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
sonarqube   data-sonarqube-postgresql-0   Bound    pvc-4d364f93-fa5c-4096-a0e0-8d3e050be5a8   20Gi       RWO            ibmc-file-gold   21h
```

```
kubectl get pv -A
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS     REASON   AGE
pvc-4d364f93-fa5c-4096-a0e0-8d3e050be5a8   20Gi       RWO            Delete           Bound    sonarqube/data-sonarqube-postgresql-0   ibmc-file-gold            21h
```
I think the mount is working OK once I set `volumePermissions.enabled = true` in the Helm values overrides (prior to this, the postgresql pod failed with a permissions error accessing the mount).
Here is the relevant output for statefulset.apps/sonarqube-postgresql:

```
kubectl describe statefulset.apps/sonarqube-postgresql -n sonarqube
CreationTimestamp:  Thu, 29 Sep 2022 14:47:59 -0500
Annotations:        meta.helm.sh/release-name: sonarqube
Replicas:           1 desired | 1 total
Update Strategy:    RollingUpdate
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
  chown 1001:1001 /bitnami/postgresql
  mkdir -p /bitnami/postgresql/data
  chmod 700 /bitnami/postgresql/data
  find /bitnami/postgresql -mindepth 1 -maxdepth 1 -not -name "conf" -not -name ".snapshot" -not -name "lost+found" | \
    xargs chown -R 1001:1001
  /bitnami/postgresql from data (rw)
  /dev/shm from dshm (rw)
  Host Port: 0/TCP
  Liveness:  exec [/bin/sh -c exec pg_isready -U "sonarUser" -d "dbname=sonarDB" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
  Readiness: exec [/bin/sh -c -e exec pg_isready -U "sonarUser" -d "dbname=sonarDB" -h 127.0.0.1 -p 5432
    [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
    ] delay=5s timeout=5s period=10s #success=1 #failure=6
  POSTGRES_POSTGRES_PASSWORD: <set to the key 'postgresql-postgres-password' in secret 'sonarqube-postgresql'> Optional: false
  POSTGRES_PASSWORD:          <set to the key 'postgresql-password' in secret 'sonarqube-postgresql'> Optional: false
  /bitnami/postgresql from data (rw)
  /dev/shm from dshm (rw)
  Type: EmptyDir (a temporary directory that shares a pod's lifetime)
  Access Modes: [ReadWriteOnce]
```
The postgresql pod is running (after the one restart I mentioned above), but the liveness probe fails with:
```
2022-09-29 20:18:53.883 GMT  FATAL: password authentication failed for user "sonarUser"
2022-09-29 20:18:53.883 GMT  DETAIL: Role "sonarUser" does not exist.
Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
```
```
kubectl get pod -n sonarqube
NAME                     READY   STATUS    RESTARTS         AGE
sonarqube-postgresql-0   1/1     Running   1 (22h ago)      22h
sonarqube-sonarqube-0    0/1     Running   247 (5m4s ago)   22h
```
Thanks for the additional info you provided. Unfortunately, we were not able to replicate your issue. Specifically, we tried restarting the postgres StatefulSet, and both SonarQube and a common postgres DB client could still authenticate and query the DB with the previous credentials.
A few observations from our side:
- In your case, the postgres pod gets restarted when installing the chart. This is quite unusual behavior, as the StatefulSet should be applied without issues.
- At the moment we do not support any specific cloud provider. This might be an issue induced by missing IKS compatibility, although we somewhat doubt it. The issue might simply be that, when the postgres chart is applied, some error leaves the DB files in a corrupted state, and after the restart the DB can no longer be accessed with the same credentials. This is suggested by the following lines from the official postgres image docs:
> Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
What do we suggest you do next?
We are talking about values (e.g., `postgresql.volumePermissions.enabled`) that are set in the Bitnami postgres chart, and the issue you are seeing seems to be related to that chart only. Could you try to install that chart alone and see whether you can replicate the issue?
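To sketch what such a standalone test could look like, a values fragment for the Bitnami postgresql chart along these lines would mirror the settings the SonarQube chart passes to its sub-chart. Key names follow the Bitnami postgresql 10.x layout that the SonarQube chart embedded at the time (newer Bitnami chart versions moved these under `auth.*`), and sonarUser/sonarPass/sonarDB are the SonarQube chart defaults; verify all of this against your chart versions:

```yaml
# Hedged sketch of values.yaml for `helm install pg bitnami/postgresql`,
# reproducing the embedded sub-chart's configuration in isolation.
postgresqlUsername: sonarUser     # role the liveness probe expects
postgresqlPassword: sonarPass     # chart default; change for real use
postgresqlDatabase: sonarDB
volumePermissions:
  enabled: true                   # same chown init container as before
persistence:
  storageClass: ibmc-file-gold    # the IKS NFS class from the PVC above
  size: 20Gi
```

If the sonarUser role also fails to be created with this standalone install, that would point at the Bitnami chart (or its interaction with the NFS-backed storage class) rather than the SonarQube chart itself.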
Generally speaking, the bundled postgres chart should be used for testing purposes only (e.g., when you want to try the chart first). For production environments, you should ideally decouple the postgres deployment from the SonarQube one (i.e., you can inject sonar properties into our chart to connect to an external database). Above all, we highly recommend taking this direction.
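As a sketch of that decoupled setup, the SonarQube chart exposes `jdbcOverwrite.*` values for pointing at an externally managed database. The key names below are taken from the chart's values.yaml as we recall them, and the hostname is a placeholder; double-check both against the chart version you deploy:

```yaml
# Hedged sketch: disable the bundled postgres and point SonarQube at
# an external database instead.
postgresql:
  enabled: false                  # do not deploy the Bitnami sub-chart
jdbcOverwrite:
  enable: true
  jdbcUrl: jdbc:postgresql://my-external-postgres:5432/sonarDB  # hypothetical host
  jdbcUsername: sonarUser
  # Prefer a secret over an inline password in real deployments:
  jdbcSecretName: sonarqube-jdbc
  jdbcSecretPasswordKey: jdbc-password
```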
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.