Hi Team, I created a SonarQube instance in a Kubernetes cluster using the Oteemo Helm charts. I used the chart below,
and I am getting the below error in Postgres. Please advise:
FATAL: password authentication failed for user "sonarUser"
DETAIL: Role "sonarUser" does not exist.
Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
The pg_hba.conf configuration allows the connections, and listen_addresses = '*' has been set in postgresql.conf. I tried with both an unencrypted and an encrypted password, but the same error persists.
Please suggest a fix. Your help is much appreciated!!
I also tried the official chart from SonarSource, and I am facing the same issue with it.
The sonarqube-postgresql pod is giving the below error:
postgresql 08:47:26.61 INFO ==> ** Starting PostgreSQL **
2022-02-28 08:47:26.636 GMT [1] LOG: pgaudit extension initialized
2022-02-28 08:47:26.636 GMT [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-02-28 08:47:26.636 GMT [1] LOG: listening on IPv6 address "::", port 5432
2022-02-28 08:47:26.639 GMT [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2022-02-28 08:47:26.663 GMT [90] LOG: database system was shut down at 2022-02-28 08:47:03 GMT
2022-02-28 08:47:26.787 GMT [1] LOG: database system is ready to accept connections
2022-02-28 08:47:35.969 GMT [103] FATAL: password authentication failed for user "sonarUser"
2022-02-28 08:47:35.969 GMT [103] DETAIL: Role "sonarUser" does not exist.
Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2022-02-28 08:47:37.462 GMT [104] LOG: incomplete startup packet
2022-02-28 08:47:45.939 GMT [111] FATAL: password authentication failed for user "sonarUser"
2022-02-28 08:47:45.939 GMT [111] DETAIL: Role "sonarUser" does not exist.
Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
I checked the configuration of the official chart installation as well: pg_hba.conf allows the connections, and listen_addresses = '*' has been set in postgresql.conf.
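For reference, the same failure can be reproduced from inside the postgresql pod like this (the namespace and password are placeholders for my setup, and the pg_hba.conf path is the one the Bitnami image uses):

# Attempt the same login that SonarQube uses (role names are case-sensitive, so "sonarUser" must exist with exactly that casing)
kubectl exec -n <namespace> sonarqube-postgresql-0 -- \
  env PGPASSWORD='<postgresql.postgresqlPassword>' psql -U sonarUser -d sonarDB -c '\conninfo'

# Inspect the generated client authentication rules
kubectl exec -n <namespace> sonarqube-postgresql-0 -- cat /opt/bitnami/postgresql/conf/pg_hba.conf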
This looks like the initialization of the Postgres user did not complete. Can you share your values.yaml?
We are leveraging the Bitnami PostgreSQL chart as a dependency if you want to manage your database in k8s. The postgresql.postgresqlPassword value should be propagated to that chart and used to initialize the PostgreSQL database.
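For reference, the relevant part of a values.yaml for the bundled database looks roughly like this. The username, password and database below are just the chart defaults, so treat them as placeholders and adjust them to your setup:

postgresql:
  enabled: true                    # deploy the bundled Bitnami PostgreSQL
  postgresqlUsername: "sonarUser"  # role the init scripts create on first start
  postgresqlPassword: "sonarPass"  # propagated to the Bitnami chart and used by SonarQube to connect
  postgresqlDatabase: "sonarDB"    # database that gets granted to sonarUser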
Sadly I am not able to reproduce the described error, and I still think there is something wrong with the initialization of the postgresql pod.
This is what I did:
ttrabelsi@verdandi ~/Downloads > kubectl create ns test
namespace/test created
ttrabelsi@verdandi ~/Downloads > mv values.txt values.yaml
ttrabelsi@verdandi ~/Downloads > helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "sonarqube" chart repository
Update Complete. ⎈Happy Helming!⎈
ttrabelsi@verdandi ~/Downloads > helm install -f values.yaml -n test sonarqube sonarqube/sonarqube
NAME: sonarqube
LAST DEPLOYED: Mon Feb 28 14:08:56 2022
NAMESPACE: test
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace test -l "app=sonarqube,release=sonarqube" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:9000 -n test
And then take a close look at the postgres pod:
ttrabelsi@verdandi ~ > kubectl logs -f -n test sonarqube-postgresql-0
postgresql 13:09:09.09
postgresql 13:09:09.10 Welcome to the Bitnami postgresql container
postgresql 13:09:09.10 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 13:09:09.10 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 13:09:09.10
postgresql 13:09:09.11 INFO ==> ** Starting PostgreSQL setup **
postgresql 13:09:09.14 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql 13:09:09.15 INFO ==> Loading custom pre-init scripts...
postgresql 13:09:09.15 INFO ==> Initializing PostgreSQL database...
postgresql 13:09:09.17 INFO ==> pg_hba.conf file not detected. Generating it...
postgresql 13:09:09.17 INFO ==> Generating local authentication configuration
postgresql 13:09:10.80 INFO ==> Starting PostgreSQL in background...
postgresql 13:09:11.24 INFO ==> Changing password of postgres
postgresql 13:09:11.25 INFO ==> Creating user sonarUser
postgresql 13:09:11.26 INFO ==> Granting access to "sonarUser" to the database "sonarDB"
postgresql 13:09:11.28 INFO ==> Setting ownership for the 'public' schema database "sonarDB" to "sonarUser"
postgresql 13:09:11.31 INFO ==> Configuring replication parameters
postgresql 13:09:11.33 INFO ==> Configuring fsync
postgresql 13:09:11.36 INFO ==> Loading custom scripts...
postgresql 13:09:11.36 INFO ==> Enabling remote connections
postgresql 13:09:11.38 INFO ==> Stopping PostgreSQL...
waiting for server to shut down.... done
server stopped
postgresql 13:09:11.48 INFO ==> ** PostgreSQL setup finished! **
postgresql 13:09:11.51 INFO ==> ** Starting PostgreSQL **
2022-02-28 13:09:11.523 GMT [1] LOG: pgaudit extension initialized
2022-02-28 13:09:11.523 GMT [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-02-28 13:09:11.523 GMT [1] LOG: listening on IPv6 address "::", port 5432
2022-02-28 13:09:11.525 GMT [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2022-02-28 13:09:11.539 GMT [153] LOG: database system was shut down at 2022-02-28 13:09:11 GMT
2022-02-28 13:09:11.543 GMT [1] LOG: database system is ready to accept connections
As you can see from the logs, it should create the pg_hba.conf, and SonarQube consumes the values afterwards. I am missing that generation step in the logs you provided so far.
May I suggest that you start over with your deployment, just to make sure that there was no typo or network error that is now causing issues, and so you can check the initialization of the postgresql pod for errors.
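If a PVC from a previous attempt is still around, the data directory will already be populated and the Bitnami init scripts skip the user and database creation entirely. You can check that with something like the following (namespace and statefulset name are the chart defaults, and the data path is the one the Bitnami image uses):

# Look for a leftover PVC from an earlier installation
kubectl get pvc -n <namespace>

# If this directory is already populated, the init steps (creating sonarUser, granting access, ...) are skipped
kubectl exec -n <namespace> sonarqube-postgresql-0 -- ls /bitnami/postgresql/data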
I could see that in my cluster Postgres is getting installed with persisted data and is missing the 'creating password of postgres, creating user sonarUser, granting access and setting ownership' steps.
You can uninstall the Helm release (helm uninstall <release name> -n <namespace>) and delete the PVC of the postgresql StatefulSet manually afterwards. After this is done there should be nothing left from the previous installation, and you can start with a clean slate.
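Something along these lines should do it. The release, namespace and PVC names below are placeholders; the PVC of the bundled statefulset is usually called data-<release>-postgresql-0, but check kubectl get pvc for the actual name:

# Remove the release and the leftover database volume, then install again
helm uninstall sonarqube -n <namespace>
kubectl delete pvc data-sonarqube-postgresql-0 -n <namespace>
helm install -f values.yaml -n <namespace> sonarqube sonarqube/sonarqube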