SonarQube Helm chart deployment (Enterprise Edition)

Must-share information (formatted with Markdown):

  • Which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension): latest charts (10.6.0 according to GitHub)
  • How is SonarQube deployed: Helm
  • What are you trying to achieve: my goal is to install SonarQube Enterprise Edition on an AWS cluster using Helm
  • What have you tried so far to achieve this: I have used the Helm chart connecting to an RDS database. I get perpetually Pending pods when persistence is enabled. I also tried to deploy the chart as is, and that also gets a Pending pod for PostgreSQL, which I think holds up everything else.

Do not share screenshots of logs – share the text itself (bonus points for being well-formatted)!

Running kubectl logs sonarqube-sonarqube-0 -n sonarqube gives me:
Defaulted container "sonarqube" out of: sonarqube, init-sysctl (init)
Running kubectl logs sonarqube-sonarqube-0 init-sysctl -n sonarqube gives me nothing.

Sonarqube Helm chart for community support · GitHub: this Helm chart (with persistence disabled) works, i.e. the pod is running and ready, but I can't reach the web server on either the pod IP or the instance IP. Port 9000 is open.

Hello @Sooter_Saalu thanks a lot for participating in the community.

Could I ask if you've used any ingress/ingress controllers? What access method are you using to access the app?

Thank you for responding. I didn't set up an ingress. I tried port-forwarding and using the pod's public IP.

If the pods are healthy and up and running, port forwarding and access on localhost:9000 should work. Please make sure you don’t have SonarWebContext set; otherwise, you need to suffix that to the URL.
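
For reference, a minimal sketch of that check, assuming the pod and namespace names used in this thread:

# Forward local port 9000 to the SonarQube pod
kubectl port-forward pod/sonarqube-sonarqube-0 9000:9000 -n sonarqube

# From another shell on the same machine, the status endpoint should answer
curl http://localhost:9000/api/system/status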

Persistence is optional and slightly improves performance. If your cluster does not have a proper storage class, this might be why the pods keep being in the Pending state.
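
As a quick check (a sketch, assuming the namespace used in this thread), you can confirm that a default storage class exists and that the chart's PVC actually binds:

# List storage classes; one should be marked (default)
kubectl get storageclass

# The SonarQube PVC should be Bound, not Pending
kubectl get pvc -n sonarqube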

Could you share a bit more info, like the output of kubectl get pods and the error you get when accessing through port forwarding?

Thank you. There's the default gp2. Should I create another one? What would be the required/proper settings?

Name:                  gp2
IsDefaultClass:        Yes
Annotations:           kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","volumeBindingMode":"WaitForFirstConsumer"}
                       ,storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/aws-ebs
Parameters:            fsType=ext4,type=gp2
AllowVolumeExpansion:
MountOptions:
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:

Web context is left empty. kubectl get pods -n sonarqube shows the pod running (screenshot added). The error when accessing the pod IP is "can't reach, took too long to respond". Security group rules are open on 9000.

Your storage class is marked as default; nothing else should be required; we will troubleshoot this afterward.

With the pod IP it makes sense that you cannot access it, as it will not be exposed outside of the cluster unless you specify NodePort or LoadBalancer as the chart's service type. That is not recommended for production, since it would expose SonarQube directly without a reverse proxy, but it might help for debugging.
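
A hedged sketch of that debugging override, assuming the chart repo is aliased sonarqube and exposes a service.type value, as the official SonarSource chart does:

# Temporarily expose SonarQube through a LoadBalancer service (debugging only)
helm upgrade --install sonarqube sonarqube/sonarqube \
  -n sonarqube \
  --set service.type=LoadBalancer \
  --reuse-values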

Otherwise, I see the port-forwarding seems to work; at least it does not close automatically.

What happens when you curl localhost:8080 from the exact same machine/CloudShell you are running the port-forward from?

Alternatively, you can try the full loop: install an ingress controller and enable the SonarQube Helm chart ingress.

Is this a resource you are familiar with?

Curling 8080 from the CloudShell goes through:

<body>
    <div id="content" data-base-url="" data-server-status="UP" data-instance="SonarQube" data-official="true">
        <div class="global-loading">
            <i class="global-loading-spinner"></i>
            <span aria-live="polite" class="global-loading-text">Loading...</span>
        </div>

For the ingress: so first deploying the "ingress-nginx" controller, then editing the SonarQube values.yaml along these lines?

ingress-nginx:
  enabled: true

ingress:
  enabled: true
  hosts:
    - name: # can I use an associated Public IPv4 DNS?
  # ...
  annotations:

Nice to hear that!

If you are not familiar with ingresses, I suggest you read up extensively on the concept and documentation; this is the critical piece of software that will expose your application and can cause many security issues.

The workflow is indeed to deploy ingress-nginx, which will create a LoadBalancer Kubernetes service. You'll then have to configure your SonarQube DNS name to point to this load balancer's address, and then create the ingress and put the DNS name in the host section of the ingress.
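
A minimal sketch of that workflow, assuming the upstream ingress-nginx chart, a repo alias of sonarqube for the SonarSource chart, and an illustrative hostname sonarqube.example.com:

# 1. Install the ingress-nginx controller; it creates a LoadBalancer service
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace

# 2. Get the load balancer address and point your DNS record at it
kubectl get svc ingress-nginx-controller -n ingress-nginx

# 3. Enable the SonarQube chart ingress with that DNS name
helm upgrade --install sonarqube sonarqube/sonarqube \
  -n sonarqube \
  --set ingress.enabled=true \
  --set ingress.hosts[0].name=sonarqube.example.com \
  --reuse-values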


@jeremy.cotineau Thank you very much for that.
Do you have any ideas on debugging the persistence issue?

Sure, you can turn persistence back on, and when the pod is pending, there are multiple commands to see what is happening.

kubectl get events
kubectl describe pod/deployment

Those two commands, with the name of the concerned object, should tell you why the pod is pending. If you do not see anything there, then you should take a look at the logs of your CSI controller.
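
A hedged sketch of those checks, assuming the namespace and names from this thread and that the AWS EBS CSI driver is installed under its default name in kube-system:

# Events and description for the pending pod
kubectl get events -n sonarqube --sort-by=.lastTimestamp
kubectl describe pod sonarqube-sonarqube-0 -n sonarqube

# A storage problem usually shows up on the PVC
kubectl describe pvc sonarqube-sonarqube -n sonarqube

# If nothing useful appears, check the CSI controller logs
kubectl logs -n kube-system deployment/ebs-csi-controller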

@jeremy.cotineau
describe

Name:             sonarqube-sonarqube-0
Namespace:        sonarqube
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=sonarqube
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=sonarqube-sonarqube-6678fd658b
                  release=sonarqube
                  statefulset.kubernetes.io/pod-name=sonarqube-sonarqube-0
Annotations:      checksum/config: 5544f38762b6fef34865aa34057b29c4288cb68ea50ed7dea0e00d2bae2a147d
                  checksum/init-fs: 666df574d12897dd258754e9dc0f8be60e51a5dae582657f398cd1d21dfc0967
                  checksum/init-sysctl: f368552ed051397020914eb3dfbb8a13fa2745e307f7e1f3665ddbdd267264a1
                  checksum/plugins: c5a8e5568c6a915d43f0e5bd0307360d3b146896677b5c94f94762abda803748
                  checksum/secret: 870e91703075b4f93cff434e8a264e1709f6dd7a8e095a5bb55811c96cbb5fd9
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    StatefulSet/sonarqube-sonarqube
Init Containers:
  init-sysctl:
    Image:      sonarqube:10.6.0-enterprise
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
      -e
      /tmp/scripts/init_sysctl.sh
    Environment:
      SONAR_WEB_CONTEXT:   /
      SONAR_WEB_JAVAOPTS:  
      SONAR_CE_JAVAOPTS:   
    Mounts:
      /tmp/scripts/ from init-sysctl (rw)
Containers:
  sonarqube:
    Image:           sonarqube:10.6.0-enterprise
    Port:            9000/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    Limits:
      cpu:                800m
      ephemeral-storage:  512G
      memory:             6G
    Requests:
      cpu:                400m
      ephemeral-storage:  5G
      memory:             5G
    Liveness:             exec [sh -c wget --no-proxy --quiet -O /dev/null --timeout=1 --header="X-Sonar-Passcode: $SONAR_WEB_SYSTEMPASSCODE" "http://localhost:9000/api/system/liveness"
] delay=60s timeout=1s period=30s #success=1 #failure=6
    Readiness:  exec [sh -c #!/bin/bash
# A Sonarqube container is considered ready if the status is UP, DB_MIGRATION_NEEDED or DB_MIGRATION_RUNNING
# status about migration are added to prevent the node to be kill while sonarqube is upgrading the database.
if wget --no-proxy -qO- http://localhost:9000/api/system/status | grep -q -e '"status":"UP"' -e '"status":"DB_MIGRATION_NEEDED"' -e '"status":"DB_MIGRATION_RUNNING"'; then
  exit 0
fi
exit 1
] delay=60s timeout=1s period=30s #success=1 #failure=6
    Startup:  http-get http://:http/api/system/status delay=30s timeout=1s period=10s #success=1 #failure=24
    Environment Variables from:
      sonarqube-sonarqube-jdbc-config  ConfigMap  Optional: false
    Environment:
      SONAR_HELM_CHART_VERSION:  10.6.1_3163
      SONAR_JDBC_PASSWORD:       <set to the key 'jdbc-password' in secret 'sonarqube-sonarqube'>                                 Optional: false
      SONAR_WEB_SYSTEMPASSCODE:  <set to the key 'SONAR_WEB_SYSTEMPASSCODE' in secret 'sonarqube-sonarqube-monitoring-passcode'>  Optional: false
      SONAR_WEB_CONTEXT:         /
      SONAR_WEB_JAVAOPTS:        
      SONAR_CE_JAVAOPTS:         
    Mounts:
      /opt/sonarqube/data from sonarqube (rw,path="data")
      /opt/sonarqube/extensions from sonarqube (rw,path="extensions")
      /opt/sonarqube/logs from sonarqube (rw,path="logs")
      /opt/sonarqube/temp from sonarqube (rw,path="temp")
      /tmp from tmp-dir (rw)
Volumes:
  init-sysctl:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      sonarqube-sonarqube-init-sysctl
    Optional:  false
  sonarqube:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  sonarqube-sonarqube
    ReadOnly:   false
  tmp-dir:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:      
    SizeLimit:   <unset>
QoS Class:       Burstable
Node-Selectors:  sonarqube=true
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
                 sonarqube=true:NoSchedule
Events:          <none>

kubectl events --for pod/sonarqube-sonarqube-0 -n sonarqube

LAST SEEN           TYPE      REASON             OBJECT                      MESSAGE
35m                 Normal    Scheduled          Pod/sonarqube-sonarqube-0   Successfully assigned sonarqube/sonarqube-sonarqube-0 to ip-172-31-89-17.ec2.internal
35m                 Normal    Pulling            Pod/sonarqube-sonarqube-0   Pulling image "sonarqube:10.6.0-enterprise"
34m                 Normal    Pulled             Pod/sonarqube-sonarqube-0   Successfully pulled image "sonarqube:10.6.0-enterprise" in 53.696s (53.696s including waiting)
34m                 Normal    Created            Pod/sonarqube-sonarqube-0   Created container init-sysctl
34m                 Normal    Started            Pod/sonarqube-sonarqube-0   Started container init-sysctl
33m                 Normal    Pulled             Pod/sonarqube-sonarqube-0   Container image "sonarqube:10.6.0-enterprise" already present on machine
33m                 Normal    Created            Pod/sonarqube-sonarqube-0   Created container sonarqube
33m                 Normal    Started            Pod/sonarqube-sonarqube-0   Started container sonarqube
33m                 Warning   Unhealthy          Pod/sonarqube-sonarqube-0   Startup probe failed: Get "http://172.31.84.135:9000/api/system/status": dial tcp 172.31.84.135:9000: connect: connection refused
32m (x2 over 33m)   Warning   Unhealthy          Pod/sonarqube-sonarqube-0   Startup probe failed: Get "http://172.31.84.135:9000/api/system/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
30m (x7 over 32m)   Warning   Unhealthy          Pod/sonarqube-sonarqube-0   Readiness probe failed:
30m                 Warning   Unhealthy          Pod/sonarqube-sonarqube-0   Liveness probe failed:
12m                 Normal    Killing            Pod/sonarqube-sonarqube-0   Stopping container sonarqube
7m5s                Warning   FailedScheduling   Pod/sonarqube-sonarqube-0   running PreBind plugin "VolumeBinding": binding volumes: pod does not exist any more: pod "sonarqube-sonarqube-0" not found

Looks like it was permission-based. It used EBS as an external provisioner, and an IAM role is needed with AWS.
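
For anyone hitting the same issue, a hedged sketch of granting that permission with eksctl, assuming an illustrative cluster name my-cluster and the EBS CSI driver running as an EKS add-on with IAM Roles for Service Accounts; <account-id> stands for your AWS account ID:

# Create an IAM role for the EBS CSI controller's service account
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole

# Attach that role to the aws-ebs-csi-driver add-on
eksctl create addon --name aws-ebs-csi-driver --cluster my-cluster \
  --service-account-role-arn arn:aws:iam::<account-id>:role/AmazonEKS_EBS_CSI_DriverRole --force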

Thank you!