Deploying SonarQube on GKE using the official SonarQube Helm chart


Hi all,

I’m trying to deploy SonarQube Community Edition 10.5.0 on GKE using the Helm chart. I tried it both as a Deployment and as a StatefulSet, and I can confirm the pod is up and running from the pod logs. However, when I access the host URL I get a 404 error, and in the ingress logs I can see unhealthy backends. Can someone help me?
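For reference, I’m installing the chart roughly like this (the release name and namespace here are placeholders for my own values):

```bash
# Add the official SonarSource chart repository and deploy with a custom values file
helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
helm repo update
helm upgrade --install sonarqube sonarqube/sonarqube \
  --namespace sonarqube --create-namespace \
  -f values.yaml
```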

Below is the values.yaml config I was using.

```yaml
# Default values for sonarqube.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# If the deployment Type is set to Deployment sonarqube is deployed as a replica set.
deploymentType: "Deployment"

# There should not be more than 1 sonarqube instance connected to the same database. Please set this value to 1 or 0 (in case you need to scale down programmatically).
replicaCount: 1

# How many revisions to retain (Deployment ReplicaSets or StatefulSets)
revisionHistoryLimit: 10

# This will use the default deployment strategy unless it is overridden
deploymentStrategy: {}
# Uncomment this to scheduler pods on priority
# priorityClassName: "high-priority"

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## Is this deployment for OpenShift? If so, we help with SCCs
OpenShift:
  enabled: false
  createSCC: true

edition: "community"

image:
  repository: sonarqube
  tag: 10.5.0-{{ .Values.edition }}
  pullPolicy: IfNotPresent
  # If using a private repository, the imagePullSecrets to use
  # pullSecrets:
  #   - name: my-repo-secret

# Set security context for sonarqube pod
securityContext:
  fsGroup: 0

# Set security context for sonarqube container
containerSecurityContext:
  # Sonarqube dockerfile creates sonarqube user as UID and GID 0
  # Those default are used to match pod security standard restricted as least privileged approach
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  runAsUser: 1000
  runAsGroup: 0
  seccompProfile:
    type: RuntimeDefault
  capabilities:
    drop: ["ALL"]

# Settings to configure elasticsearch host requirements
elasticsearch:
  # DEPRECATED: Use initSysctl.enabled instead
  configureNode: false
  bootstrapChecks: true

service:
  type: ClusterIP
  externalPort: 9000
  internalPort: 9000
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
caCerts:
  enabled: false
# Optionally create Network Policies
networkPolicy:
  enabled: false

  # If you plan on using the jmx exporter, you need to define where the traffic is coming from
  prometheusNamespace: "monitoring"

  # If you are using an external database and enable network Policies to be created
  # you will need to explicitly allow egress traffic to your database
  # expects https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#networkpolicyspec-v1-networking-k8s-io
  # additionalNetworkPolicys:

# will be used as default for ingress path and probes path, will be injected in .Values.env as SONAR_WEB_CONTEXT
# if .Values.env.SONAR_WEB_CONTEXT is set, this value will be ignored
sonarWebContext: "<company.com>"

# (DEPRECATED) please use ingress-nginx instead
# nginx:
#   enabled: false

# Install the nginx ingress helm chart
ingress-nginx:
  enabled: false

  # You can add here any values from the official nginx ingress chart
  # controller:
  #   replicaCount: 3

ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: <company.com>
      # Different clouds or configurations might need /* as the default path
      path: /
      # For additional control over serviceName and servicePort
      #serviceName: someService
      #servicePort: somePort
      pathType: Prefix
  ingressClassName: "gce-internal"
  annotations:
   #kubernetes.io/ingress.class: "gce-internal"
   ingress.gcp.kubernetes.io/pre-shared-cert: "sonarqube-cert"
   kubernetes.io/ingress.regional-static-ip-name: "sonarqube"

  # Set the ingressClassName on the ingress record
  # ingressClassName: nginx

# Additional labels for Ingress manifest file
  # labels:
  #  traffic-type: external
  #  traffic-type: internal
  tls: []
  # Secrets must be manually created in the namespace. To generate a self-signed certificate (and private key) and then create the secret in the cluster please refer to official documentation available at https://kubernetes.github.io/ingress-nginx/user-guide/tls/#tls-secrets
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local

route:
  enabled: false
  host: ""
  # Add tls section to secure traffic. TODO: extend this section with other secure route settings
  # Comment this out if you want plain http route created.
  tls:
    termination: edge

  annotations: {}
  # See Openshift/OKD route annotation
  # https://docs.openshift.com/container-platform/4.10/networking/routes/route-configuration.html#nw-route-specific-annotations_route-configuration
  # haproxy.router.openshift.io/timeout: 1m

  # Additional labels for Route manifest file
  # labels:
  #  external: 'true'

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
# taint a node with the following command to mark it as not schedulable for new pods
# kubectl taint nodes <node> sonarqube=true:NoSchedule
# The following statement will tolerate this taint and as such reserve a node for sonarqube
tolerations:
  - key: "app"
    value: "sonarcube"
    effect: "NoSchedule"

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
# add a label to a node with the following command.
# kubectl label node <node> sonarqube=true
nodeSelector: {}
#  sonarqube: "true"

# hostAliases allows the modification of the hosts file inside a container
hostAliases: []
# - ip: "192.168.1.10"
#   hostnames:
#   - "example.com"
#   - "www.example.com"

readinessProbe:
  exec: 
    command:
    - sh
    - -c
    - | 
      #!/bin/bash
      # A Sonarqube container is considered ready if the status is UP, DB_MIGRATION_NEEDED or DB_MIGRATION_RUNNING
      # The migration statuses are included to prevent the pod from being killed while SonarQube is upgrading the database.
      if wget --no-proxy -qO- http://localhost:{{ .Values.service.internalPort }}{{ .Values.readinessProbe.sonarWebContext | default (include "sonarqube.webcontext" .) }}api/system/status | grep -q -e '"status":"UP"' -e '"status":"DB_MIGRATION_NEEDED"' -e '"status":"DB_MIGRATION_RUNNING"'; then
        exit 0
      fi
      exit 1
  initialDelaySeconds: 60
  periodSeconds: 30
  failureThreshold: 6
  # Note that timeoutSeconds was not respected before Kubernetes 1.20 for exec probes
  timeoutSeconds: 1
  # If an ingress *path* other than the root (/) is defined, it should be reflected here
  # A trailing "/" must be included
  # deprecated please use sonarWebContext at the value top level
  # sonarWebContext: /

livenessProbe:
  exec: 
    command: 
    - sh
    - -c
    - | 
      wget --no-proxy --quiet -O /dev/null --timeout={{ .Values.livenessProbe.timeoutSeconds }} --header="X-Sonar-Passcode: $SONAR_WEB_SYSTEMPASSCODE" "http://localhost:{{ .Values.service.internalPort }}{{ .Values.livenessProbe.sonarWebContext | default (include "sonarqube.webcontext" .) }}api/system/liveness"

  initialDelaySeconds: 60
  periodSeconds: 30
  failureThreshold: 6
  # Note that timeoutSeconds was not respected before Kubernetes 1.20 for exec probes
  timeoutSeconds: 1
  # If an ingress *path* other than the root (/) is defined, it should be reflected here
  # A trailing "/" must be included
  # deprecated please use sonarWebContext at the value top level
  # sonarWebContext: /

startupProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 24
  # Note that timeoutSeconds was not respected before Kubernetes 1.20 for exec probes
  timeoutSeconds: 1
  # If an ingress *path* other than the root (/) is defined, it should be reflected here
  # A trailing "/" must be included
  # deprecated please use sonarWebContext at the value top level
  # sonarWebContext: /

initContainers:
  # image: busybox:1.36
  # We allow the init containers to have a separate security context declaration because
  # the initContainer may not require the same as SonarQube.
  # Those default are used to match pod security standard restricted as least privileged approach
  securityContext:
    allowPrivilegeEscalation: false
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 0
    seccompProfile:
      type: RuntimeDefault
    capabilities:
      drop: ["ALL"]
  # We allow the init containers to have a separate resources declaration because
  # the initContainer does not take as much resources.
  resources: {}

# Extra init containers to e.g. download required artifacts
extraInitContainers: {}

## Array of extra containers to run alongside the sonarqube container
##
## Example:
## - name: myapp-container
##   image: busybox
##   command: ['sh', '-c', 'echo Hello && sleep 3600']
##
extraContainers: []


initSysctl:
  enabled: true
  vmMaxMapCount: 524288
  fsFileMax: 131072
  nofile: 131072
  nproc: 8192
  # image: busybox:1.36
  securityContext:
    # Compatible with podSecurity standard privileged
    privileged: true
    # if run without root permissions, error "sysctl: permission denied on key xxx, ignoring"
    runAsUser: 0
  # resources: {}

# This should not be required anymore, used to chown/chmod folder created by faulty CSI driver that are not applying properly POSIX fsgroup.
initFs:
  enabled: true
  # Image: busybox:1.36
  # Compatible with podSecurity standard baseline.
  securityContext:
    privileged: false
    runAsNonRoot: false
    runAsUser: 0
    runAsGroup: 0
    seccompProfile:
      type: RuntimeDefault
    capabilities:
      drop: ["ALL"]
      add: ["CHOWN"]

prometheusExporter:
  enabled: false
  # jmx_prometheus_javaagent version to download from Maven Central
  version: "0.17.2"
  # Alternative full download URL for the jmx_prometheus_javaagent.jar (overrides prometheusExporter.version)
  # downloadURL: ""
  # if you need to ignore TLS certificates for whatever reason enable the following flag
  noCheckCertificate: false

  # Ports for the jmx prometheus agent to export metrics at
  webBeanPort: 8000
  ceBeanPort: 8001

  config:
    rules:
      - pattern: ".*"
  # Overrides config for the CE process Prometheus exporter (by default, the same rules are used for both the Web and CE processes).
  # ceConfig:
  #   rules:
  #     - pattern: ".*"
  # image: curlimages/curl:8.2.1
  # For use behind a corporate proxy when downloading prometheus
  # httpProxy: ""
  # httpsProxy: ""
  # noProxy: ""
  # Reuse default initcontainers.securityContext that match restricted pod security standard
  # securityContext: {}

prometheusMonitoring:
  # Generate a Prometheus Pod Monitor (https://github.com/coreos/prometheus-operator)
  #
  podMonitor:
    # Create PodMonitor Resource for Prometheus scraping
    enabled: false
    # (DEPRECATED) Specify a custom namespace where the PodMonitor will be created.
    # This value should not be set, as the PodMonitor's namespace has to match the Release Namespace.
    # namespace: "default"
    # Specify the interval how often metrics should be scraped
    interval: 30s
    # Specify the timeout after a scrape is ended
    # scrapeTimeout: ""
    # Name of the label on target services that prometheus uses as job name
    # jobLabel: ""

# List of plugins to install.
# For example:
# plugins:
#  install:
#    - "https://github.com/AmadeusITGroup/sonar-stash/releases/download/1.3.0/sonar-stash-plugin-1.3.0.jar"
#    - "https://github.com/SonarSource/sonar-ldap/releases/download/2.2-RC3/sonar-ldap-plugin-2.2.0.601.jar"
#
plugins:
  install: []

  # For use behind a corporate proxy when downloading plugins
  # httpProxy: ""
  # httpsProxy: ""
  # noProxy: ""

  # image: curlimages/curl:8.2.1
  # resources: {}

  # .netrc secret file with a key "netrc" to use basic auth while downloading plugins
  # netrcCreds: ""

  # Set to true to not validate the server's certificate to download plugin
  noCheckCertificate: false
  # Reuse default initcontainers.securityContext that match restricted pod security standard
  # securityContext: {}

## (DEPRECATED) The following value sets SONAR_WEB_JAVAOPTS (e.g., jvmOpts: "-Djava.net.preferIPv4Stack=true"). However, this is deprecated, please set SONAR_WEB_JAVAOPTS or sonar.web.javaOpts directly instead.
jvmOpts: ""

## (DEPRECATED) The following value sets SONAR_CE_JAVAOPTS. However, this is deprecated, please set SONAR_CE_JAVAOPTS or sonar.ce.javaOpts directly instead.
jvmCeOpts: ""

## a monitoring passcode needs to be defined in order to get reasonable probe results
# not setting the monitoring passcode will result in a deployment that will never be ready
monitoringPasscode: "define_it"
# Alternatively, you can define the passcode loading it from an existing secret specifying the right key
# monitoringPasscodeSecretName: "pass-secret-name"
# monitoringPasscodeSecretKey: "pass-key"

## Environment variables to attach to the pods
##
# env:
#   # If you use a different ingress path from /, you have to add it here as the value of SONAR_WEB_CONTEXT
#   - name: SONAR_WEB_CONTEXT
#     value: /sonarqube
#   - name: VARIABLE
#     value: my-value

# Set annotations for pods
annotations: {}

## We usually don't make specific resource recommendations, as they are heavily dependent on
## the usage of SonarQube and the surrounding infrastructure.
## Those default are based on the default Web -Xmx1G -Xms128m and CE -Xmx2G -Xms128m and Search -Xmx2G -Xms2G settings of SQ sub processes
## Adjust these values to your needs, you can find more details on the main README of the chart.
resources:
  limits:
    cpu: 2000m
    memory: 4096M
    ephemeral-storage: 512000M
  requests:
    cpu: 400m
    memory: 2048M
    ephemeral-storage: 1536M

persistence:
  enabled: true
  ## Set annotations on pvc
  

  ## Specify an existing volume claim instead of creating a new one.
  ## When using this option all following options like storageClass, accessMode and size are ignored.
  existingClaim: sonarqube-pvc

  volumes:
  - name: certs-volume
    secret:
      secretName: certs-secret
  mounts:
  - name: certs-volume
    mountPath: /tmp/custom-certs

  ## Specify extra volumes. Refer to ".spec.volumes" specification : https://kubernetes.io/fr/docs/concepts/storage/volumes/

  ## Specify extra mounts. Refer to ".spec.containers.volumeMounts" specification : https://kubernetes.io/fr/docs/concepts/storage/volumes/
postgresql: 
  enabled: false
## Override JDBC values
## for external Databases

jdbcOverwrite:
  # If enable the JDBC Overwrite, make sure to set `postgresql.enabled=false`
  enable: true
  # The JDBC url of the external DB
  jdbcUrl: "jdbc:postgresql://<myip>:5432/sonar?ssl=true&sslmode=verify-ca&sslrootcert=/tmp/custom-certs/server-ca.pem&sslkey=/tmp/custom-certs/client-key.pk8&sslcert=/tmp/custom-certs/client-cert.pem"
  # The DB user that should be used for the JDBC connection
  jdbcUsername: "sonarcube"
  jdbcSecretName: "sonar"
  jdbcSecretPasswordKey: "password"
  
podLabels: {}

sonarqubeFolder: /opt/sonarqube

tests:
  image: ""
  enabled: true
  resources:
    limits:
      cpu: 500m
      memory: 200M

# For OpenShift set create=true to ensure service account is created.
serviceAccount:
  create: false
  # name:
  # automountToken: false # default
  ## Annotations for the Service Account
  annotations: {}



extraConfig:
  secrets: []
  configmaps: []



terminationGracePeriodSeconds: 60
```

Below are the pod logs:

```
2024.06.10 21:12:26 INFO  ce[][c.z.h.HikariDataSource] HikariPool-1 - Starting...
2024.06.10 21:12:27 INFO  ce[][c.z.h.p.HikariPool] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@2515b68
2024.06.10 21:12:27 INFO  ce[][c.z.h.HikariDataSource] HikariPool-1 - Start completed.
2024.06.10 21:12:30 INFO  ce[][o.s.s.p.ServerFileSystemImpl] SonarQube home: /opt/sonarqube
2024.06.10 21:12:30 INFO  ce[][o.s.c.c.CePluginRepository] Load plugins
2024.06.10 21:12:33 INFO  ce[][o.s.c.c.ComputeEngineContainerImpl] Running Community edition
2024.06.10 21:12:33 INFO  ce[][o.s.ce.app.CeServer] Compute Engine is started
2024.06.10 21:12:33 INFO  app[][o.s.a.SchedulerImpl] Process[ce] is up
2024.06.10 21:12:33 INFO  app[][o.s.a.SchedulerImpl] SonarQube is operational
```

I also tried connecting to the Kubernetes service locally using kubectl port-forward, to check whether the service itself is fine, but unfortunately I got a 404 error there as well.
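The port-forward I used looked roughly like this (the service name depends on the Helm release name, so treat it as a placeholder):

```bash
# Forward the in-cluster SonarQube service to localhost
kubectl port-forward svc/sonarqube-sonarqube 9000:9000 -n sonarqube
# In a second terminal, check the status endpoint directly
curl -i http://localhost:9000/api/system/status
```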

Hi,

What does "unhealthy backends" mean here, exactly?

Also, a 404 hints that the server is up and running and you’re just trying a bad path. In fact, you’ve set a web context, which means that instead of accessing SonarQube at http://domainOrIp:9000 you should be hitting http://domainOrIp:9000/<company.com>. Are you?
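For example, a quick check of both paths should show which one actually answers (host, port, and context here are placeholders mirroring your settings):

```bash
# Root path: a 404 here is expected when a non-root web context is configured
curl -i http://domainOrIp:9000/api/system/status
# The same endpoint under the configured web context
curl -i http://domainOrIp:9000/<company.com>/api/system/status
```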

Ann

Hi,

Thanks for responding to my post! I’ve changed the web context to an empty string and updated the annotations for the GKE ingress as shown below.

```yaml
sonarWebContext: ""

ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: <company.com>
      # Different clouds or configurations might need /* as the default path
      path: /
      # For additional control over serviceName and servicePort
      #serviceName: someService
      #servicePort: somePort
      #pathType: Prefix
  #ingressClassName: "gce-internal"
  annotations:
   kubernetes.io/ingress.class: "gce-internal"
   ingress.gcp.kubernetes.io/pre-shared-cert: "sonarqube-cert"
   kubernetes.io/ingress.regional-static-ip-name: "sonarqube"
   kubernetes.io/ingress.allow-http: "false"
```

After these adjustments the ingress is up, and when I access the URL https://<company.com>, I get a webpage with nothing but “loading” as content. I also checked the container logs under /opt/sonarqube/logs to see the requests coming in, and I observed liveness probe failures. Could you please help me?
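For reference, in addition to the access log below, this is how I’m inspecting the ingress and its backends on the GKE side (resource and namespace names are placeholders):

```bash
# Show the ingress, its backends, and their reported health
kubectl describe ingress sonarqube -n sonarqube
# Recent events often explain unhealthy backends
kubectl get events -n sonarqube --sort-by=.lastTimestamp | tail -n 20
```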



```
10.24.227.1 - - [12/Jun/2024:16:11:50 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "1a4e6f68-9b96-4005-b7d8-b3072a50fc60" 14
10.24.227.1 - - [12/Jun/2024:16:11:51 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "0303d1b0-849a-42ce-b2ce-200a237fd962" 14
10.24.227.1 - - [12/Jun/2024:16:12:05 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "4afe397a-69f6-466d-9cef-a9a63a62f6a7" 15
10.24.227.1 - - [12/Jun/2024:16:12:05 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "429f5696-f671-4529-bda5-ae8e7c128f3e" 13
10.24.227.1 - - [12/Jun/2024:16:12:06 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "543ffe26-608c-4bfd-a280-f1416c3f859e" 15
127.0.0.1 - - [12/Jun/2024:16:12:09 +0000] "GET /api/system/status HTTP/1.1" 200 - "-" "Wget/1.21.2" "4b43dc9e-4921-499a-99fc-b1b9fe853b5a" 21
127.0.0.1 - - [12/Jun/2024:16:12:09 +0000] "GET /api/system/liveness HTTP/1.1" 204 - "-" "Wget/1.21.2" "cb46fe32-1738-4f65-aa56-b77dc40b16fc" 55
10.24.227.1 - - [12/Jun/2024:16:12:20 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "2fc42c93-c21d-4006-a53b-15d1422027a7" 14
10.24.227.1 - - [12/Jun/2024:16:12:20 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "8b3ffe67-1741-4f31-ae2a-a6ae803b4632" 14
10.24.227.1 - - [12/Jun/2024:16:12:21 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "dab2ac07-17e5-4d7f-af66-bb5c3a856763" 14
10.24.227.1 - - [12/Jun/2024:16:12:35 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "ac03e808-6dfa-4485-9a0a-5b6d776e3bac" 14
10.24.227.1 - - [12/Jun/2024:16:12:35 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "46c9e7d8-c735-4ac6-89bf-8316fe5bd44b" 17
10.24.227.1 - - [12/Jun/2024:16:12:36 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "e9f915cc-772f-4c75-93f8-2ab6d34fa8ff" 14
127.0.0.1 - - [12/Jun/2024:16:12:39 +0000] "GET /api/system/status HTTP/1.1" 200 - "-" "Wget/1.21.2" "c5f0075a-029d-4ce4-9742-c1455a32d3d9" 7
127.0.0.1 - - [12/Jun/2024:16:12:39 +0000] "GET /api/system/liveness HTTP/1.1" 204 - "-" "Wget/1.21.2" "3062f91f-d3b1-4216-8092-48fdcceff2ab" 15
10.24.227.1 - - [12/Jun/2024:16:12:50 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "25eec4f8-b6d6-4284-8e7b-098a47048780" 16
10.24.227.1 - - [12/Jun/2024:16:12:50 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "e7d9299f-dd39-42b4-bde5-a8ddd98f61f0" 14
10.24.227.1 - - [12/Jun/2024:16:12:51 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "dd058744-41bc-42dd-a1f8-1cbd3a02f50d" 16
10.24.227.1 - - [12/Jun/2024:16:13:03 +0000] "GET / HTTP/1.1" 200 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 Edg/125.0.0.0" "2eaea998-edd5-4119-9bb3-e45867c87fc8" 15
10.24.227.1 - - [12/Jun/2024:16:13:05 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "6424fc4e-f276-4596-82fe-e9edd49c7ac5" 14
10.24.227.1 - - [12/Jun/2024:16:13:05 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "22b72283-3e18-4081-9bd7-a75ec35ed4c7" 14
10.24.227.1 - - [12/Jun/2024:16:13:06 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "36994998-923f-4597-90ea-528c2e8beced" 13
127.0.0.1 - - [12/Jun/2024:16:13:09 +0000] "GET /api/system/status HTTP/1.1" 200 - "-" "Wget/1.21.2" "3a4cfaea-a035-47e6-95c7-77c9735d1227" 9
127.0.0.1 - - [12/Jun/2024:16:13:09 +0000] "GET /api/system/liveness HTTP/1.1" 204 - "-" "Wget/1.21.2" "ab359432-752c-424b-b4a4-603e7ced276c" 15
10.24.227.1 - - [12/Jun/2024:16:13:20 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "c9139289-cdd3-4019-a0aa-01b429223757" 15
10.24.227.1 - - [12/Jun/2024:16:13:20 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "33f4eafd-47e2-4081-9cb2-6cf215971bee" 13
10.24.227.1 - - [12/Jun/2024:16:13:21 +0000] "GET / HTTP/1.1" 200 - "-" "GoogleHC/1.0" "b574b053-57aa-4a73-a079-28b877809e67" 15
```

The probe configuration is defined as follows:


```yaml
readinessProbe:
  exec: 
    command:
    - sh
    - -c
    - | 
      #!/bin/bash
      # A Sonarqube container is considered ready if the status is UP, DB_MIGRATION_NEEDED or DB_MIGRATION_RUNNING
      # The migration statuses are included to prevent the pod from being killed while SonarQube is upgrading the database.
      if wget --no-proxy -qO- http://localhost:{{ .Values.service.internalPort }}{{ .Values.readinessProbe.sonarWebContext | default (include "sonarqube.webcontext" .) }}api/system/status | grep -q -e '"status":"UP"' -e '"status":"DB_MIGRATION_NEEDED"' -e '"status":"DB_MIGRATION_RUNNING"'; then
        exit 0
      fi
      exit 1
  initialDelaySeconds: 300
  periodSeconds: 200
  failureThreshold: 20
  # Note that timeoutSeconds was not respected before Kubernetes 1.20 for exec probes
  timeoutSeconds: 1
  # If an ingress *path* other than the root (/) is defined, it should be reflected here
  # A trailing "/" must be included
  # deprecated please use sonarWebContext at the value top level
  # sonarWebContext: /

livenessProbe:
  exec: 
    command: 
    - sh
    - -c
    - | 
      wget --no-proxy --quiet -O /dev/null --timeout={{ .Values.livenessProbe.timeoutSeconds }} --header="X-Sonar-Passcode: $SONAR_WEB_SYSTEMPASSCODE" "http://localhost:{{ .Values.service.internalPort }}{{ .Values.livenessProbe.sonarWebContext | default (include "sonarqube.webcontext" .) }}api/system/liveness"

  initialDelaySeconds: 300
  periodSeconds: 200
  failureThreshold: 20
  # Note that timeoutSeconds was not respected before Kubernetes 1.20 for exec probes
  timeoutSeconds: 1
  # If an ingress *path* other than the root (/) is defined, it should be reflected here
  # A trailing "/" must be included
  # deprecated please use sonarWebContext at the value top level
  # sonarWebContext: /

startupProbe:
  initialDelaySeconds: 120
  periodSeconds: 60
  failureThreshold: 240
  # Note that timeoutSeconds was not respected before Kubernetes 1.20 for exec probes
  timeoutSeconds: 1
  # If an ingress *path* other than the root (/) is defined, it should be reflected here
  # A trailing "/" must be included
  # deprecated please use sonarWebContext at the value top level
  # sonarWebContext: /
```

Hi,

Have you also adjusted the SonarQube port to 80? By default it’s 9000.

That sounds like a JS problem. What does your browser console say?

Ann

I set both the internal and external ports to 9000.

The browser console shows:

Failed to load resource: the server responded with a status of 404 () for several files, such as .css and .js files.

Hi,

Can you provide one of those failing resource URLs?

Ann

These are the URLs:

https://<company.com>/js/outKLOTOZX4.js

https://<company.com>/js/outU5ZMYZKI.css

Hi,

Thanks for that.

There’s a problem in 10.5 (fixed in the upcoming 10.6) with some resources when a context is used. But you said you changed the web context to an empty string, so that shouldn’t be the problem.

Going back to your updated settings, though, I’m not sure that’s quite what you’ve done. Instead of setting it to an empty string (""), can you comment it out and try again?
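That is, leave the key commented out entirely rather than set to an empty value, for example:

```yaml
# will be used as default for ingress path and probes path, injected as SONAR_WEB_CONTEXT
# sonarWebContext: ""
```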

Thx,
Ann

I’m getting the error below. It looks like the value is required.

```
Error: INSTALLATION FAILED: template: sonarqube/templates/sonarqube-sts.yaml:119:15: executing "sonarqube/templates/sonarqube-sts.yaml" at <include "sonarqube.combined_env" .>: error calling include: template: sonarqube/templates/_helpers.tpl:219:81: executing "sonarqube.combined_env" at <include "sonarqube.webcontext" .>: error calling include: template: sonarqube/templates/_helpers.tpl:199:26: executing "sonarqube.webcontext" at <$tempWebcontext>: invalid value; expected string
```

Could you please let me know which image version should work with sonarqube.webcontext = " "?
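In case it helps with debugging, the failing render can also be reproduced locally without installing anything, along these lines (release name and chart reference are placeholders):

```bash
# Render the chart templates locally to see where the web context helper fails
helm template sonarqube sonarqube/sonarqube -f values.yaml --debug
```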

Hi,

I’m a bit out of my depth. I’ve flagged this for more expert eyes.

Ann

Hi @dev3.

Can you share the most up-to-date version of your values.yaml, obviously redacting any sensitive information?

For reference, I just deployed a local SonarQube Community Edition using this values.yaml:

```yaml
jwtSecret: "dZ0EB0KxnF++nr5+4vfTCaun/eWbv6gOoXodiAMqcFo="

deploymentType: "Deployment"

edition: "community"

sonarWebContext: /

ingress-nginx:
  enabled: true

postgresql:
  enabled: true
```

Also, please note that my sonarWebContext is not empty/null. It needs to be a string, even if it is an empty string ("").
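In YAML terms, the difference looks like this (a minimal illustration):

```yaml
# Valid: the value is an empty string
sonarWebContext: ""

# Invalid for this chart: a bare key is parsed as YAML null, not a string,
# which can produce the "invalid value; expected string" template error above
# sonarWebContext:
```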

This is the values.yaml file I’m using. I’ve been using an empty string as the sonar web context, but I’m still seeing the SonarQube page stuck on “loading”.

```yaml
# Default values for sonarqube.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# If the deployment Type is set to Deployment sonarqube is deployed as a replica set.
deploymentType: "StatefulSet"

# There should not be more than 1 sonarqube instance connected to the same database. Please set this value to 1 or 0 (in case you need to scale down programmatically).
replicaCount: 1

# How many revisions to retain (Deployment ReplicaSets or StatefulSets)
revisionHistoryLimit: 10

# This will use the default deployment strategy unless it is overridden
deploymentStrategy: {}
# Uncomment this to scheduler pods on priority
# priorityClassName: "high-priority"

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## Is this deployment for OpenShift? If so, we help with SCCs
OpenShift:
  enabled: false
  createSCC: true

edition: "community"

image:
  repository: sonarqube
  tag: 10.5.0-{{ .Values.edition }}
  pullPolicy: IfNotPresent
  # If using a private repository, the imagePullSecrets to use
  # pullSecrets:
  #   - name: my-repo-secret

# Set security context for sonarqube pod
securityContext:
  fsGroup: 0

# Set security context for sonarqube container
containerSecurityContext:
  # Sonarqube dockerfile creates sonarqube user as UID and GID 0
  # Those default are used to match pod security standard restricted as least privileged approach
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  runAsUser: 1000
  runAsGroup: 0
  seccompProfile:
    type: RuntimeDefault
  capabilities:
    drop: ["ALL"]

# Settings to configure elasticsearch host requirements
elasticsearch:
  # DEPRECATED: Use initSysctl.enabled instead
  configureNode: false
  bootstrapChecks: true

service:
  type: ClusterIP
  externalPort: 9000
  internalPort: 9000
  annotations:
    #networking.gke.io/load-balancer-type: "Internal"
    cloud.google.com/neg: '{"ingress": true}'
caCerts:
  enabled: false
# Optionally create Network Policies
networkPolicy:
  enabled: false

  # If you plan on using the jmx exporter, you need to define where the traffic is coming from
  prometheusNamespace: "monitoring"

  # If you are using an external database and enable network Policies to be created
  # you will need to explicitly allow egress traffic to your database
  # expects https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#networkpolicyspec-v1-networking-k8s-io
  # additionalNetworkPolicys:

# will be used as default for ingress path and probes path, will be injected in .Values.env as SONAR_WEB_CONTEXT
# if .Values.env.SONAR_WEB_CONTEXT is set, this value will be ignored
sonarWebContext: ""

# (DEPRECATED) please use ingress-nginx instead
# nginx:
#   enabled: false

# Install the nginx ingress helm chart
ingress-nginx:
  enabled: false

  # You can add here any values from the official nginx ingress chart
  # controller:
  #   replicaCount: 3

ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: <company.com>
      # Different clouds or configurations might need /* as the default path
      path: /
      # For additional control over serviceName and servicePort
      #serviceName: someService
      #servicePort: somePort
      #pathType: Prefix
  #ingressClassName: "gce-internal"
  annotations:
   kubernetes.io/ingress.class: "gce-internal"
   ingress.gcp.kubernetes.io/pre-shared-cert: "sonarqube-cert"
   kubernetes.io/ingress.regional-static-ip-name: "sonarqube"
   kubernetes.io/ingress.allow-http: "false"

  # Set the ingressClassName on the ingress record
  # ingressClassName: nginx

# Additional labels for Ingress manifest file
  # labels:
  #  traffic-type: external
  #  traffic-type: internal
  tls: []
  # Secrets must be manually created in the namespace. To generate a self-signed certificate (and private key) and then create the secret in the cluster please refer to official documentation available at https://kubernetes.github.io/ingress-nginx/user-guide/tls/#tls-secrets
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local

route:
  enabled: false
  host: ""
  # Add tls section to secure traffic. TODO: extend this section with other secure route settings
  # Comment this out if you want plain http route created.
  tls:
    termination: edge

  annotations: {}
  # See Openshift/OKD route annotation
  # https://docs.openshift.com/container-platform/4.10/networking/routes/route-configuration.html#nw-route-specific-annotations_route-configuration
  # haproxy.router.openshift.io/timeout: 1m

  # Additional labels for Route manifest file
  # labels:
  #  external: 'true'

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
# taint a node with the following command to mark it as not schedulable for new pods
# kubectl taint nodes <node> sonarqube=true:NoSchedule
# The following statement will tolerate this taint and as such reserve a node for sonarqube
tolerations:
  - key: "app"
    value: "sonarcube"
    effect: "NoSchedule"

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
# add a label to a node with the following command.
# kubectl label node <node> sonarqube=true
nodeSelector: {}
#  sonarqube: "true"

# hostAliases allows the modification of the hosts file inside a container
hostAliases: []
# - ip: "192.168.1.10"
#   hostnames:
#   - "example.com"
#   - "www.example.com"

readinessProbe:
  exec: 
    command:
    - sh
    - -c
    - | 
      #!/bin/bash
      # A Sonarqube container is considered ready if the status is UP, DB_MIGRATION_NEEDED or DB_MIGRATION_RUNNING
      # The migration statuses are included to prevent the pod from being killed while SonarQube is upgrading the database.
      if wget --no-proxy -qO- http://localhost:{{ .Values.service.internalPort }}{{ .Values.readinessProbe.sonarWebContext | default (include "sonarqube.webcontext" .) }}api/system/status | grep -q -e '"status":"UP"' -e '"status":"DB_MIGRATION_NEEDED"' -e '"status":"DB_MIGRATION_RUNNING"'; then
        exit 0
      fi
      exit 1
  initialDelaySeconds: 300
  periodSeconds: 200
  failureThreshold: 20
  # Note that timeoutSeconds was not respected before Kubernetes 1.20 for exec probes
  timeoutSeconds: 1
  # If an ingress *path* other than the root (/) is defined, it should be reflected here
  # A trailing "/" must be included
  # deprecated please use sonarWebContext at the value top level
  # sonarWebContext: /

livenessProbe:
  exec: 
    command: 
    - sh
    - -c
    - | 
      wget --no-proxy --quiet -O /dev/null --timeout={{ .Values.livenessProbe.timeoutSeconds }} --header="X-Sonar-Passcode: $SONAR_WEB_SYSTEMPASSCODE" "http://localhost:{{ .Values.service.internalPort }}{{ .Values.livenessProbe.sonarWebContext | default (include "sonarqube.webcontext" .) }}api/system/liveness"

  initialDelaySeconds: 300
  periodSeconds: 200
  failureThreshold: 20
  # Note that timeoutSeconds was not respected before Kubernetes 1.20 for exec probes
  timeoutSeconds: 1
  # If an ingress *path* other than the root (/) is defined, it should be reflected here
  # A trailing "/" must be included
  # deprecated please use sonarWebContext at the value top level
  # sonarWebContext: /

startupProbe:
  initialDelaySeconds: 120
  periodSeconds: 60
  failureThreshold: 240
  # Note that timeoutSeconds was not respected before Kubernetes 1.20 for exec probes
  timeoutSeconds: 1
  # If an ingress *path* other than the root (/) is defined, it should be reflected here
  # A trailing "/" must be included
  # deprecated please use sonarWebContext at the value top level
  # sonarWebContext: /

initContainers:
  # image: busybox:1.36
  # We allow the init containers to have a separate security context declaration because
  # the initContainer may not require the same as SonarQube.
  # Those default are used to match pod security standard restricted as least privileged approach
  securityContext:
    allowPrivilegeEscalation: false
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 0
    seccompProfile:
      type: RuntimeDefault
    capabilities:
      drop: ["ALL"]
  # We allow the init containers to have a separate resources declaration because
  # the initContainer does not take as much resources.
  resources: {}

# Extra init containers to e.g. download required artifacts
extraInitContainers: {}

## Array of extra containers to run alongside the sonarqube container
##
## Example:
## - name: myapp-container
##   image: busybox
##   command: ['sh', '-c', 'echo Hello && sleep 3600']
##
extraContainers: []


initSysctl:
  enabled: true
  vmMaxMapCount: 524288
  fsFileMax: 131072
  nofile: 131072
  nproc: 8192
  # image: busybox:1.36
  securityContext:
    # Compatible with podSecurity standard privileged
    privileged: true
    # if run without root permissions, error "sysctl: permission denied on key xxx, ignoring"
    runAsUser: 0
  # resources: {}

# This should not be required anymore, used to chown/chmod folder created by faulty CSI driver that are not applying properly POSIX fsgroup.
initFs:
  enabled: true
  # Image: busybox:1.36
  # Compatible with podSecurity standard baseline.
  securityContext:
    privileged: false
    runAsNonRoot: false
    runAsUser: 0
    runAsGroup: 0
    seccompProfile:
      type: RuntimeDefault
    capabilities:
      drop: ["ALL"]
      add: ["CHOWN"]

prometheusExporter:
  enabled: false
  # jmx_prometheus_javaagent version to download from Maven Central
  version: "0.17.2"
  # Alternative full download URL for the jmx_prometheus_javaagent.jar (overrides prometheusExporter.version)
  # downloadURL: ""
  # if you need to ignore TLS certificates for whatever reason enable the following flag
  noCheckCertificate: false

  # Ports for the jmx prometheus agent to export metrics at
  webBeanPort: 8000
  ceBeanPort: 8001

  config:
    rules:
      - pattern: ".*"
  # Overrides config for the CE process Prometheus exporter (by default, the same rules are used for both the Web and CE processes).
  # ceConfig:
  #   rules:
  #     - pattern: ".*"
  # image: curlimages/curl:8.2.1
  # For use behind a corporate proxy when downloading prometheus
  # httpProxy: ""
  # httpsProxy: ""
  # noProxy: ""
  # Reuse default initcontainers.securityContext that match restricted pod security standard
  # securityContext: {}

prometheusMonitoring:
  # Generate a Prometheus Pod Monitor (https://github.com/coreos/prometheus-operator)
  #
  podMonitor:
    # Create PodMonitor Resource for Prometheus scraping
    enabled: false
    # (DEPRECATED) Specify a custom namespace where the PodMonitor will be created.
    # This value should not be set, as the PodMonitor's namespace has to match the Release Namespace.
    # namespace: "default"
    # Specify the interval how often metrics should be scraped
    interval: 30s
    # Specify the timeout after a scrape is ended
    # scrapeTimeout: ""
    # Name of the label on target services that prometheus uses as job name
    # jobLabel: ""

# List of plugins to install.
# For example:
# plugins:
#  install:
#    - "https://github.com/AmadeusITGroup/sonar-stash/releases/download/1.3.0/sonar-stash-plugin-1.3.0.jar"
#    - "https://github.com/SonarSource/sonar-ldap/releases/download/2.2-RC3/sonar-ldap-plugin-2.2.0.601.jar"
#
plugins:
  install: []

  # For use behind a corporate proxy when downloading plugins
  # httpProxy: ""
  # httpsProxy: ""
  # noProxy: ""

  # image: curlimages/curl:8.2.1
  # resources: {}

  # .netrc secret file with a key "netrc" to use basic auth while downloading plugins
  # netrcCreds: ""

  # Set to true to not validate the server's certificate to download plugin
  noCheckCertificate: false
  # Reuse default initcontainers.securityContext that match restricted pod security standard
  # securityContext: {}

## (DEPRECATED) The following value sets SONAR_WEB_JAVAOPTS (e.g., jvmOpts: "-Djava.net.preferIPv4Stack=true"). However, this is deprecated, please set SONAR_WEB_JAVAOPTS or sonar.web.javaOpts directly instead.
jvmOpts: ""

## (DEPRECATED) The following value sets SONAR_CE_JAVAOPTS. However, this is deprecated, please set SONAR_CE_JAVAOPTS or sonar.ce.javaOpts directly instead.
jvmCeOpts: ""

## a monitoring passcode needs to be defined in order to get reasonable probe results
# not setting the monitoring passcode will result in a deployment that will never be ready
monitoringPasscode: "define_it"
# Alternatively, you can define the passcode loading it from an existing secret specifying the right key
# monitoringPasscodeSecretName: "pass-secret-name"
# monitoringPasscodeSecretKey: "pass-key"

## Environment variables to attach to the pods
##
# env:
#   # If you use a different ingress path from /, you have to add it here as the value of SONAR_WEB_CONTEXT
#   - name: SONAR_WEB_CONTEXT
#     value: /sonarqube
#   - name: VARIABLE
#     value: my-value

# Set annotations for pods
annotations: {}

## We usually don't make specific resource recommendations, as they are heavily dependent on
## the usage of SonarQube and the surrounding infrastructure.
## Those default are based on the default Web -Xmx1G -Xms128m and CE -Xmx2G -Xms128m and Search -Xmx2G -Xms2G settings of SQ sub processes
## Adjust these values to your needs, you can find more details on the main README of the chart.
resources:
  limits:
    cpu: 2000m
    memory: 4096M
    ephemeral-storage: 512000M
  requests:
    cpu: 400m
    memory: 2048M
    ephemeral-storage: 1536M

persistence:
  enabled: true
  ## Set annotations on pvc
  

  ## Specify an existing volume claim instead of creating a new one.
  ## When using this option all following options like storageClass, accessMode and size are ignored.
  existingClaim: sonarqube-pvc

  volumes:
  - name: certs-volume
    secret:
      secretName: certs-secret
  mounts:
  - name: certs-volume
    mountPath: /tmp/custom-certs

  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  

  ## Specify extra volumes. Refer to ".spec.volumes" specification : https://kubernetes.io/fr/docs/concepts/storage/volumes/
  
  ## Specify extra mounts. Refer to ".spec.containers.volumeMounts" specification : https://kubernetes.io/fr/docs/concepts/storage/volumes/


# In case you want to specify different resources for emptyDir than {}
emptyDir: {}
  # Example of resources that might be used:
  # medium: Memory
  # sizeLimit: 16Mi

# A custom sonar.properties file can be provided via dictionary.
# For example:
# sonarProperties:
#   sonar.forceAuthentication: true
#   sonar.security.realm: LDAP
#   ldap.url: ldaps://organization.com

# Additional sonar properties to load from a secret with a key "secret.properties" (must be a string)
# sonarSecretProperties:

# Kubernetes secret that contains the encryption key for the sonarqube instance.
# The secret must contain the key 'sonar-secret.txt'.
# The 'sonar.secretKeyPath' property will be set automatically.
# sonarSecretKey: "settings-encryption-secret"
postgresql: 
  enabled: false
## Override JDBC values
## for external Databases
jdbcOverwrite:
  # If enable the JDBC Overwrite, make sure to set `postgresql.enabled=false`
  enable: true
  # The JDBC url of the external DB
  jdbcUrl: "jdbc:postgresql://<ip>:5432/sonar?ssl=true&sslmode=verify-ca&sslrootcert=/tmp/custom-certs/server-ca.pem&sslkey=/tmp/custom-certs/client-key.pk8&sslcert=/tmp/custom-certs/client-cert.pem"
  # The DB user that should be used for the JDBC connection
  jdbcUsername: "sonarcube"
  # Use this if you don't mind the DB password getting stored in plain text within the values file
  #jdbcPassword: "sonarPass"
  jdbcSecretName: "sonar"
  jdbcSecretPasswordKey: "password"
  ## Alternatively, use a pre-existing k8s secret containing the DB password
  # jdbcSecretName: "sonarqube-jdbc"
  ## and the secretValueKey of the password found within that secret
  # jdbcSecretPasswordKey: "jdbc-password"

## (DEPRECATED) Configuration values for postgresql dependency
## ref: https://github.com/bitnami/charts/blob/master/bitnami/postgresql/README.md
# Additional labels to add to the pods:
# podLabels:
#   key: value
podLabels: {}
# For compatibility with 8.0 replace by "/opt/sq"
# For compatibility with 8.2, leave the default. They changed it back to /opt/sonarqube
sonarqubeFolder: /opt/sonarqube

tests:
  image: ""
  enabled: true
  resources:
    limits:
      cpu: 500m
      memory: 200M

# For OpenShift set create=true to ensure service account is created.
serviceAccount:
  create: false
  # name:
  # automountToken: false # default
  ## Annotations for the Service Account
  annotations: {}

# extraConfig is used to load Environment Variables from Secrets and ConfigMaps
# which may have been written by other tools, such as external orchestrators.
#
# These Secrets/ConfigMaps are expected to contain Key/Value pairs, such as:
#
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: external-sonarqube-opts
# data:
#   SONARQUBE_JDBC_USERNAME: foo
#   SONARQUBE_JDBC_URL: jdbc:postgresql://db.example.com:5432/sonar
#
# These vars can then be injected into the environment by uncommenting the following:
#
# extraConfig:
#   configmaps:
#     - external-sonarqube-opts

extraConfig:
  secrets: []
  configmaps: []

# account:
# The values can be set to define the current and the (new) custom admin passwords at the startup (the username will remain "admin")
#   adminPassword: admin
#   currentAdminPassword: admin
# The above values can be also provided by a secret that contains "password" and "currentPassword" as keys. You can generate such a secret in your cluster
# using "kubectl create secret generic admin-password-secret-name --from-literal=password=admin --from-literal=currentPassword=admin"
#   adminPasswordSecretName: ""
# # Reuse default initcontainers.securityContext that match restricted pod security standard
# #   securityContext: {}
#   resources:
#     limits:
#       cpu: 100m
#       memory: 128Mi
#     requests:
#       cpu: 100m
#       memory: 128Mi
# curlContainerImage: curlimages/curl:8.2.1
# adminJobAnnotations: {}
# deprecated please use sonarWebContext at the value top level
#   sonarWebContext: /

terminationGracePeriodSeconds: 60
```


May I know if I need to make any changes to the values.yaml file? We are trying to move our existing Enterprise Edition instance to GKE, so we would like to test with Community Edition first and then decide whether to go forward with Enterprise Edition. Please let me know as soon as possible.

Hi @dev3. Sorry for the late reply.

Did you have the same results with the latest 10.6 helm chart?

I don’t have a GCP ingress that I can use for tests, but it shouldn’t matter: did you try to adapt your values.yaml to the sample that I provided? Did you get the same errors?
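If you want to try the 10.6 chart, the upgrade should look roughly like this (release name, namespace, and the exact chart version on your side may differ):

```bash
helm repo update
# List the chart versions available in the SonarSource repository
helm search repo sonarqube/sonarqube --versions | head
# Upgrade the existing release, keeping your values file
helm upgrade sonarqube sonarqube/sonarqube -f values.yaml -n sonarqube
```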