Must-share information
- which versions are you using: Developer Edition 2025.1
- how is SonarQube deployed: Helm on AWS EKS
- what are you trying to achieve: Memory optimization
- what have you tried so far: default settings, doubling the RAM, larger Xms/Xmx…
The pain: we get frequent memory crashes and errors like this:
2025.01.30 16:33:20 ERROR web[][o.s.s.p.w.RootFilter] Processing of request /api/project_branches/list?project=<project name> failed
java.lang.OutOfMemoryError: Java heap space
Our base Helm values file looks like this (I have anonymised the sensitive bits with angle brackets):
image:
  tag: 2025.1-developer
tolerations:
  - key: "<tag name>"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"
  hosts:
    - name: <sonarqube fqdn>
      path: /
      pathType: Prefix
  tls:
    - secretName: <secret name>
      hosts:
        - <sonarqube fqdn>
postgresql:
  enabled: false
jdbcOverwrite:
  # If enabling the JDBC overwrite, make sure to set `postgresql.enabled=false`
  enabled: true
  # The JDBC URL of the external DB
  jdbcUrl: "jdbc:postgresql://<postgres db fqdn>/<instancename>"
  # The DB user that should be used for the JDBC connection
  jdbcUsername: "<DB username>"
  jdbcSecretName: "<AWS secret password name>"
  ## and the secretValueKey of the password found within that secret
  jdbcSecretPasswordKey: "<AWS secret password name key>"
sonarProperties:
  sonar.forceAuthentication: true
  sonar.security.realm: LDAP
  ldap.url: ldaps://<ldap server fqdn>
  ldap.bindDn: <fully qualified name of account that can connect to LDAP>
  ldap.user.baseDn: <ldap base dn>
  ldap.user.request: (&(objectClass=user)(sAMAccountName={login}))
  ldap.group.baseDn: <ldap base dn of group with access>
  ldap.group.request: (&(objectClass=group)(member={dn}))
  ldap.group.idAttribute: sAMAccountName
# Additional sonar properties to load from a secret with a key "secret.properties" (must be a string)
sonarSecretProperties: sonarqube-secrets-sonarqube
# Plugins
plugins:
  install:
    - https://github.com/dependency-check/dependency-check-sonar-plugin/releases/download/5.0.0/sonar-dependency-check-plugin-5.0.0.jar
    - https://github.com/jborgers/sonar-pmd/releases/download/3.5.1/sonar-pmd-plugin-3.5.1.jar
    - https://github.com/checkstyle/sonar-checkstyle/releases/download/10.17.0/checkstyle-sonar-plugin-10.17.0.jar
From a "--dry-run", the output contains this:
...
- name: sonarqube
  image: sonarqube:2025.1-developer
  imagePullPolicy: IfNotPresent
  ports:
    - name: http
      containerPort: 9000
      protocol: TCP
  resources:
    limits:
      cpu: 800m
      ephemeral-storage: 512000M
      memory: 6144M
    requests:
      cpu: 400m
      ephemeral-storage: 1536M
      memory: 2048M
  env:
    - name: SONAR_HELM_CHART_VERSION
      value: 10.8.1
    - name: SONAR_JDBC_PASSWORD
      valueFrom:
        secretKeyRef:
          name: sonarqube-secrets-postgresql
          key: postgres-password
    - name: SONAR_WEB_SYSTEMPASSCODE
      valueFrom:
        secretKeyRef:
          name: sonarqube-sonarqube-monitoring-passcode
          key: SONAR_WEB_SYSTEMPASSCODE
    - name: SONAR_WEB_CONTEXT
      value: /
    - name: SONAR_WEB_JAVAOPTS
      value: ""
    - name: SONAR_CE_JAVAOPTS
      value: ""
...
One thing we have tried is increasing Xms and Xmx via sonarProperties in the Helm values, with no change in the memory errors:
...
sonarProperties:
  <all the ldap stuff goes here>
  sonar.web.javaOpts: -Xmx4096m -Xms2048m
  sonar.ce.javaOpts: -Xmx4096m -Xms2048m
...
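Since the base dry-run renders SONAR_WEB_JAVAOPTS and SONAR_CE_JAVAOPTS as empty strings, we also wondered whether the heap settings are meant to go into dedicated chart values rather than into sonarProperties. A rough sketch of what we mean (the jvmOpts / jvmCeOpts key names are an assumption on our part, not yet verified against the chart's values.yaml):

# Hypothetical alternative - key names unverified against the chart's values.yaml
jvmOpts: "-Xmx4096m -Xms2048m"    # would presumably render as SONAR_WEB_JAVAOPTS
jvmCeOpts: "-Xmx4096m -Xms2048m"  # would presumably render as SONAR_CE_JAVAOPTS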
We also adjusted the resource settings up and down; most of the time the container will not start when they are changed:
resources:
  limits:
    memory: 12000m
  requests:
    memory: 4096m
Underlying hardware has 32GB of RAM per node.
Maybe the memory adjustment syntax is wrong? Did I miss a comma/semi-colon?
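For reference, this is what we were actually trying to express (a sketch only, assuming Mi / Gi are the right Kubernetes quantity suffixes; as we understand it, a lowercase "m" on memory means milli-units, which may be part of our problem):

resources:
  limits:
    memory: 12Gi   # instead of "12000m"
  requests:
    memory: 4Gi    # instead of "4096m"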
Adjusting Xms and Xmx does take effect, because the --dry-run output shows the new values in play as SONAR_WEB_JAVAOPTS and SONAR_CE_JAVAOPTS.
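For illustration, the relevant part of that rendered output then looks roughly like this (same values as in sonarProperties above, if we are reading the dry-run correctly; not a verbatim paste):

- name: SONAR_WEB_JAVAOPTS
  value: "-Xmx4096m -Xms2048m"
- name: SONAR_CE_JAVAOPTS
  value: "-Xmx4096m -Xms2048m"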
We have in-house Kubernetes experience, but our previous SonarQube server was a VM, so perhaps we just need some tips on Kubernetes / Helm deployment?