Deploying SonarQube on AKS using MSSQL

Versions Used

  • SonarQube Server: sonarqube:lts (Long-Term Support version)
  • Database: Microsoft SQL Server
  • Kubernetes Version: Not specified
  • Scanner / Plugins: Not mentioned

:small_blue_diamond: Deployment Details

  • Deployment Method: Kubernetes (apps/v1 Deployment)
  • Storage: Using Azure File as PVC
  • Database Connection: MS SQL Server on Azure
  • Persistent Volume Claim (PVC): Configured for SonarQube data storage

:small_blue_diamond: Goal

  • Successfully deploy SonarQube on Kubernetes with an MS SQL database
  • Ensure SonarQube connects to the correct database instead of using the default H2 database

:small_blue_diamond: Issues Faced & Troubleshooting Steps

  1. SonarQube Still Uses H2 Instead of MS SQL Server
    • Logs show SonarQube trying to connect to the embedded H2 database (jdbc:h2:tcp://127.0.0.1:9092/sonar) instead of the configured SQL Server.
    • Possible causes:
      • The JDBC driver is missing or not properly mounted
      • SonarQube is not picking up SONARQUBE_JDBC_URL correctly
  2. Permission Issues While Editing Configuration Files
    • Attempted to modify /opt/sonarqube/conf/sonar.properties inside the pod
    • Faced Permission denied errors
    • vi and sed commands did not work due to restricted access
  3. Workarounds Tried
    • Attempted to edit files inside the container but encountered read-only access
    • Suggested using a ConfigMap to override sonar.properties (sketched below)
    • Considered running a privileged debug pod for troubleshooting
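
For reference, the ConfigMap override mentioned above would look roughly like the sketch below. It is only a sketch: the name sonarqube-conf and the connection values are placeholders, and the environment-variable approach discussed in the replies turned out to be the simpler route.

apiVersion: v1
kind: ConfigMap
metadata:
  name: sonarqube-conf          # placeholder name
  namespace: sonarqube
data:
  sonar.properties: |
    sonar.jdbc.url=jdbc:sqlserver://<server>.database.windows.net:1433;databaseName=sonarqube;encrypt=true
    sonar.jdbc.username=<db-user>
    sonar.jdbc.password=<db-password>

# In the SonarQube container, mount only this single file with subPath so the
# rest of /opt/sonarqube/conf is left untouched:
#
#   volumeMounts:
#     - name: sonarqube-conf
#       mountPath: /opt/sonarqube/conf/sonar.properties
#       subPath: sonar.properties
#   volumes:
#     - name: sonarqube-conf
#       configMap:
#         name: sonarqube-conf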

Hey there.

This is not the right environment variable. You should use the SONAR_JDBC_* environment variables.

Please also note that there is no longer an LTS (now called LTA) version of SonarQube Community Build. You should switch to using either the community or the latest tag.
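
In the Deployment, that would look something like this (a minimal sketch: the connection string is a placeholder and the sonarqube-jdbc Secret is a hypothetical way to avoid a plain-text password):

containers:
  - name: sonarqube
    image: sonarqube:community   # or sonarqube:latest
    env:
      - name: SONAR_JDBC_URL
        value: "jdbc:sqlserver://<server>.database.windows.net:1433;databaseName=sonarqube;encrypt=true"
      - name: SONAR_JDBC_USERNAME
        value: "<db-user>"
      - name: SONAR_JDBC_PASSWORD
        valueFrom:
          secretKeyRef:
            name: sonarqube-jdbc   # hypothetical Secret holding the DB password
            key: password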

Hi Colin, after making the above changes in the YAML, I'm getting this error:

Defaulted container "sonarqube" out of: sonarqube, download-jdbc-driver (init)
2025.03.24 09:33:53 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2025.03.24 09:33:53 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:33321]
2025.03.24 09:33:53 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
warning: usage of JAVA_HOME is deprecated, use ES_JAVA_HOME
2025.03.24 09:33:54 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
warning: no-jdk distributions that do not bundle a JDK are deprecated and will be removed in a future release
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
2025.03.24 09:33:56 INFO  es[][o.e.n.Node] version[7.12.1], pid[37], build[default/tar/3186837139b9c6b6d23c3200870651f10d3343b7/2021-04-20T20:56:39.040728659Z], OS[Linux/5.15.0-1081-azure/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/11.0.11/11.0.11+9]
2025.03.24 09:33:56 INFO  es[][o.e.n.Node] JVM home [/opt/java/openjdk]
2025.03.24 09:33:56 INFO  es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/elasticsearch, -Des.path.conf=/opt/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2025.03.24 09:33:56 INFO  es[][o.e.p.PluginsService] loaded module [analysis-common]
2025.03.24 09:33:56 INFO  es[][o.e.p.PluginsService] loaded module [lang-painless]
2025.03.24 09:33:56 INFO  es[][o.e.p.PluginsService] loaded module [parent-join]
2025.03.24 09:33:56 INFO  es[][o.e.p.PluginsService] loaded module [percolator]
2025.03.24 09:33:56 INFO  es[][o.e.p.PluginsService] loaded module [transport-netty4]
2025.03.24 09:33:56 INFO  es[][o.e.p.PluginsService] no plugins loaded
2025.03.24 09:33:56 INFO  es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/opt/sonarqube/data (//crbcjenkins.file.core.windows.net/sonarqube-main)]], net usable_space [99.9tb], net total_space [100tb], types [cifs]
2025.03.24 09:33:56 INFO  es[][o.e.e.NodeEnvironment] heap size [494.9mb], compressed ordinary object pointers [true]
2025.03.24 09:33:59 ERROR es[][o.e.b.ElasticsearchUncaughtExceptionHandler] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: IndexFormatTooNewException[Format version is not supported (resource SimpleFSIndexInput(path="/opt/sonarqube/data/es7/nodes/0/_state/_5zt.cfs") [slice=_5zt.fdt]): 4 (needs to be between 1 and 3)];
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116) ~[elasticsearch-cli-7.12.1.jar:7.12.1]
        at org.elasticsearch.cli.Command.main(Command.java:79) ~[elasticsearch-cli-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:81) ~[elasticsearch-7.12.1.jar:7.12.1]
Caused by: org.elasticsearch.ElasticsearchException: failed to bind service
        at org.elasticsearch.node.Node.<init>(Node.java:744) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.node.Node.<init>(Node.java:278) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:217) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:217) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:397) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.12.1.jar:7.12.1]
        ... 6 more
Caused by: org.apache.lucene.index.IndexFormatTooNewException: Format version is not supported (resource SimpleFSIndexInput(path="/opt/sonarqube/data/es7/nodes/0/_state/_5zt.cfs") [slice=_5zt.fdt]): 4 (needs to be between 1 and 3)
        at org.apache.lucene.codecs.CodecUtil.checkHeaderNoMagic(CodecUtil.java:216) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:198) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:255) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:130) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsReader(CompressingStoredFieldsFormat.java:123) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat.fieldsReader(Lucene87StoredFieldsFormat.java:131) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:127) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:83) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:66) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:720) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:81) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
        at org.elasticsearch.gateway.PersistedClusterStateService.nodeMetadata(PersistedClusterStateService.java:256) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.env.NodeEnvironment.loadNodeMetadata(NodeEnvironment.java:399) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:320) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.node.Node.<init>(Node.java:352) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.node.Node.<init>(Node.java:278) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:217) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:217) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:397) ~[elasticsearch-7.12.1.jar:7.12.1]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.12.1.jar:7.12.1]
        ... 6 more
uncaught exception in thread [main]
ElasticsearchException[failed to bind service]; nested: IndexFormatTooNewException[Format version is not supported (resource SimpleFSIndexInput(path="/opt/sonarqube/data/es7/nodes/0/_state/_5zt.cfs") [slice=_5zt.fdt]): 4 (needs to be between 1 and 3)];
Likely root cause: org.apache.lucene.index.IndexFormatTooNewException: Format version is not supported (resource SimpleFSIndexInput(path="/opt/sonarqube/data/es7/nodes/0/_state/_5zt.cfs") [slice=_5zt.fdt]): 4 (needs to be between 1 and 3)
        at org.apache.lucene.codecs.CodecUtil.checkHeaderNoMagic(CodecUtil.java:216)
        at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:198)
        at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:255)
        at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:130)
        at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsReader(CompressingStoredFieldsFormat.java:123)
        at org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat.fieldsReader(Lucene87StoredFieldsFormat.java:131)
        at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:127)
        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:83)
        at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:66)
        at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58)
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:720)
        at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:81)
        at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
        at org.elasticsearch.gateway.PersistedClusterStateService.nodeMetadata(PersistedClusterStateService.java:256)
        at org.elasticsearch.env.NodeEnvironment.loadNodeMetadata(NodeEnvironment.java:399)
        at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:320)
        at org.elasticsearch.node.Node.<init>(Node.java:352)
        at org.elasticsearch.node.Node.<init>(Node.java:278)
        at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:217)
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:217)
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:397)
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159)
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150)
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75)
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116)
        at org.elasticsearch.cli.Command.main(Command.java:79)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:81)
For complete error details, refer to the log at /opt/sonarqube/logs/sonarqube.log
2025.03.24 09:33:59 WARN  app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 1
2025.03.24 09:33:59 INFO  app[][o.s.a.SchedulerImpl] Process[es] is stopped
2025.03.24 09:33:59 INFO  app[][o.s.a.SchedulerImpl] SonarQube is stopped

Now the pod stops and goes into CrashLoopBackOff. Here is my YAML file as well:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      initContainers:
        - name: download-jdbc-driver
          image: alpine:latest
          command:
            - sh
            - -c
            - |
              wget -O /mnt/mssql-jdbc.jar \
              https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/12.4.2.jre11/mssql-jdbc-12.4.2.jre11.jar
          volumeMounts:
            - name: jdbc-driver
              mountPath: /mnt
      containers:
        - name: sonarqube
          image: sonarqube:8.9.1-community
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
          env:
            - name: SONAR_JDBC_URL
              value: "jdbc:sqlserver://<connectionstring>.database.windows.net:1433;databaseName=sonarqube;encrypt=true;trustServerCertificate=false;loginTimeout=30;"
            - name: SONAR_JDBC_USERNAME
              value: "admin"
            - name: SONAR_JDBC_PASSWORD
              value: "%test"
          volumeMounts:
            - mountPath: "/opt/sonarqube/data"
              name: sonarqube-data
            - mountPath: "/opt/sonarqube/extensions/jdbc-driver"
              name: jdbc-driver
      volumes:
        - name: sonarqube-data
          persistentVolumeClaim:
            claimName: sonarqube-pvc
        - name: jdbc-driver
          emptyDir: {}
      nodeSelector:
        kubernetes.io/os: linux

Well, first you were using sonarqube:lts, now you’ve specified… a much older image.

Why?

I have tried to use image: sonarqube:latest as well, but I still get the same issue.

I assume you’re still getting this error?

In that case, first things first: I'd suggest you drop any data in the /opt/sonarqube/data folder (specifically the es7/es8 directories), along the lines of the sketch below.
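
If you can't get a shell in the pod to delete it, one possible one-off approach is a throwaway init container that wipes the stale Elasticsearch index from the data volume before SonarQube starts (this assumes the sonarqube-data volume name from your Deployment; remove the init container again after one clean start):

initContainers:
  - name: clear-stale-es-index     # one-off cleanup, delete afterwards
    image: alpine:latest
    command:
      - sh
      - -c
      - rm -rf /sonarqube-data/es7 /sonarqube-data/es8
    volumeMounts:
      - name: sonarqube-data
        mountPath: /sonarqube-data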

Next, there is no reason for you to download the Microsoft JDBC Driver. It is bundled with SonarQube.

Overall, it looks like you’ve followed a few bad suggestions (wrong image, downloading the JDBC driver). Can I suggest that you start from scratch (remove any data from any tries you’ve already done), and maybe put AI to the side for now, and focus only on editing the values in the default (and well-documented) values.yaml file to establish DB connectivity?