Running multiple SonarQube pods


(I was redirected to this forum from here.) I’m running SonarQube 9.3 on Kubernetes, currently deployed as a Deployment with a single pod and an external Postgres database (external = running in the cloud). This works fine, but every time the pod is restarted, SonarQube is unavailable for a couple of minutes, which is not acceptable. So, for testing, I deployed SonarQube using the latest Helm chart as a StatefulSet with replicaCount set to 3. Everything runs perfectly normally: the pods are exposed via a Service, reports are uploaded, there are no problems with user sessions, etc. But I read everywhere that I shouldn’t do this because the database might get corrupted. So my question is: before rolling this out to production (to other teams within my company), I would like to understand the risk, i.e. what exactly can go wrong, under which circumstances, and what “database corruption” actually means with this kind of setup.
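For reference, the test setup described above corresponds roughly to Helm values like the following. This is only a sketch: the key names are assumed from the SonarQube Helm chart and should be checked against your chart version, and the JDBC host is a hypothetical placeholder.

  # values.yaml (sketch, key names assumed from the SonarQube Helm chart)
  deploymentType: StatefulSet   # deploy as a StatefulSet instead of a Deployment
  replicaCount: 3               # three pods behind one Service (unsupported outside DCE)

  # use an external PostgreSQL instead of the bundled one
  postgresql:
    enabled: false
  jdbcOverwrite:
    enable: true
    jdbcUrl: jdbc:postgresql://my-cloud-postgres:5432/sonarqube   # hypothetical host
    jdbcUsername: sonar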


Hey there.

High availability is only supported with the Data Center Edition of SonarQube (for which a Helm chart is also available). We can’t offer any support for trying to connect multiple SonarQube instances to a non-DCE edition.

What precise issues have we seen?

  • Inconsistency in the Elasticsearch indices (with increasing drift over time)
  • Issues with Compute Engine report processing (a report begins to be processed and is then “lost” because one node isn’t aware of the other)

The Data Center Edition includes a lot of logic for coordinating a cluster that isn’t available in non-DCE editions. We really advise against this.


Thanks for the quick response. Some additional questions:

Inconsistency with the Elasticsearch indices

As far as I can see, every pod has its own ES instance. But when I killed one pod, the other somehow rebuilt the index from the Postgres data. Am I getting this wrong?

Issues with Compute Engine report processing

Similarly, once the report was processed and stored in Postgres, it was available.

DCE would probably be overkill for me, as the current single-pod setup works just fine, except for those pod restarts/reschedules. Is there maybe a way to have some sort of cold standby deployment of SonarQube on Kubernetes?


Hey there.

I’m sorry – we can’t provide support for High Availability outside of the Data Center Edition.

we can’t provide support for High Availability outside of the Data Center Edition

Fair enough. Could you maybe comment on the ES indices inconsistency problem (as I understand it, ES is local to the pod, has no persistence, and is rebuilt from Postgres)? I mean, when can it happen, etc.?

Thanks again,

This is correct, unless you enable persistency. This section of the docs should help explain it:


SonarQube comes with a bundled Elasticsearch and, as Elasticsearch is stateful, so is SonarQube. There is an option to persist the Elasticsearch indexes in a Persistent Volume, but with regular killing operations by the Kubernetes Cluster, these indexes can be corrupted. By default, persistency is disabled in the Helm chart.
Enabling persistency decreases the startup time of the SonarQube Pod significantly, but you risk corrupting your Elasticsearch index. You can enable persistency by adding the following to the values.yaml:

  persistence:
    enabled: true

Leaving persistency disabled results in a longer startup time until SonarQube is fully available, but you won’t lose any data as SonarQube will persist all data in the database.
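If you do decide to enable persistency, the chart exposes a few related keys under the same `persistence` block. A minimal sketch follows; the `storageClass` and `size` values are hypothetical, and the key names are assumed from the Helm chart, so verify them against your chart version:

  persistence:
    enabled: true
    storageClass: standard   # hypothetical storage class in your cluster
    size: 5Gi                # hypothetical volume size for the ES indexes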

When we discuss inconsistency in the Elasticsearch indexes when running multiple nodes, it’s in the context of, for example, a new analysis on node1 causing changes to the index on node1, while nothing tells node2 that it should also update the index on node2.