SonarQube 10.4 seems to leak database connections


We recently upgraded our SonarQube installation to Enterprise Edition 10.4 (build 87286); we were running 10.3 before.
It is deployed on a Kubernetes cluster and connected to an AWS RDS PostgreSQL database with the following plugins:

insideapp-oss/sonar-flutter 0.5.0
galexandre/sonar-cobertura latest
checkstyle/sonar-checkstyle latest
stefanrinderle/softvis3d latest
willemsrb/sonar-rci-plugin latest
willemsrb/sonar-jdepend-plugin latest
Inform-Software/sonar-groovy latest
jborgers/sonar-pmd latest

Resulting in, at the moment:

Checkstyle 10.12.5
Cobertura 2.0
Flutter 0.5.0
Groovy 1.8
Rules Compliance Index (RCI) 1.0.2
SoftVis3D 1.3.0
jDepend 1.1.1

Since the upgrade, SonarQube seems to leak DB connections, or at least keeps them alive when it should not.

In the attached screenshot, we reached around 161 open connections while no analysis was in progress.

We updated our configuration to tweak the Hikari pool a bit, but connections still aren’t released as expected.
We were using the default database configuration for the Hikari pool before; here is our current configuration:

	INFO  ce[][o.s.d.DefaultDatabase] Create JDBC data source for jdbc:postgresql://
	DEBUG ce[][c.z.h.HikariConfig] Driver class org.postgresql.Driver found in Thread context class loader jdk.internal.loader.ClassLoaders$AppClassLoader@75700fee
	WARN  ce[][c.z.h.HikariConfig] HikariPool-1 - keepaliveTime is greater than or equal to maxLifetime, disabling it.
	DEBUG ce[][c.z.h.HikariConfig] HikariPool-1 - configuration:
	DEBUG ce[][c.z.h.HikariConfig] allowPoolSuspension.............false
	DEBUG ce[][c.z.h.HikariConfig] autoCommit......................true
	DEBUG ce[][c.z.h.HikariConfig] catalog.........................none
	DEBUG ce[][c.z.h.HikariConfig] connectionInitSql..............."SET standard_conforming_strings=on"
	DEBUG ce[][c.z.h.HikariConfig] connectionTestQuery............."SELECT 1"
	DEBUG ce[][c.z.h.HikariConfig] connectionTimeout...............8000
	DEBUG ce[][c.z.h.HikariConfig] dataSource......................none
	DEBUG ce[][c.z.h.HikariConfig] dataSourceClassName.............none
	DEBUG ce[][c.z.h.HikariConfig] dataSourceJNDI..................none
	DEBUG ce[][c.z.h.HikariConfig] dataSourceProperties............{password=<masked>, user=pgssonp1}
	DEBUG ce[][c.z.h.HikariConfig] driverClassName................."org.postgresql.Driver"
	DEBUG ce[][c.z.h.HikariConfig] exceptionOverrideClassName......none
	DEBUG ce[][c.z.h.HikariConfig] healthCheckProperties...........{}
	DEBUG ce[][c.z.h.HikariConfig] healthCheckRegistry.............none
	DEBUG ce[][c.z.h.HikariConfig] idleTimeout.....................120000
	DEBUG ce[][c.z.h.HikariConfig] initializationFailTimeout.......1
	DEBUG ce[][c.z.h.HikariConfig] isolateInternalQueries..........false
	DEBUG ce[][c.z.h.HikariConfig] jdbcUrl.........................jdbc:postgresql://
	DEBUG ce[][c.z.h.HikariConfig] keepaliveTime...................0
	DEBUG ce[][c.z.h.HikariConfig] leakDetectionThreshold..........179000
	DEBUG ce[][c.z.h.HikariConfig] maxLifetime.....................180000
	DEBUG ce[][c.z.h.HikariConfig] maximumPoolSize.................200
	DEBUG ce[][c.z.h.HikariConfig] metricRegistry..................none
	DEBUG ce[][c.z.h.HikariConfig] metricsTrackerFactory...........none
	DEBUG ce[][c.z.h.HikariConfig] minimumIdle.....................10

It seems to be triggered by some analyses, but it is pretty difficult for us to find out which kind because there are a lot of them.

We sometimes see these errors in the logs:

	DEBUG ce[fd089a0c-fa5a-41dd-9a0a-e6c43c811e96][c.z.h.p.ProxyConnection] HikariPool-1 - Executed rollback on connection org.postgresql.jdbc.PgConnection@612f52f6 due to dirty commit state on close().
	DEBUG ce[fd089a0c-fa5a-41dd-9a0a-e6c43c811e96][c.z.h.pool.PoolBase] HikariPool-1 - Reset (autoCommit) on connection org.postgresql.jdbc.PgConnection@612f52f

Is there anything we could do to solve this issue or track it down?
Kind regards,


Here are some logs where leaks were detected by Hikari:

2024.02.12 15:26:40 DEBUG ce[][c.z.h.p.HikariPool] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@18b35489
2024.02.12 15:26:40 DEBUG ce[][o.postgresql.Driver] Connecting with URL: jdbc:postgresql://
2024.02.12 15:26:40 DEBUG ce[][o.p.c.v.ConnectionFactoryImpl] Trying to establish a protocol version 3 connection to
2024.02.12 15:26:40 DEBUG ce[][o.p.c.v.ConnectionFactoryImpl] Receive Buffer Size is 65,536
2024.02.12 15:26:40 DEBUG ce[][o.p.c.v.ConnectionFactoryImpl] Send Buffer Size is 23,040
2024.02.12 15:26:40 DEBUG ce[][o.p.ssl.MakeSSL] converting regular socket connection to ssl
2024.02.12 15:28:10 DEBUG ce[2c7de3e6-7c57-4974-8f47-4b15077dee8b][c.z.h.p.ProxyConnection] HikariPool-1 - Executed rollback on connection org.postgresql.jdbc.PgConnection@18b35489 due to dirty commit state on close().
2024.02.12 15:28:10 DEBUG ce[2c7de3e6-7c57-4974-8f47-4b15077dee8b][c.z.h.pool.PoolBase] HikariPool-1 - Reset (autoCommit) on connection org.postgresql.jdbc.PgConnection@18b35489
2024.02.12 15:28:10 DEBUG ce[2c7de3e6-7c57-4974-8f47-4b15077dee8b][c.z.h.p.ProxyConnection] HikariPool-1 - Executed rollback on connection org.postgresql.jdbc.PgConnection@18b35489 due to dirty commit state on close().
2024.02.12 15:28:10 DEBUG ce[2c7de3e6-7c57-4974-8f47-4b15077dee8b][c.z.h.pool.PoolBase] HikariPool-1 - Reset (autoCommit) on connection org.postgresql.jdbc.PgConnection@18b35489
2024.02.12 15:31:53 WARN  ce[][c.z.h.p.ProxyLeakTask] Connection leak detection triggered for org.postgresql.jdbc.PgConnection@18b35489 on thread Worker 3 (UUID=cae45d9c-dec8-4fb8-b5fc-f05d67ab4a38) on ce-worker-1, stack trace follows
java.lang.Exception: Apparent connection leak detected
	at com.zaxxer.hikari.HikariDataSource.getConnection(
	at org.sonar.db.profiling.NullConnectionInterceptor.getConnection(
	at org.sonar.db.profiling.ProfiledDataSource.getConnection(
	at org.apache.ibatis.transaction.jdbc.JdbcTransaction.openConnection(
	at org.apache.ibatis.transaction.jdbc.JdbcTransaction.getConnection(
	at org.apache.ibatis.executor.BaseExecutor.getConnection(
	at org.apache.ibatis.executor.ReuseExecutor.prepareStatement(
	at org.apache.ibatis.executor.ReuseExecutor.doQueryCursor(
	at org.apache.ibatis.executor.BaseExecutor.queryCursor(
	at org.apache.ibatis.executor.CachingExecutor.queryCursor(
	at org.apache.ibatis.session.defaults.DefaultSqlSession.selectCursor(
	at org.apache.ibatis.session.defaults.DefaultSqlSession.selectCursor(
	at org.apache.ibatis.binding.MapperMethod.executeForCursor(
	at org.apache.ibatis.binding.MapperMethod.execute(
	at org.apache.ibatis.binding.MapperProxy$PlainMethodInvoker.invoke(
	at org.apache.ibatis.binding.MapperProxy.invoke(
	at jdk.proxy2/jdk.proxy2.$Proxy66.scrollIssuesForIndexation(Unknown Source)
	at org.sonar.db.issue.IssueDao.scrollIssuesForIndexation(
	at org.sonar.server.issue.index.IssueIteratorForSingleChunk.<init>(
	at org.sonar.server.issue.index.IssueIteratorForMultipleChunks.nextChunk(
	at org.sonar.server.issue.index.IssueIndexer.doIndex(
	at org.sonar.server.issue.index.IssueIndexer.indexOnAnalysis(
	at org.sonar.ce.task.projectanalysis.step.IndexAnalysisStep.execute(
	at org.sonar.ce.task.step.ComputationStepExecutor.executeStep(
	at org.sonar.ce.task.step.ComputationStepExecutor.executeSteps(
	at org.sonar.ce.task.step.ComputationStepExecutor.execute(
	at org.sonar.ce.task.projectanalysis.taskprocessor.ReportTaskProcessor.process(
	at org.sonar.ce.taskprocessor.CeWorkerImpl$ExecuteTask.executeTask(
	at org.sonar.ce.taskprocessor.CeWorkerImpl$
	at org.sonar.ce.taskprocessor.CeWorkerImpl.findAndProcessTask(
	at org.sonar.ce.taskprocessor.CeWorkerImpl$TrackRunningState.get(
	at java.base/java.util.concurrent.Executors$ Source)
	at java.base/ Source)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$ Source)
	at java.base/ Source)
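For anyone digging into the stack trace above: it ends in MyBatis’ `selectCursor`, and a MyBatis `Cursor` keeps its JDBC connection checked out until the cursor itself is closed. The following is not SonarQube’s actual code, just an illustrative sketch (with a stand-in `Cursor` class) of why an unclosed cursor looks exactly like a pool leak:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CursorLeakDemo {
    static final AtomicInteger openConnections = new AtomicInteger();

    /** Stand-in for a MyBatis Cursor: checks out a "connection" until closed. */
    static class Cursor implements AutoCloseable {
        Cursor() { openConnections.incrementAndGet(); }
        @Override public void close() { openConnections.decrementAndGet(); }
    }

    /** Leak pattern: the cursor is never closed, so its connection never returns to the pool. */
    static void leaky() {
        Cursor c = new Cursor();
        // ... iterate over c, then forget to close it (or an exception skips the close)
    }

    /** Fix: try-with-resources guarantees close(), even when an exception is thrown. */
    static void safe() {
        try (Cursor c = new Cursor()) {
            // ... iterate over c
        }
    }

    public static void main(String[] args) {
        leaky();
        safe();
        System.out.println("connections still checked out: " + openConnections.get()); // 1, from leaky()
    }
}
```

From Hikari’s point of view, the connection held by `leaky()` is simply never returned, which is exactly what `leakDetectionThreshold` reports.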

Kind regards,

Good morning @lenaing.

Let me start with the obvious: did you perform the post-upgrade steps?

I saw that you enabled leakDetectionThreshold. Are the logs you posted all the logs provided by Hikari, or is there something else?

Could you try these steps to trigger a reindex of your ES?

Hello @lenaing ,

We are also facing the same issue after upgrading from 10.3 to 10.4 on Kubernetes. Have you managed to find any solution?

We also experience this issue, but without Kubernetes.
Since we upgraded last week we have had these DB connection problems. Do you have any solution?

We also have the same issue with MSSQL and 10.4; it appeared some days after the upgrade, not immediately.

Hi Christophe,

Does version 10.4 have some DB-related issue? After upgrading to this version, DB CPU and memory shot up, and after a few hours we started getting “DB connection timeout 8000ms” errors in the SonarQube pod.

However, we increased the storage and IOPS of the DB (Azure Flexible PostgreSQL) and restarted the pod. For the time being it started working again, but we want to know if there is a known issue in this version.

Hi @Sumit_Singh1,

Thanks for your feedback.
I’m not aware of the need for more DB resources. I’ll share this with the team for visibility.


Since the upgrade we have problems with DB connections that we didn’t have in 10.2.
We changed max_connections in postgresql.conf from 100 to 250, and it didn’t help.
from error details:

Error querying database. Cause: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 8000ms.

Please advise.
Thanks in advance,


We could reproduce this issue. We are using the SonarSource Helm chart with a PostgreSQL pod running in the same namespace as the SonarQube StatefulSet.

The issue disappeared temporarily after a restart of the StatefulSet.

Hey all!

FYI, I’ve merged all the threads on this topic into one.


We are also experiencing the same issue after upgrading from 10.3 to 10.4; it started to break a few days after the upgrade. Any suggestions? We are using MS SQL for the DB.

Hi all,

I am a colleague of @lenaing.

After trying to increase the maxActive connections, we found it was not a good option. It did prevent the “Connection is not available, request timed out after 8000ms” errors, because more slots were available before reaching the max, but once a certain number of connections was open it added so much load to the database that it was unsustainable.

We noticed that leaked connections were released/cleaned after 24 hours, and this turned out to be our database’s idle_in_transaction_session_timeout. So we changed it from 24 h to 31 minutes (we have maxLifetime configured to 30 minutes), and now everything is fine because the leaked connections are cleaned up after 31 minutes.
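For reference, the workaround described above can be set at the database level like this (a sketch; the database name `sonarqube` is an assumption, and the 31-minute value mirrors a 30-minute Hikari maxLifetime, so adjust both to your setup — on AWS RDS or Azure Flexible Server, the same parameter can also be set in the server parameter group):

```sql
-- Kill sessions stuck "idle in transaction" after 31 minutes,
-- slightly above Hikari's maxLifetime, so healthy pooled
-- connections are recycled by Hikari before this timeout fires.
ALTER DATABASE sonarqube SET idle_in_transaction_session_timeout = '31min';
```

New sessions pick the setting up on connect; existing sessions keep their old value until they reconnect.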



That’s also why you noticed increased CPU and memory usage: due to the leak, you reach far more database connections than before.


And that’s why the issue disappears temporarily when rebooting: all database connections are flushed.

Nevertheless, please note that database tuning is really just a workaround and not the real issue. :slight_smile:

  • It is true that we did not perform the Performing the post-upgrade steps | SonarQube Docs step (but we don’t see what a vacuum would do about remnant connections).
  • We did not notice any other kind of Hikari logs.
  • SonarQube was restarted and, as it runs in a container, it reindexes its ES on each launch.

We are experiencing the same issue (“Connection is not available, request timed out after 8000ms”) since upgrading to SonarQube 10.4 (Enterprise Edition). We run SonarQube and PostgreSQL as containers based on the docker-compose configuration proposed in the SonarQube docs.
In Azure Pipelines this causes failed pipelines for us, because the “Publish Quality Gate Result” task breaks (until the SonarQube container is restarted).



This is only a band-aid and not a fix for the real problem, but until we get that worked out…

Configure the Elasticsearch storage path to retain and reuse your indices between restarts.
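For the docker-compose setup mentioned earlier in this thread, retaining the indices means persisting SonarQube’s data directory (where Elasticsearch stores its indices) across restarts. A minimal sketch, assuming the standard image layout; the volume name is illustrative:

```yaml
services:
  sonarqube:
    image: sonarqube:10.4-enterprise
    volumes:
      # Persist /opt/sonarqube/data so the ES indices survive a container restart
      - sonarqube_data:/opt/sonarqube/data

volumes:
  sonarqube_data:
```

With this in place, a restart reuses the existing indices instead of triggering a full reindex on every launch.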




Thanks to all of you for reporting this issue; you can follow the investigation here.
Could you share more details about the database connections? For example their status (idle, active, idle in transaction), the source of the connections (ce, web), and their age/last activity if possible. Thanks!
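For those on PostgreSQL, something like the following query could gather those details (a sketch: the database name `sonarqube` is an assumption, and depending on your JDBC URL the `application_name` column may just show the generic driver name rather than ce/web):

```sql
-- Summarize the SonarQube database's connections by state,
-- with how long the oldest one has been in that state.
SELECT state,
       application_name,
       count(*)                   AS connections,
       max(now() - state_change)  AS oldest_in_state,
       max(now() - backend_start) AS oldest_connection
FROM pg_stat_activity
WHERE datname = 'sonarqube'
GROUP BY state, application_name
ORDER BY connections DESC;
```

A large count of `idle in transaction` rows with a high `oldest_in_state` would match the leak behaviour described above.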