We are using SonarQube 9.4.0-developer, deployed on Kubernetes 1.22 using this (kind of old, I know) Helm chart: charts/charts/sonarqube at master · Oteemo/charts · GitHub, in its version v9.10.3.
The PostgreSQL server is deployed outside of this chart, and we specify a jdbcUrlOverride like this: jdbc:postgresql://postgres.svc.cluster.local/sonar?socketTimeout=1500
We have an upgrade planned. I think we specified a socketTimeout because it was suggested in the default values of the Helm chart we used at the time (charts/values.yaml at master · Oteemo/charts · GitHub); I'll try removing it during the upgrade.
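For reference, a minimal sketch of what the override could look like with the parameter dropped. This assumes jdbcUrlOverride sits at the top level of the chart values, which is worth double-checking against the chart's values.yaml:

```yaml
# Hypothetical values.yaml fragment: same JDBC URL, with socketTimeout removed.
# Key name follows the jdbcUrlOverride mentioned above; placement may differ in the chart.
jdbcUrlOverride: "jdbc:postgresql://postgres.svc.cluster.local/sonar"
```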
I’ll let you know if that solves the issue.
Thank you for the advice!
Please let us know if it changes the rollback behavior!
I think it might: if the socket times out and the connection is closed, SonarQube may consider the session "dirty" and roll it back so that no statement leaks from one use of the DbSession to the next. This is my non-expert opinion from looking into our codebase.
1500 might seem like a lot, but since the default for the PostgreSQL JDBC driver is infinite, it could have an impact.
```
#----- PostgreSQL 9.3 or greater
# By default the schema named "public" is used. It can be overridden with the parameter "currentSchema".
#sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube?currentSchema=my_schema
```
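One detail worth keeping in mind when judging whether 1500 "is a lot": the pgjdbc socketTimeout parameter is expressed in seconds (0 means no timeout), so 1500 is 25 minutes. A tiny sketch to make the unit explicit; the class and helper names here are illustrative, not part of any library:

```java
public class SocketTimeoutCheck {

    // Extract the socketTimeout value (seconds, per the pgjdbc docs)
    // from a JDBC URL's query string; 0 means "no timeout" (the driver default).
    static int socketTimeoutSeconds(String jdbcUrl) {
        int q = jdbcUrl.indexOf('?');
        if (q < 0) {
            return 0;
        }
        for (String kv : jdbcUrl.substring(q + 1).split("&")) {
            String[] parts = kv.split("=", 2);
            if (parts.length == 2 && parts[0].equals("socketTimeout")) {
                return Integer.parseInt(parts[1]);
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        String url = "jdbc:postgresql://postgres.svc.cluster.local/sonar?socketTimeout=1500";
        int seconds = socketTimeoutSeconds(url);
        System.out.println(seconds + " s = " + (seconds / 60) + " min"); // 1500 s = 25 min
    }
}
```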
About the socketTimeout: we are going to investigate a bit more and check whether it's actually useful, or just legacy from the Oteemo chart that we can get rid of.
About the rollback: I checked on our internal SQ instance, and I observe the same behavior: about 10% of the transactions are rolled back. So this seems to be common, and I think there is nothing to worry about on your side. We have an automatic rollback in our DB session mechanism to ensure that commits were not forgotten in the code. I'll spend a bit of time diving into what I observed on our instance, just to make sure this is not hiding anything suspicious.
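To illustrate the safety net described above, here is a minimal sketch of a "rollback on close" session wrapper. All names (Tx, DbSession) are hypothetical stand-ins, not SonarQube's actual API; the point is only the pattern: if the caller never committed, closing the session rolls back so no half-applied statements leak back to the pool.

```java
public class RollbackOnCloseDemo {

    // Hypothetical transaction handle (stand-in for a JDBC connection).
    interface Tx {
        void commit();
        void rollback();
    }

    static class DbSession implements AutoCloseable {
        private final Tx tx;
        private boolean committed = false;

        DbSession(Tx tx) {
            this.tx = tx;
        }

        void commit() {
            tx.commit();
            committed = true;
        }

        // Safety net: if commit() was never called (forgotten, or an exception
        // unwound the stack), close() rolls the transaction back.
        @Override
        public void close() {
            if (!committed) {
                tx.rollback();
            }
        }
    }

    public static void main(String[] args) {
        java.util.List<String> calls = new java.util.ArrayList<>();
        Tx tx = new Tx() {
            public void commit() { calls.add("commit"); }
            public void rollback() { calls.add("rollback"); }
        };

        // Session used without an explicit commit: the safety net kicks in.
        try (DbSession s = new DbSession(tx)) {
            // ... run some statements, "forget" to commit ...
        }
        System.out.println(calls); // [rollback]
    }
}
```

This is also why a baseline rollback percentage on the database is expected with such a design: every read-only use of a session ends in a rollback rather than a commit.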
Hey, I ran an investigation and can confirm that all the rollbacks I observed are expected and harmless.
Still, I pinpointed that many rollbacks are generated by a query run by Compute Engine workers every two seconds (multiplied by the number of workers you have). To limit the noise, I opened a ticket to adjust the approach and reduce the number of rollbacks this mechanism produces on the database. That way, monitoring the rollback % would be meaningful again for DBAs, as the expected rollback % will be much lower.
To close this thread for good, I wanted to mention that we got good results on our internal instance with the fix deployed. The change will be available in SonarQube 10.2, scheduled for release next week.