'Create initial schema' is attempted twice - docker.io/sonarqube:lts-enterprise

(Relocated from 'Create initial schema' is attempted twice - docker.io/sonarqube:lts-enterprise · Issue #221 · SonarSource/helm-chart-sonarqube · GitHub)

I’m setting up SonarQube 8.x LTS Enterprise on an Azure Kubernetes Service cluster v1.21.7, via Helm v3.7.1. The exact image is docker.io/sonarqube:lts-enterprise from Docker Hub.

I’m finding that the sonarqube container stops early in the setup process. When it starts again, the database already contains content, which causes the second run of the sonarqube container to fail. It reliably stops at the same place on first startup every time I try it. Before each fresh attempt I make sure to remove the PVC, since ‘helm uninstall’ does not do that.
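For reference, the cleanup between attempts looks roughly like this (the release name and namespace here are assumptions from my setup):

```shell
# Uninstall the release; by design this leaves PVCs behind
helm -n sonarqube uninstall sonarqube

# Remove the leftover PersistentVolumeClaims so the next install starts from an empty state
kubectl -n sonarqube delete pvc --all
```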

I’ve tried using both helm-chart-sonarqube/charts/sonarqube at master · SonarSource/helm-chart-sonarqube · GitHub and helm-chart-sonarqube/charts/sonarqube-lts at master · SonarSource/helm-chart-sonarqube · GitHub and run into this issue with both flavours:

Is it normal for the Stopping SonarQube message to appear right after the Create initial schema message?

2022.05.24 16:32:58 TRACE web[][o.s.s.p.PlatformImpl] Safemode configured | time=23ms
2022.05.24 16:32:58 INFO  web[][o.s.s.p.d.m.AutoDbMigration] Automatically perform DB migration on fresh install
2022.05.24 16:32:58 TRACE web[][sql] time=0ms | sql=select version from schema_migrations
2022.05.24 16:32:58 INFO  web[][DbMigrations] Executing DB migrations...
2022.05.24 16:32:58 INFO  web[][DbMigrations] #1 'Create initial schema'...
2022.05.24 16:32:59 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing
2022.05.24 16:33:02 TRACE es[][o.e.c.NodeConnectionsService] ensureConnections: {{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:4
2022.05.24 16:33:02 TRACE es[][o.e.c.NodeConnectionsService] refreshing connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.
2022.05.24 16:33:02 TRACE es[][o.e.c.NodeConnectionsService] refreshed connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0
2022.05.24 16:33:04 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing
2022.05.24 16:33:09 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing
2022.05.24 16:33:10 TRACE web[][sql] time=11946ms | sql=CREATE TABLE active_rule_parameters (id SERIAL NOT NULL,active_rule_id INTEGER NOT NULL,rules_parameter_id 
2022.05.24 16:33:12 TRACE es[][o.e.c.NodeConnectionsService] ensureConnections: {{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:4
2022.05.24 16:33:12 TRACE es[][o.e.c.NodeConnectionsService] refreshing connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.
....
2022.05.24 16:36:08 TRACE web[][sql] time=5ms | sql=CREATE INDEX events_analysis ON events (analysis_uuid)
2022.05.24 16:36:08 TRACE web[][sql] time=5ms | sql=CREATE INDEX events_component_uuid ON events (component_uuid)
2022.05.24 16:36:08 TRACE web[][sql] time=103ms | sql=CREATE TABLE file_sources (id SERIAL NOT NULL,project_uuid VARCHAR (50) NOT NULL,file_uuid VARCHAR (50) NOT NULL,line_hashes TEXT NULL,line_hashes_version INTEGER NULL,data_hash VARCHAR (50) NULL,src_hash VARCHAR (50) NULL,revision VARCHAR (100) NULL,line_count INTEGER NOT NULL,binary_data BYTEA NULL,created_at BIGINT NOT NULL,updated_at BIGINT NOT NULL, CONSTRAINT pk_file_sources PRIMARY KEY (id))
2022.05.24 16:36:08 TRACE web[][sql] time=198ms | sql=CREATE UNIQUE INDEX file_sources_file_uuid ON file_sources (file_uuid)
2022.05.24 16:36:09 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:12 TRACE es[][o.e.c.NodeConnectionsService] ensureConnections: {{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}=ConnectionTarget{discoveryNode={sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}}}
2022.05.24 16:36:12 TRACE es[][o.e.c.NodeConnectionsService] refreshing connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:12 TRACE es[][o.e.c.NodeConnectionsService] refreshed connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:13 TRACE web[][sql] time=4734ms | sql=CREATE INDEX file_sources_project_uuid ON file_sources (project_uuid)
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] starting async refresh
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] register 33 [transport] [cluster:monitor/nodes/stats] []
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] register child connection [local node connection] task [33]
2022.05.24 16:36:14 TRACE es[][o.e.t.T.tracer] [17][cluster:monitor/nodes/stats[n]] sent to [{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}] (timeout: [15s])
2022.05.24 16:36:14 TRACE es[][o.e.t.T.tracer] [17][cluster:monitor/nodes/stats[n]] received request
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] register 34 [transport] [indices:monitor/stats] []
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] register 35 [direct] [cluster:monitor/nodes/stats[n]] []
2022.05.24 16:36:14 TRACE es[][o.e.a.a.i.s.TransportIndicesStatsAction] resolving shards for [indices:monitor/stats] based on cluster state version [2]
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] unregister task for id: 34
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] received indices stats response
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] unregister task for id: 35
2022.05.24 16:36:14 TRACE es[][o.e.t.T.tracer] [17][cluster:monitor/nodes/stats[n]] sent response
2022.05.24 16:36:14 TRACE es[][o.e.t.T.tracer] [17][cluster:monitor/nodes/stats[n]] received response from [{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}]
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] unregister child connection [local node connection] task [33]
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] unregister task for id: 33
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] received node stats response
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] node [vjMyuEn0Q2GWT26yZbcsdw]: most available: total: 123.8gb, available: 95.3gb / least available: total: 123.8gb, available: 95.3gb
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] stats all received, computing cluster info and notifying listeners
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] notifying [org.elasticsearch.node.Node$$Lambda$2245/0x00000001007f4040@4b5c62b9] of new cluster info
2022.05.24 16:36:14 TRACE es[][o.e.c.r.a.DiskThresholdMonitor] processing new cluster info
2022.05.24 16:36:14 TRACE es[][o.e.c.r.a.DiskThresholdMonitor] no reroute required
2022.05.24 16:36:14 TRACE es[][o.e.c.r.a.DiskThresholdMonitor] no auto-release required
2022.05.24 16:36:14 TRACE es[][o.e.c.r.a.DiskThresholdMonitor] marking indices as read-only: [[]]
2022.05.24 16:36:14 TRACE es[][o.e.c.r.a.DiskThresholdMonitor] checkFinished
2022.05.24 16:36:14 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:19 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:20 TRACE web[][sql] time=7199ms | sql=CREATE INDEX file_sources_updated_at ON file_sources (updated_at)
2022.05.24 16:36:22 TRACE es[][o.e.c.NodeConnectionsService] ensureConnections: {{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}=ConnectionTarget{discoveryNode={sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}}}
2022.05.24 16:36:22 TRACE es[][o.e.c.NodeConnectionsService] refreshing connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:22 TRACE es[][o.e.c.NodeConnectionsService] refreshed connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:24 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:29 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:30 TRACE web[][sql] time=6943ms | sql=CREATE TABLE group_roles (id SERIAL NOT NULL,organization_uuid VARCHAR (40) NOT NULL,group_id INTEGER NULL,resource_id INTEGER NULL,role VARCHAR (64) NOT NULL, CONSTRAINT pk_group_roles PRIMARY KEY (id))
2022.05.24 16:36:32 TRACE es[][o.e.c.NodeConnectionsService] ensureConnections: {{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}=ConnectionTarget{discoveryNode={sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}}}
2022.05.24 16:36:32 TRACE es[][o.e.c.NodeConnectionsService] refreshing connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:32 TRACE es[][o.e.c.NodeConnectionsService] refreshed connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:34 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:39 TRACE web[][sql] time=6939ms | sql=CREATE UNIQUE INDEX uniq_group_roles ON group_roles (organization_uuid, group_id, resource_id, role)
2022.05.24 16:36:39 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:40 DEBUG app[][o.s.a.NodeLifecycle] Shutdown Hook tryToMoveTo from STARTING to STOPPING => true
2022.05.24 16:36:40 INFO  app[][o.s.a.SchedulerImpl] Stopping SonarQube
2022.05.24 16:36:40 DEBUG app[][o.s.a.SchedulerImpl] Stopping [ce]...
2022.05.24 16:36:40 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] Shutdown Hook tryToMoveTo ce from INIT to STOPPING => false
2022.05.24 16:36:40 DEBUG app[][o.s.a.SchedulerImpl] Stopping [web]...
2022.05.24 16:36:40 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] Shutdown Hook tryToMoveTo web from STARTED to STOPPING => true
2022.05.24 16:36:40 TRACE web[][o.s.p.StopWatcher] Stop Watcher triggering stop command
2022.05.24 16:36:40 TRACE web[][o.s.p.Lifecycle] tryToMoveTo from STARTING to STOPPING => true
2022.05.24 16:36:40 INFO  web[][o.s.p.ProcessEntryPoint] Gracefully stopping process

After that, a new pod is created, and this one attempts the same SQL changes, but they don’t work a second time:

2022.05.24 16:09:10 INFO  web[][o.s.s.p.ServerPluginManager] Deploy plugin Vulnerability Rules for PHP / 8.9.0.11439 / 04498e7b336a73db9508145d1e8cc44b9330528d
2022.05.24 16:09:10 INFO  web[][o.s.s.p.ServerPluginManager] Deploy plugin Vulnerability Rules for Python / 8.9.0.11439 / 04498e7b336a73db9508145d1e8cc44b9330528d
2022.05.24 16:09:10 INFO  web[][o.s.s.p.ServerPluginManager] Deploy plugin XML Code Quality and Security / 2.2.0.2973 / 16002945f0725643a7b42f090572795dd8b72a0f
2022.05.24 16:09:12 INFO  web[][o.s.s.p.d.m.c.PostgresCharsetHandler] Verify that database charset supports UTF8
2022.05.24 16:09:12 INFO  web[][o.s.s.p.d.m.AutoDbMigration] Automatically perform DB migration on fresh install
2022.05.24 16:09:12 INFO  web[][DbMigrations] Executing DB migrations...
2022.05.24 16:09:12 INFO  web[][DbMigrations] #1 'Create initial schema'...
2022.05.24 16:09:12 ERROR web[][DbMigrations] #1 'Create initial schema': failure | time=77ms
2022.05.24 16:09:12 ERROR web[][DbMigrations] Executed DB migrations: failure | time=80ms
2022.05.24 16:09:12 ERROR web[][o.s.s.p.PlatformImpl] Web server startup failed
org.sonar.server.platform.db.migration.step.MigrationStepExecutionException: Execution of migration step #1 'Create initial schema' failed
	at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:79)
	at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:67)
	at com.google.common.collect.ImmutableList.forEach(ImmutableList.java:405)
.....
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalStateException: Fail to execute CREATE TABLE active_rule_parameters (id SERIAL NOT NULL,active_rule_id INTEGER NOT NULL,rules_parameter_id INTEGER NOT NULL,value VARCHAR (4000) NULL,rules_parameter_key VARCHAR (128) NULL, CONSTRAINT pk_active_rule_parameters PRIMARY KEY (id))

Before the Stopping SonarQube message I get 15 of these in the Kubernetes events:

0s          Warning   Unhealthy                pod/sonarqube-sonarqube-lts-0                       Startup probe failed: Get "http://10.0.32.240:9000/api/system/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

which I think is OK, since:

startupProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 24
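With those defaults the pod gets roughly 30 + 24 × 10 = 270 seconds to pass the startup probe before kubelet restarts the container, which turned out to be far less than the first-run migration needed. Extending it in values.yaml might look like this (a sketch; only failureThreshold changed from the chart defaults):

```yaml
startupProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
  # 30s delay + 120 x 10s = roughly 20 minutes for the
  # first-run DB migration to finish before kubelet gives up
  failureThreshold: 120
```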

What might cause the DEBUG app[][o.s.a.NodeLifecycle] Shutdown Hook tryToMoveTo from STARTING to STOPPING => true message?

Hello @Martin3254

Could you please provide more information on the helm command you used to run SonarQube, the relevant information from values.yaml, and the database vendor being used?

I’ve tried this with the helm-chart-sonarqube/values.yaml at master · SonarSource/helm-chart-sonarqube · GitHub file totally unchanged; with it changed to use our own registry (the images themselves are the same); and with a values.yaml that contains only our image names, so that it can use our own registry.

helm -n sonarqube install sonarqube . -f values.yaml

The . is helm-chart-sonarqube/charts/sonarqube-lts at master · SonarSource/helm-chart-sonarqube · GitHub, with a ‘charts’ subdirectory created and populated with the two dependency charts.

The database is PostgreSQL as per helm-chart-sonarqube/requirements.yaml at master · SonarSource/helm-chart-sonarqube · GitHub, which is 11.7.0 from the image docker.io/bitnami/postgresql:11.7.0-debian-10-r26.
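For completeness, populating that charts subdirectory can be done with Helm’s dependency machinery rather than by hand (a sketch; run from the sonarqube-lts chart directory, with the release name and namespace as in my earlier command):

```shell
# Fetch the dependency charts declared in requirements.yaml into ./charts
helm dependency update .

# Then install from the local chart directory
helm -n sonarqube install sonarqube . -f values.yaml
```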

I tried it and it worked perfectly for me with lts-enterprise.
As you mentioned, it is not supposed to stop after creating the initial schema.
From the message:
2022.05.24 16:36:40 DEBUG app[][o.s.a.NodeLifecycle] Shutdown Hook tryToMoveTo from STARTING to STOPPING => true
I suspect that your Kubernetes cluster (or something else) has sent a SIGTERM/SIGINT signal to SonarQube, as the shutdown hook mentioned in the logs is only used to bind to external signals (see Code here).
So you may want to look into your setup to check whether anything is causing SQ to shut down (the Kubernetes events are a good place to start).
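A few commands that might help narrow down who sent the signal (a sketch; the pod name sonarqube-sonarqube-lts-0 is taken from the events shown earlier, and the namespace is an assumption):

```shell
# Namespace events in time order; look for Killing / Unhealthy entries around the shutdown
kubectl -n sonarqube get events --sort-by=.metadata.creationTimestamp

# The container's last state; exit code 143 typically means SIGTERM
kubectl -n sonarqube describe pod sonarqube-sonarqube-lts-0

# Logs from the previous, terminated container instance
kubectl -n sonarqube logs sonarqube-sonarqube-lts-0 --previous
```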

Thanks Leo, useful to know it’s not normal behaviour to exit at that point.

I haven’t seen a killing message in the Kubernetes events, but I’ve significantly extended the startup probe duration, and now I see:

2022.05.25 16:40:17 INFO  web[][DbMigrations] #1 'Create initial schema'...
2022.05.25 16:49:29 INFO  web[][DbMigrations] #1 'Create initial schema': success | time=552205ms
2022.05.25 16:49:29 INFO  web[][DbMigrations] #2 'Populate initial schema'...

What’s a normal duration for “Create initial schema”? I expect the roughly nine minutes (552205ms) seen here suggests something is quite wrong with the performance of the postgresql pod. (I haven’t managed to find example startup logs in public web searches.)

I’m turning on postgresql statement logging via values.yaml:

postgresql:
  # Enable to deploy the PostgreSQL chart
  enabled: true

  postgresqlExtendedConf:
    logMinDurationStatement: 0
    logStatement: all

and I’ve also moved the postgresql data volume to the Azure Kubernetes Service ‘managed-premium’ storage class. Now:

2022.05.25 17:01:43 INFO  web[][o.s.s.p.d.m.AutoDbMigration] Automatically perform DB migration on fresh install
2022.05.25 17:01:43 INFO  web[][DbMigrations] Executing DB migrations...
2022.05.25 17:01:43 INFO  web[][DbMigrations] #1 'Create initial schema'...
2022.05.25 17:02:23 INFO  web[][DbMigrations] #1 'Create initial schema': success | time=39625ms
2022.05.25 17:02:23 INFO  web[][DbMigrations] #2 'Populate initial schema'...
2022.05.25 17:02:28 INFO  web[][DbMigrations] #2 'Populate initial schema': success | time=5504ms
2022.05.25 17:02:28 INFO  web[][DbMigrations] #3000 'Set Organizations#guarded column nullable'...
2022.05.25 17:02:28 INFO  web[][DbMigrations] #3000 'Set Organizations#guarded column nullable': success | time=99ms
2022.05.25 17:02:28 INFO  web[][DbMigrations] #3001 'Create ProjectQualityGates table'...
2022.05.25 17:02:29 INFO  web[][DbMigrations] #3001 'Create ProjectQualityGates table': success | time=445ms
2022.05.25 17:02:29 INFO  web[][DbMigrations] #3002 'Make index on DEPRECATED_RULE_KEYS.RULE_ID non unique'...
2022.05.25 17:02:29 INFO  web[][DbMigrations] #3002 'Make index on DEPRECATED_RULE_KEYS.RULE_ID non unique': success | time=303ms
2022.05.25 17:02:29 INFO  web[][DbMigrations] #3003 'Populate ProjectQualityGate table from Properties table'...
2022.05.25 17:02:29 INFO  web[][DbMigrations] #3003 'Populate ProjectQualityGate table from Properties table': success | time=59ms

Which is looking much better.
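For reference, the storage class move can be expressed in the same values.yaml (a sketch; the persistence keys follow the bitnami/postgresql chart used as the dependency, and the size is an example value):

```yaml
postgresql:
  enabled: true
  persistence:
    enabled: true
    # AKS built-in premium-SSD storage class; markedly faster than the default
    storageClass: managed-premium
    size: 20Gi
```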
