(Relocated from: 'Create initial schema' is attempted twice - docker.io/sonarqube:lts-enterprise · Issue #221 · SonarSource/helm-chart-sonarqube)
I’m setting up SonarQube 8.x LTS Enterprise on an Azure Kubernetes Service cluster (v1.21.7) via Helm v3.7.1. The exact image is docker.io/sonarqube:lts-enterprise from Docker Hub.
I’m finding the SonarQube container stops early in the setup process, and when it starts again the database already contains content, which causes the second run of the container to fail. It reliably stops at the same place on first startup every time I try. Before each fresh attempt I make sure to remove the PVC, since ‘helm uninstall’ does not do that.
I’ve tried both the charts/sonarqube and charts/sonarqube-lts charts from the SonarSource/helm-chart-sonarqube repository and run into this issue with both flavours:
Is it normal for the ‘Stopping SonarQube’ message to appear after the ‘Create initial schema’ message?
2022.05.24 16:32:58 TRACE web[][o.s.s.p.PlatformImpl] Safemode configured | time=23ms
2022.05.24 16:32:58 INFO web[][o.s.s.p.d.m.AutoDbMigration] Automatically perform DB migration on fresh install
2022.05.24 16:32:58 TRACE web[][sql] time=0ms | sql=select version from schema_migrations
2022.05.24 16:32:58 INFO web[][DbMigrations] Executing DB migrations...
2022.05.24 16:32:58 INFO web[][DbMigrations] #1 'Create initial schema'...
2022.05.24 16:32:59 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing
2022.05.24 16:33:02 TRACE es[][o.e.c.NodeConnectionsService] ensureConnections: {{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:4
2022.05.24 16:33:02 TRACE es[][o.e.c.NodeConnectionsService] refreshing connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.
2022.05.24 16:33:02 TRACE es[][o.e.c.NodeConnectionsService] refreshed connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0
2022.05.24 16:33:04 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing
2022.05.24 16:33:09 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing
2022.05.24 16:33:10 TRACE web[][sql] time=11946ms | sql=CREATE TABLE active_rule_parameters (id SERIAL NOT NULL,active_rule_id INTEGER NOT NULL,rules_parameter_id
2022.05.24 16:33:12 TRACE es[][o.e.c.NodeConnectionsService] ensureConnections: {{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:4
2022.05.24 16:33:12 TRACE es[][o.e.c.NodeConnectionsService] refreshing connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.
....
2022.05.24 16:36:08 TRACE web[][sql] time=5ms | sql=CREATE INDEX events_analysis ON events (analysis_uuid)
2022.05.24 16:36:08 TRACE web[][sql] time=5ms | sql=CREATE INDEX events_component_uuid ON events (component_uuid)
2022.05.24 16:36:08 TRACE web[][sql] time=103ms | sql=CREATE TABLE file_sources (id SERIAL NOT NULL,project_uuid VARCHAR (50) NOT NULL,file_uuid VARCHAR (50) NOT NULL,line_hashes TEXT NULL,line_hashes_version INTEGER NULL,data_hash VARCHAR (50) NULL,src_hash VARCHAR (50) NULL,revision VARCHAR (100) NULL,line_count INTEGER NOT NULL,binary_data BYTEA NULL,created_at BIGINT NOT NULL,updated_at BIGINT NOT NULL, CONSTRAINT pk_file_sources PRIMARY KEY (id))
2022.05.24 16:36:08 TRACE web[][sql] time=198ms | sql=CREATE UNIQUE INDEX file_sources_file_uuid ON file_sources (file_uuid)
2022.05.24 16:36:09 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:12 TRACE es[][o.e.c.NodeConnectionsService] ensureConnections: {{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}=ConnectionTarget{discoveryNode={sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}}}
2022.05.24 16:36:12 TRACE es[][o.e.c.NodeConnectionsService] refreshing connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:12 TRACE es[][o.e.c.NodeConnectionsService] refreshed connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:13 TRACE web[][sql] time=4734ms | sql=CREATE INDEX file_sources_project_uuid ON file_sources (project_uuid)
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] starting async refresh
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] register 33 [transport] [cluster:monitor/nodes/stats] []
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] register child connection [local node connection] task [33]
2022.05.24 16:36:14 TRACE es[][o.e.t.T.tracer] [17][cluster:monitor/nodes/stats[n]] sent to [{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}] (timeout: [15s])
2022.05.24 16:36:14 TRACE es[][o.e.t.T.tracer] [17][cluster:monitor/nodes/stats[n]] received request
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] register 34 [transport] [indices:monitor/stats] []
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] register 35 [direct] [cluster:monitor/nodes/stats[n]] []
2022.05.24 16:36:14 TRACE es[][o.e.a.a.i.s.TransportIndicesStatsAction] resolving shards for [indices:monitor/stats] based on cluster state version [2]
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] unregister task for id: 34
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] received indices stats response
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] unregister task for id: 35
2022.05.24 16:36:14 TRACE es[][o.e.t.T.tracer] [17][cluster:monitor/nodes/stats[n]] sent response
2022.05.24 16:36:14 TRACE es[][o.e.t.T.tracer] [17][cluster:monitor/nodes/stats[n]] received response from [{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}]
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] unregister child connection [local node connection] task [33]
2022.05.24 16:36:14 TRACE es[][o.e.t.TaskManager] unregister task for id: 33
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] received node stats response
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] node [vjMyuEn0Q2GWT26yZbcsdw]: most available: total: 123.8gb, available: 95.3gb / least available: total: 123.8gb, available: 95.3gb
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] stats all received, computing cluster info and notifying listeners
2022.05.24 16:36:14 TRACE es[][o.e.c.InternalClusterInfoService] notifying [org.elasticsearch.node.Node$$Lambda$2245/0x00000001007f4040@4b5c62b9] of new cluster info
2022.05.24 16:36:14 TRACE es[][o.e.c.r.a.DiskThresholdMonitor] processing new cluster info
2022.05.24 16:36:14 TRACE es[][o.e.c.r.a.DiskThresholdMonitor] no reroute required
2022.05.24 16:36:14 TRACE es[][o.e.c.r.a.DiskThresholdMonitor] no auto-release required
2022.05.24 16:36:14 TRACE es[][o.e.c.r.a.DiskThresholdMonitor] marking indices as read-only: [[]]
2022.05.24 16:36:14 TRACE es[][o.e.c.r.a.DiskThresholdMonitor] checkFinished
2022.05.24 16:36:14 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:19 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:20 TRACE web[][sql] time=7199ms | sql=CREATE INDEX file_sources_updated_at ON file_sources (updated_at)
2022.05.24 16:36:22 TRACE es[][o.e.c.NodeConnectionsService] ensureConnections: {{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}=ConnectionTarget{discoveryNode={sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}}}
2022.05.24 16:36:22 TRACE es[][o.e.c.NodeConnectionsService] refreshing connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:22 TRACE es[][o.e.c.NodeConnectionsService] refreshed connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:24 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:29 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:30 TRACE web[][sql] time=6943ms | sql=CREATE TABLE group_roles (id SERIAL NOT NULL,organization_uuid VARCHAR (40) NOT NULL,group_id INTEGER NULL,resource_id INTEGER NULL,role VARCHAR (64) NOT NULL, CONSTRAINT pk_group_roles PRIMARY KEY (id))
2022.05.24 16:36:32 TRACE es[][o.e.c.NodeConnectionsService] ensureConnections: {{sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}=ConnectionTarget{discoveryNode={sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}}}
2022.05.24 16:36:32 TRACE es[][o.e.c.NodeConnectionsService] refreshing connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:32 TRACE es[][o.e.c.NodeConnectionsService] refreshed connection to {sonarqube}{vjMyuEn0Q2GWT26yZbcsdw}{m9b94u_3Rf2Ri-IyrCF32g}{127.0.0.1}{127.0.0.1:46595}{cdfhimrsw}{rack_id=sonarqube}
2022.05.24 16:36:34 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:39 TRACE web[][sql] time=6939ms | sql=CREATE UNIQUE INDEX uniq_group_roles ON group_roles (organization_uuid, group_id, resource_id, role)
2022.05.24 16:36:39 TRACE es[][o.e.i.IndexingMemoryController] total indexing heap bytes used [0b] vs indices.memory.index_buffer_size [204.7mb], currently writing bytes [0b]
2022.05.24 16:36:40 DEBUG app[][o.s.a.NodeLifecycle] Shutdown Hook tryToMoveTo from STARTING to STOPPING => true
2022.05.24 16:36:40 INFO app[][o.s.a.SchedulerImpl] Stopping SonarQube
2022.05.24 16:36:40 DEBUG app[][o.s.a.SchedulerImpl] Stopping [ce]...
2022.05.24 16:36:40 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] Shutdown Hook tryToMoveTo ce from INIT to STOPPING => false
2022.05.24 16:36:40 DEBUG app[][o.s.a.SchedulerImpl] Stopping [web]...
2022.05.24 16:36:40 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] Shutdown Hook tryToMoveTo web from STARTED to STOPPING => true
2022.05.24 16:36:40 TRACE web[][o.s.p.StopWatcher] Stop Watcher triggering stop command
2022.05.24 16:36:40 TRACE web[][o.s.p.Lifecycle] tryToMoveTo from STARTING to STOPPING => true
2022.05.24 16:36:40 INFO web[][o.s.p.ProcessEntryPoint] Gracefully stopping process
A new pod is then created, and it attempts the same SQL changes, but they fail the second time around:
2022.05.24 16:09:10 INFO web[][o.s.s.p.ServerPluginManager] Deploy plugin Vulnerability Rules for PHP / 8.9.0.11439 / 04498e7b336a73db9508145d1e8cc44b9330528d
2022.05.24 16:09:10 INFO web[][o.s.s.p.ServerPluginManager] Deploy plugin Vulnerability Rules for Python / 8.9.0.11439 / 04498e7b336a73db9508145d1e8cc44b9330528d
2022.05.24 16:09:10 INFO web[][o.s.s.p.ServerPluginManager] Deploy plugin XML Code Quality and Security / 2.2.0.2973 / 16002945f0725643a7b42f090572795dd8b72a0f
2022.05.24 16:09:12 INFO web[][o.s.s.p.d.m.c.PostgresCharsetHandler] Verify that database charset supports UTF8
2022.05.24 16:09:12 INFO web[][o.s.s.p.d.m.AutoDbMigration] Automatically perform DB migration on fresh install
2022.05.24 16:09:12 INFO web[][DbMigrations] Executing DB migrations...
2022.05.24 16:09:12 INFO web[][DbMigrations] #1 'Create initial schema'...
2022.05.24 16:09:12 ERROR web[][DbMigrations] #1 'Create initial schema': failure | time=77ms
2022.05.24 16:09:12 ERROR web[][DbMigrations] Executed DB migrations: failure | time=80ms
2022.05.24 16:09:12 ERROR web[][o.s.s.p.PlatformImpl] Web server startup failed
org.sonar.server.platform.db.migration.step.MigrationStepExecutionException: Execution of migration step #1 'Create initial schema' failed
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:79)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:67)
at com.google.common.collect.ImmutableList.forEach(ImmutableList.java:405)
.....
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalStateException: Fail to execute CREATE TABLE active_rule_parameters (id SERIAL NOT NULL,active_rule_id INTEGER NOT NULL,rules_parameter_id INTEGER NOT NULL,value VARCHAR (4000) NULL,rules_parameter_key VARCHAR (128) NULL, CONSTRAINT pk_active_rule_parameters PRIMARY KEY (id))
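This looks like a partially-applied migration: the first pod was killed mid-step, the tables it had already created survive in the database, but schema_migrations never got a row for step #1, so the second pod retries the step and its plain CREATE TABLE hits a table that already exists. A minimal sketch of that sequence (my interpretation, using Python's sqlite3 in place of PostgreSQL, with a table name borrowed from the logs):

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Migration bookkeeping table exists up front (the first pod's logs show
# "select version from schema_migrations" succeeding before step #1 runs).
db.execute("CREATE TABLE schema_migrations (version TEXT)")

# First pod: step #1 starts and creates tables...
db.execute("CREATE TABLE active_rule_parameters (id INTEGER PRIMARY KEY)")
# ...but the pod is killed before the step finishes, so no version row
# is ever inserted into schema_migrations.

# Second pod: no recorded version, so step #1 is retried from scratch.
try:
    db.execute("CREATE TABLE active_rule_parameters (id INTEGER PRIMARY KEY)")
    retried_ok = True
except sqlite3.OperationalError as err:
    retried_ok = False
    print("step #1 fails again:", err)  # table ... already exists
```

That matches the second pod's logs: step #1 fails within ~80 ms, on the very first CREATE TABLE.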
Before the ‘Stopping SonarQube’ message I get 15 of these in the Kubernetes events:
0s Warning Unhealthy pod/sonarqube-sonarqube-lts-0 Startup probe failed: Get "http://10.0.32.240:9000/api/system/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
which I think is OK, since:
startupProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 24
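As a sanity check on the probe arithmetic (a sketch using the values above; kubelet behaviour assumed, not verified):

```python
# Startup-probe settings copied from the chart values in this report.
initial_delay = 30     # initialDelaySeconds
period = 10            # periodSeconds
failure_threshold = 24 # failureThreshold

# Worst case, the startup probe only kills the pod after the full budget:
kill_after = initial_delay + failure_threshold * period
print(f"startup probe would kill the pod no earlier than ~{kill_after}s")

# The 15 observed failures use only part of that budget:
observed = 15
print(f"{observed} failures is about {observed * period}s "
      f"of a {failure_threshold * period}s probe budget")
```

Since 15 failures is well under the threshold of 24, the startup probe by itself should not yet have terminated the pod, which is why I think those warnings are OK on their own.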
What might cause the ‘DEBUG app[][o.s.a.NodeLifecycle] Shutdown Hook tryToMoveTo from STARTING to STOPPING => true’ line?