Hi @Colin.
Although it does make me wonder if, around the time of the incident you’re describing, a second container was somehow started targeting the same database.
Unfortunately, I cannot rule that out. Resolving this server freeze issue took quite a while. During that time the VM was still somehow running but was not reachable at all.
Have you experienced the lost CE tasks since the two hiccups earlier this week (i.e., is the issue still recurring)?
No, those two hiccups regarding lost CE tasks were the only ones. The last one occurred on Friday, 2022-11-25.
I have temporarily enabled debug log level for the instance[…]
I was about to close this topic when I realized that the debug log level wasn’t active anymore, even though it should have been, since there had been no planned SonarQube restart since then. It turns out SonarQube went through another restart loop last Saturday (2022-11-26): a network configuration change destabilized our Kubernetes cluster connectivity and led to SonarQube being restarted. I really need to improve my monitoring in that regard. I was able to save the latest container stop logs, and it looks like Elasticsearch didn’t like those restarts either. Six restarts later, once the network issue was fully resolved, SonarQube started successfully. Below you can find the stack trace from the last container stop. Do you think we should stop SonarQube gracefully, clean up the Elasticsearch data and start the application again?
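As a note to myself (and anyone hitting the same thing): a log level changed at runtime resets to INFO whenever SonarQube restarts, so it has to be re-applied after every unplanned restart. Roughly like this, with the host and credentials as placeholders:

```shell
# Re-apply the temporary debug log level after a restart.
# The runtime log level resets to INFO on every SonarQube restart;
# host and credentials below are placeholders for our instance.
curl -u admin:password -X POST \
  "https://sonarqube.example.com/api/system/change_log_level?level=DEBUG"
```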
Latest restart logs
2022.11.26 10:18:04 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2022.11.26 10:18:04 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:38635]
2022.11.26 10:18:04 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2022.11.26 10:18:04 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2022.11.26 10:18:05 INFO es[][o.e.n.Node] version[7.17.5], pid[31], build[default/tar/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/5.10.0-12-amd64/amd64], JVM[Alpine/OpenJDK 64-Bit Server VM/11.0.15/11.0.15+10-alpine-r0]
2022.11.26 10:18:05 INFO es[][o.e.n.Node] JVM home [/usr/lib/jvm/java-11-openjdk]
2022.11.26 10:18:05 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/opt/sonarqube/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx2G, -Xms2G, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/elasticsearch, -Des.path.conf=/opt/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2022.11.26 10:18:05 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2022.11.26 10:18:05 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2022.11.26 10:18:05 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2022.11.26 10:18:05 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2022.11.26 10:18:05 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2022.11.26 10:18:05 INFO es[][o.e.p.PluginsService] no plugins loaded
2022.11.26 10:18:05 ERROR es[][o.e.b.ElasticsearchUncaughtExceptionHandler] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/opt/sonarqube/data/es7]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:173) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) ~[elasticsearch-cli-7.17.5.jar:7.17.5]
at org.elasticsearch.cli.Command.main(Command.java:77) ~[elasticsearch-cli-7.17.5.jar:7.17.5]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) ~[elasticsearch-7.17.5.jar:7.17.5]
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/opt/sonarqube/data/es7]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:328) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.node.Node.<init>(Node.java:429) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.node.Node.<init>(Node.java:309) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) ~[elasticsearch-7.17.5.jar:7.17.5]
... 6 more
uncaught exception in thread [main]
java.lang.IllegalStateException: failed to obtain node locks, tried [[/opt/sonarqube/data/es7]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:328)
at org.elasticsearch.node.Node.<init>(Node.java:429)
at org.elasticsearch.node.Node.<init>(Node.java:309)
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112)
at org.elasticsearch.cli.Command.main(Command.java:77)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
For complete error details, refer to the log at /opt/sonarqube/logs/sonarqube.log
2022.11.26 10:18:05 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [ElasticSearch]: 1
2022.11.26 10:18:05 INFO app[][o.s.a.SchedulerImpl] Process[ElasticSearch] is stopped
2022.11.26 10:18:05 ERROR app[][o.s.a.p.EsManagedProcess] Failed to check status
org.elasticsearch.ElasticsearchException: java.lang.InterruptedException
at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2695)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:2171)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:2137)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:2105)
at org.elasticsearch.client.ClusterClient.health(ClusterClient.java:151)
at org.sonar.application.es.EsConnectorImpl.getClusterHealthStatus(EsConnectorImpl.java:64)
at org.sonar.application.process.EsManagedProcess.checkStatus(EsManagedProcess.java:92)
at org.sonar.application.process.EsManagedProcess.checkOperational(EsManagedProcess.java:84)
at org.sonar.application.process.EsManagedProcess.isOperational(EsManagedProcess.java:62)
at org.sonar.application.process.ManagedProcessHandler.refreshState(ManagedProcessHandler.java:223)
at org.sonar.application.process.ManagedProcessHandler$EventWatcher.run(ManagedProcessHandler.java:288)
Caused by: java.lang.InterruptedException: null
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1040)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345)
at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:243)
at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:75)
at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2692)
... 10 common frames omitted
2022.11.26 10:18:05 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
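In case it clarifies my question, the cleanup I have in mind would look roughly like the following. The data path comes from the node-lock error above, and the Deployment name is a placeholder for ours; as far as I understand, SonarQube rebuilds the search index from the database when the Elasticsearch data directory is missing on startup.

```shell
# Sketch of the proposed cleanup (Deployment name is a placeholder).
kubectl scale deployment/sonarqube --replicas=0   # stop SonarQube gracefully

# With the pod stopped, remove the Elasticsearch data directory on the
# persistent volume; the index is rebuilt from the database on next start.
rm -rf /opt/sonarqube/data/es7

kubectl scale deployment/sonarqube --replicas=1   # start the application again
```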
We don’t use the official Helm chart yet. Currently it’s a bare Deployment with the database and SonarQube wrapped inside one pod, and it has been running for a few years now. We did the sysctl adjustments on the node manually.
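For completeness, the sysctl adjustments were along these lines (assuming the standard values from the SonarQube host requirements):

```shell
# Host settings required by the embedded Elasticsearch
# (standard values from the SonarQube host requirements).
sysctl -w vm.max_map_count=524288
sysctl -w fs.file-max=131072
```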
Best regards,
Steven