InternalClusterInfoService failed to retrieve shard stats

Must-share information (formatted with Markdown):

  • Installed using Helm. Version is 9.7.1 (chart used: sonarqube 6.0.1+425 · sonarsource/sonarqube).
  • Suddenly SonarQube is not working, due to the Elasticsearch indices I think. Open to suggestions.
  • I tried increasing initialDelaySeconds for the readiness and liveness probes.
  • Tried changing the PVC, but it didn't work.
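For reference, bumping the probe delays was done roughly like this — a sketch only, assuming the `readinessProbe.initialDelaySeconds` and `livenessProbe.initialDelaySeconds` value keys exposed by the sonarsource/sonarqube chart (verify the exact key names with `helm show values` for your chart version before applying):

```shell
# Inspect the chart's default probe settings first
# (key names below are assumed from the sonarsource/sonarqube chart)
helm show values sonarqube/sonarqube | grep -A3 -i probe

# Hypothetical sketch: raise the initial delays so slow Elasticsearch
# startup does not get the pod killed/restarted by kubelet
helm upgrade sonarqube sonarqube/sonarqube \
  --reuse-values \
  --set readinessProbe.initialDelaySeconds=120 \
  --set livenessProbe.initialDelaySeconds=120
```

Note this only delays the first probe; if Elasticsearch stats calls keep timing out after startup (as in the logs below), the probes are usually a symptom rather than the cause.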

[InternalClusterInfoService] failed to retrieve shard stats from node [NXpXTj3UQYqx1f98n2oN5g]: [sona

Logs:

app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2023.08.29 14:43:03 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2023.08.29 14:43:11 INFO  es[][o.e.n.Node] version[7.17.5], pid[25], build[default/tar/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/5.4.226-129.415.amzn2.x86_64/amd64], JVM[Alpine/OpenJDK 64-Bit Server VM/11.0.15/11.0.15+10-alpine-r0]
2023.08.29 14:43:11 INFO  es[][o.e.n.Node] JVM home [/usr/lib/jvm/java-11-openjdk]
2023.08.29 14:43:11 INFO  es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/opt/sonarqube/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/elasticsearch, -Des.path.conf=/opt/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2023.08.29 14:49:54 WARN  es[][o.e.t.TransportService] Received response for a request that has timed out, sent [24.6s/24618ms] ago, timed out [9.6s/9605ms] ago, action [indices:monitor/stats[n]], node [{sonarqube}{NXpXTj3UQYqx1f98n2oN5g}{lxF2kH4NSba78baMxdlD_w}{127.0.0.1}{127.0.0.1:39363}{cdfhimrsw}{rack_id=sonarqube}], id [265]
2023.08.29 14:50:39 WARN  es[][o.e.c.InternalClusterInfoService] failed to retrieve stats for node [NXpXTj3UQYqx1f98n2oN5g]: [sonarqube][127.0.0.1:39363][cluster:monitor/nodes/stats[n]] request_id [290] timed out after [15009ms]
2023.08.29 14:50:39 WARN  es[][o.e.c.InternalClusterInfoService] failed to retrieve shard stats from node [NXpXTj3UQYqx1f98n2oN5g]: [sonarqube][127.0.0.1:39363][indices:monitor/stats[n]] request_id [291] timed out after [15009ms]
2023.08.29 14:51:12 INFO  app[][o.s.a.SchedulerImpl] Stopping SonarQube
2023.08.29 14:51:16 INFO  app[][o.s.a.SchedulerImpl] Sonarqube has been requested to stop
2023.08.29 14:51:16 INFO  app[][o.s.a.SchedulerImpl] Stopping [Compute Engine] process...
2023.08.29 14:51:16 INFO  app[][o.s.a.SchedulerImpl] Stopping [Web Server] process...
2023.08.29 14:51:16 WARN  es[][o.e.t.TransportService] Received response for a request that has timed out, sent [52.4s/52436ms] ago, timed out [37.4s/37427ms] ago, action [cluster:monitor/nodes/stats[n]], node [{sonarqube}{NXpXTj3UQYqx1f98n2oN5g}{lxF2kH4NSba78baMxdlD_w}{127.0.0.1}{127.0.0.1:39363}{cdfhimrsw}{rack_id=sonarqube}], id [290]
2023.08.29 14:51:16 WARN  es[][o.e.t.TransportService] Received response for a request that has timed out, sent [52.4s/52436ms] ago, timed out [37.4s/37427ms] ago, action [indices:monitor/stats[n]], node [{sonarqube}{NXpXTj3UQYqx1f98n2oN5g}{lxF2kH4NSba78baMxdlD_w}{127.0.0.1}{127.0.0.1:39363}{cdfhimrsw}{rack_id=sonarqube}], id [291]
2023.08.29 14:51:16 INFO  web[][o.s.p.ProcessEntryPoint] Gracefully stopping process

Please assist!

No support from the community…
But I solved the problem!!

Hi,

Welcome to the community!

I’m glad you worked through this.

You may want to review the FAQ, particularly this section.

Ann