SonarQube is not running because of an indexing error

Must-share information (formatted with Markdown):

  • which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension)
    SonarQube 8.1.0.31237
  • what are you trying to achieve
    We are trying to bring the application up and are seeing the error below. This is happening in our production environment. Please help and suggest a solution.
    (log starts at the beginning of the file)
    2020.10.13 10:31:27 INFO app[o.s.a.AppFileSystem] Cleaning or creating temp directory /amp/app/install/temp
    2020.10.13 10:31:27 INFO app[o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
    2020.10.13 10:31:27 INFO app[o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/amp/app/install/elasticsearch]: /amp/app/install/elasticsearch/bin/elasticsearch
    2020.10.13 10:31:27 INFO app[o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
    OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
    2020.10.13 10:31:27 INFO app[o.e.p.PluginsService] no modules loaded
    2020.10.13 10:31:27 INFO app[o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
    2020.10.13 10:31:32 INFO es[o.e.e.NodeEnvironment] using [1] data paths, mounts [[/amp/app/install/data (159.202.134.231:/vol_SONARQUBE/E3-Sonar/data)]], net usable_space [30.9gb], net total_space [65gb], types [nfs]
    2020.10.13 10:31:32 INFO es[o.e.e.NodeEnvironment] heap size [4.9gb], compressed ordinary object pointers [true]
    2020.10.13 10:31:33 INFO es[o.e.n.Node] node name [sonarqube], node ID [q3wT1FfPTKa_UXFjJk9aSg]
    2020.10.13 10:31:33 INFO es[o.e.n.Node] version[6.8.4], pid[30], build[default/tar/bca0c8d/2019-10-16T06:19:49.319352Z], OS[Linux/3.10.0-1127.18.2.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.6/11.0.6+10-LTS]
    2020.10.13 10:31:33 INFO es[o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/amp/app/install/temp, -XX:ErrorFile=…/logs/es_hs_err_pid%p.log, -Xmx5120m, -Xms5120m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/amp/app/install/elasticsearch, -Des.path.conf=/amp/app/install/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar]
    2020.10.13 10:31:33 INFO es[o.e.p.PluginsService] loaded module [analysis-common]
    2020.10.13 10:31:33 INFO es[o.e.p.PluginsService] loaded module [lang-painless]
    2020.10.13 10:31:33 INFO es[o.e.p.PluginsService] loaded module [mapper-extras]
    2020.10.13 10:31:33 INFO es[o.e.p.PluginsService] loaded module [parent-join]
    2020.10.13 10:31:33 INFO es[o.e.p.PluginsService] loaded module [percolator]
    2020.10.13 10:31:33 INFO es[o.e.p.PluginsService] loaded module [reindex]
    2020.10.13 10:31:33 INFO es[o.e.p.PluginsService] loaded module [repository-url]
    2020.10.13 10:31:33 INFO es[o.e.p.PluginsService] loaded module [transport-netty4]
    2020.10.13 10:31:33 INFO es[o.e.p.PluginsService] no plugins loaded
    2020.10.13 10:31:35 WARN es[o.e.d.c.s.Settings] [http.enabled] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
    2020.10.13 10:31:37 INFO es[o.e.d.DiscoveryModule] using discovery type [zen] and host providers [settings]
    2020.10.13 10:31:37 INFO es[o.e.n.Node] initialized
    2020.10.13 10:31:37 INFO es[o.e.n.Node] starting …
    2020.10.13 10:31:37 INFO es[o.e.t.TransportService] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
    2020.10.13 10:31:41 INFO es[o.e.c.s.MasterService] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {sonarqube}{q3wT1FfPTKa_UXFjJk9aSg}{2SH5hCcJQBCOTxKc1oRaxA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}
    2020.10.13 10:31:41 INFO es[o.e.c.s.ClusterApplierService] new_master {sonarqube}{q3wT1FfPTKa_UXFjJk9aSg}{2SH5hCcJQBCOTxKc1oRaxA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}, reason: apply cluster state (from master [master {sonarqube}{q3wT1FfPTKa_UXFjJk9aSg}{2SH5hCcJQBCOTxKc1oRaxA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
    2020.10.13 10:31:41 INFO es[o.e.n.Node] started
    2020.10.13 10:31:41 INFO es[o.e.g.GatewayService] recovered [7] indices into cluster_state
    2020.10.13 10:31:43 INFO es[o.e.m.j.JvmGcMonitorService] [gc][6] overhead, spent [418ms] collecting in the last [1s]
    2020.10.13 10:31:44 INFO es[o.e.m.j.JvmGcMonitorService] [gc][7] overhead, spent [374ms] collecting in the last [1s]
    2020.10.13 10:31:45 INFO es[o.e.m.j.JvmGcMonitorService] [gc][8] overhead, spent [433ms] collecting in the last [1s]
    2020.10.13 10:31:46 INFO es[o.e.m.j.JvmGcMonitorService] [gc][9] overhead, spent [416ms] collecting in the last [1s]
    2020.10.13 10:31:47 INFO es[o.e.m.j.JvmGcMonitorService] [gc][10] overhead, spent [480ms] collecting in the last [1s]
    2020.10.13 10:31:48 INFO es[o.e.m.j.JvmGcMonitorService] [gc][11] overhead, spent [492ms] collecting in the last [1s]
    2020.10.13 10:31:49 INFO es[o.e.m.j.JvmGcMonitorService] [gc][12] overhead, spent [489ms] collecting in the last [1s]
    2020.10.13 10:31:50 WARN es[o.e.m.j.JvmGcMonitorService] [gc][13] overhead, spent [526ms] collecting in the last [1s]
    2020.10.13 10:31:51 INFO es[o.e.m.j.JvmGcMonitorService] [gc][14] overhead, spent [502ms] collecting in the last [1s]
    2020.10.13 10:31:52 INFO es[o.e.m.j.JvmGcMonitorService] [gc][15] overhead, spent [481ms] collecting in the last [1s]
    2020.10.13 10:31:53 WARN es[o.e.m.j.JvmGcMonitorService] [gc][16] overhead, spent [524ms] collecting in the last [1s]
    2020.10.13 10:31:54 INFO es[o.e.m.j.JvmGcMonitorService] [gc][17] overhead, spent [501ms] collecting in the last [1s]
    2020.10.13 10:31:55 INFO es[o.e.m.j.JvmGcMonitorService] [gc][18] overhead, spent [460ms] collecting in the last [1s]
    2020.10.13 10:31:56 INFO es[o.e.m.j.JvmGcMonitorService] [gc][19] overhead, spent [445ms] collecting in the last [1s]
    2020.10.13 10:31:57 INFO es[o.e.m.j.JvmGcMonitorService] [gc][20] overhead, spent [457ms] collecting in the last [1s]
    2020.10.13 10:31:58 INFO es[o.e.m.j.JvmGcMonitorService] [gc][21] overhead, spent [491ms] collecting in the last [1s]
    2020.10.13 10:31:59 INFO es[o.e.m.j.JvmGcMonitorService] [gc][22] overhead, spent [452ms] collecting in the last [1s]
    2020.10.13 10:32:00 INFO es[o.e.m.j.JvmGcMonitorService] [gc][23] overhead, spent [477ms] collecting in the last [1s]
    2020.10.13 10:32:01 INFO es[o.e.m.j.JvmGcMonitorService] [gc][24] overhead, spent [470ms] collecting in the last [1s]
    2020.10.13 10:32:02 INFO es[o.e.m.j.JvmGcMonitorService] [gc][25] overhead, spent [455ms] collecting in the last [1s]
    2020.10.13 10:32:03 INFO es[o.e.m.j.JvmGcMonitorService] [gc][26] overhead, spent [477ms] collecting in the last [1s]
    2020.10.13 10:32:04 INFO es[o.e.m.j.JvmGcMonitorService] [gc][27] overhead, spent [491ms] collecting in the last [1s]
    2020.10.13 10:32:05 INFO es[o.e.m.j.JvmGcMonitorService] [gc][28] overhead, spent [462ms] collecting in the last [1s]
    2020.10.13 10:32:06 INFO es[o.e.m.j.JvmGcMonitorService] [gc][29] overhead, spent [495ms] collecting in the last [1s]
    2020.10.13 10:32:07 INFO es[o.e.m.j.JvmGcMonitorService] [gc][30] overhead, spent [404ms] collecting in the last [1s]
    2020.10.13 10:32:08 INFO es[o.e.m.j.JvmGcMonitorService] [gc][31] overhead, spent [490ms] collecting in the last [1s]
    2020.10.13 10:32:09 WARN es[o.e.m.j.JvmGcMonitorService] [gc][32] overhead, spent [519ms] collecting in the last [1s]
    2020.10.13 10:32:10 INFO es[o.e.m.j.JvmGcMonitorService] [gc][33] overhead, spent [465ms] collecting in the last [1s]
    2020.10.13 10:32:11 INFO es[o.e.m.j.JvmGcMonitorService] [gc][34] overhead, spent [399ms] collecting in the last [1s]
    2020.10.13 10:32:12 INFO es[o.e.m.j.JvmGcMonitorService] [gc][35] overhead, spent [381ms] collecting in the last [1s]
    2020.10.13 10:32:13 INFO es[o.e.m.j.JvmGcMonitorService] [gc][36] overhead, spent [457ms] collecting in the last [1s]
    2020.10.13 10:32:15 INFO es[o.e.m.j.JvmGcMonitorService] [gc][37] overhead, spent [517ms] collecting in the last [1s]
    2020.10.13 10:32:16 INFO es[o.e.m.j.JvmGcMonitorService] [gc][38] overhead, spent [388ms] collecting in the last [1s]
    2020.10.13 10:32:17 INFO es[o.e.m.j.JvmGcMonitorService] [gc][39] overhead, spent [429ms] collecting in the last [1s]
    2020.10.13 10:32:18 INFO es[o.e.m.j.JvmGcMonitorService] [gc][40] overhead, spent [426ms] collecting in the last [1s]
    2020.10.13 10:32:19 INFO es[o.e.m.j.JvmGcMonitorService] [gc][41] overhead, spent [505ms] collecting in the last [1s]
    2020.10.13 10:32:20 INFO es[o.e.m.j.JvmGcMonitorService] [gc][42] overhead, spent [458ms] collecting in the last [1s]
    2020.10.13 10:32:21 INFO es[o.e.m.j.JvmGcMonitorService] [gc][43] overhead, spent [462ms] collecting in the last [1s]
    2020.10.13 10:32:22 INFO es[o.e.m.j.JvmGcMonitorService] [gc][44] overhead, spent [452ms] collecting in the last [1s]
    2020.10.13 10:32:23 INFO es[o.e.m.j.JvmGcMonitorService] [gc][45] overhead, spent [451ms] collecting in the last [1s]
    2020.10.13 10:32:24 INFO es[o.e.m.j.JvmGcMonitorService] [gc][46] overhead, spent [431ms] collecting in the last [1s]
    2020.10.13 10:32:25 INFO es[o.e.m.j.JvmGcMonitorService] [gc][47] overhead, spent [446ms] collecting in the last [1s]
    2020.10.13 10:32:26 INFO es[o.e.m.j.JvmGcMonitorService] [gc][48] overhead, spent [443ms] collecting in the last [1s]
    2020.10.13 10:32:26 INFO es[o.e.i.IndexingMemoryController] now throttling indexing for shard [[components][1]]: segment writing can't keep up
  • what have you tried so far to achieve this
    We have restarted the application many times, but it does not come up because of this issue.

Hi, you need to increase the memory allocated to the Elasticsearch process. In your sonar.properties configuration file, find the sonar.search.javaOpts property, uncomment the line, double the default heap values, and restart SonarQube. That should help considerably. If you still observe slow GC in the Elasticsearch process, double the value again.

sonar.search.javaOpts=-Xmx1024m -Xms1024m -XX:+HeapDumpOnOutOfMemoryError
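
For example, doubling the heap relative to the line above would look like this (the exact value is a starting point, not a recommendation — tune it against the GC warnings in your es log):

```properties
# sonar.properties — heap for the embedded Elasticsearch (search) process.
# Doubled from the 1024m shown above; -Xms and -Xmx should stay equal.
sonar.search.javaOpts=-Xmx2048m -Xms2048m -XX:+HeapDumpOnOutOfMemoryError
```

After restarting, check the new es log: if the `JvmGcMonitorService` overhead messages are gone or rare, the heap is large enough; if they persist, double again.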