java.lang.OutOfMemoryError while starting, disk space usage

Hi,

Do I need to add disk or memory? I am using SQ 5.6.5.

```
2018.08.31 11:24:22 INFO   es[o.e.monitor.jvm]  [sonar-1535706608232] [gc][old][362][56] duration [7.5s], collections [1]/[7.7s], total [7.5s]/[7.7m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [13.9mb]->[13.6mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.08.31 11:24:35 WARN   es[o.e.monitor.jvm]  [sonar-1535706608232] [gc][old][363][57] duration [13.2s], collections [1]/[13.2s], total [13.2s]/[7.9m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [13.6mb]->[15.5mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.08.31 11:24:35 WARN   es[o.e.c.r.a.decider]  [sonar-1535706608232] high disk watermark [90%] exceeded on [sdpAHjhhSGerGk0ypBKqLQ][sonar-1535706608232] free: 0b[0%], shards will be relocated away from this node
2018.08.31 11:24:43 INFO   es[o.e.monitor.jvm]  [sonar-1535706608232] [gc][old][364][58] duration [7.5s], collections [1]/[7.6s], total [7.5s]/[8m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [15.5mb]->[15.6mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.08.31 11:24:56 WARN   es[o.e.monitor.jvm]  [sonar-1535706608232] [gc][old][365][59] duration [12.9s], collections [1]/[13s], total [12.9s]/[8.2m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [15.6mb]->[15mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid1249.hprof ...
Dump file is incomplete: No space left on device
2018.08.31 11:25:41 INFO   es[o.e.monitor.jvm]  [sonar-1535706608232] [gc][old][366][62] duration [28.5s], collections [3]/[7.7s], total [28.5s]/[8.7m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [15mb]->[14.6mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.08.31 11:25:41 WARN   es[o.e.c.r.a.decider]  [sonar-1535706608232] high disk watermark [90%] exceeded on [sdpAHjhhSGerGk0ypBKqLQ][sonar-1535706608232] free: 0b[0%], shards will be relocated away from this node
2018.08.31 11:25:41 INFO   es[o.e.c.r.a.decider]  [sonar-1535706608232] high disk watermark exceeded on one or more nodes, rerouting shards
2018.08.31 11:25:54 WARN   es[o.e.index.engine]  [sonar-1535706608232] [tests][2] failed to sync translog
2018.08.31 11:26:02 WARN   es[o.e.monitor.jvm]  [sonar-1535706608232] [gc][old][367][64] duration [21.6s], collections [2]/[58.5s], total [21.6s]/[9.1m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [14.6mb]->[5.7mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.08.31 11:26:16 WARN   es[o.e.monitor.jvm]  [sonar-1535706608232] [gc][old][368][65] duration [13s], collections [1]/[13.2s], total [13s]/[9.3m], memory [1.9gb]->[1.8gb]/[1.9gb], all_pools {[young] [133.1mb]->[356.6kb]/[133.1mb]}{[survivor] [5.7mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.08.31 11:26:16 WARN   es[o.e.c.r.a.decider]  [sonar-1535706608232] high disk watermark [90%] exceeded on [sdpAHjhhSGerGk0ypBKqLQ][sonar-1535706608232] free: 0b[0%], shards will be relocated away from this node
2018.08.31 11:26:16 WARN   es[o.e.indices.cluster]  [sonar-1535706608232] [[tests][2]] marking and sending shard failed due to [failed recovery]
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [tests][2] failed to recover shard
        at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:297) ~[elasticsearch-1.7.5.jar:na]
        at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:112) ~[elasticsearch-1.7.5.jar:na]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_171]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_171]
        at java.lang.Thread.run(Thread.java:748) [na:1.8.0_171]
Caused by: org.elasticsearch.index.translog.TranslogException: [tests][2] Failed to write operation [org.elasticsearch.index.translog.Translog$Delete@342c5b31]
        at org.elasticsearch.index.translog.fs.FsTranslog.add(FsTranslog.java:398) ~[elasticsearch-1.7.5.jar:na]
        at org.elasticsearch.index.engine.InternalEngine.innerDelete(InternalEngine.java:513) ~[elasticsearch-1.7.5.jar:na]
        at org.elasticsearch.index.engine.InternalEngine.delete(InternalEngine.java:457) ~[elasticsearch-1.7.5.jar:na]
        at org.elasticsearch.index.shard.IndexShard.performRecoveryOperation(IndexShard.java:946) ~[elasticsearch-1.7.5.jar:na]
        at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:278) ~[elasticsearch-1.7.5.jar:na]
        ... 4 common frames omitted
Caused by: java.io.IOException: No space left on device
        at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_171]
        at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60) ~[na:1.8.0_171]
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_171]
        at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.8.0_171]
        at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211) ~[na:1.8.0_171]
        at org.elasticsearch.common.io.Channels.writeToChannel(Channels.java:193) ~[elasticsearch-1.7.5.jar:na]
        at org.elasticsearch.index.translog.fs.BufferingFsTranslogFile.flushBuffer(BufferingFsTranslogFile.java:116) ~[elasticsearch-1.7.5.jar:na]
        at org.elasticsearch.index.translog.fs.BufferingFsTranslogFile.add(BufferingFsTranslogFile.java:101) ~[elasticsearch-1.7.5.jar:na]
        at org.elasticsearch.index.translog.fs.FsTranslog.add(FsTranslog.java:379) ~[elasticsearch-1.7.5.jar:na]
        ... 8 common frames omitted
2018.08.31 11:26:17 WARN   es[o.e.c.action.shard]  [sonar-1535706608232] [tests][2] received shard failed for [tests][2], node[sdpAHjhhSGerGk0ypBKqLQ], [P], s[INITIALIZING], unassigned_info[[reason=CLUSTER_RECOVERED], at[2018-08-31T09:10:24.844Z]], indexUUID [br1QbpUdQC-mwpecn9SFCQ], reason [shard failure [failed recovery][IndexShardGatewayRecoveryException[[tests][2] failed to recover shard]; nested: TranslogException[[tests][2] Failed to write operation [org.elasticsearch.index.translog.Translog$Delete@342c5b31]]; nested: IOException[No space left on device]; ]]
```

I am not sure how to interpret this. What do I need to do?

br,

//mikael

Hi mikael,

Here’s your problem:

Your disk is full. You should provide more space or clean up what you’ve got.
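
A quick way to confirm, and to see what is eating the space (a sketch, assuming a default installation under /opt/sonar):

```
# How full is the filesystem hosting SonarQube?
df -h /opt/sonar

# What takes the space? data/ holds the Elasticsearch indexes.
du -sh /opt/sonar/data /opt/sonar/logs /opt/sonar/temp

# Old heap dumps (java_pid*.hprof) can also eat gigabytes
ls -lh /opt/sonar/java_pid*.hprof
```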

Ann

Hi,
OK. Then which files from SonarQube can I safely remove without endangering its stability?

br.

//mikael

Apart from deleting projects that carry a huge amount of information (a large number of files, many past analyses), no files can be removed from a SonarQube installation to reduce disk consumption. You should review your hardware infrastructure.
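
If you do delete projects, that is done from each project's Administration page, or through the web API; a sketch with a placeholder project key and default local URL (admin credentials required):

```
# Placeholder key and URL -- adjust to your instance
curl -u admin:admin -X POST \
  "http://localhost:9000/api/projects/delete?key=org.example:huge-project"
```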

Regards

We increased disk and memory, and we have rebooted. I attach the log; it seems that the web server is not starting: sonar.txt (355.4 KB)
I cannot see anything saying why it cannot start. Do you?

br,

//mikael

Hi,

The sonar.log only shows part of the picture. I suggest you go through this Troubleshooting Note to learn how to diagnose this and which logs to browse. Given your initial hardware limitation, it seems only fair to monitor the amount/state of local resources (CPU, RAM, I/O) as SonarQube starts up.
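
For instance, watching something as basic as the following while it starts would already tell a lot (just a sketch; any equivalent tool works):

```
# Run these in separate terminals while SonarQube starts
vmstat 5        # CPU, memory, and swap activity every 5 s
iostat -x 5     # per-device I/O load (from the sysstat package)
df -h           # confirm the free disk space is really there
```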

I have no error messages anymore. I have sonar.log and access.log; the others I don't have.

This is the content of sonar.log:

```
2018.09.04 12:24:25 INFO  app[o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonar/temp
2018.09.04 12:24:25 INFO  app[o.s.p.m.JavaProcessLauncher] Launch process[es]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Xmx2G -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonar/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonar/temp/sq-process7324743968733895145properties
2018.09.04 12:24:28 INFO   es[o.s.p.ProcessEntryPoint]  Starting es
2018.09.04 12:24:28 INFO   es[o.s.s.EsSettings]  Elasticsearch listening on 127.0.0.1:9001
2018.09.04 12:24:29 INFO   es[o.elasticsearch.node]  [sonar-1536056664911] version[1.7.5], pid[1276], build[00f95f4/2016-02-02T09:55:30Z]
2018.09.04 12:24:29 INFO   es[o.elasticsearch.node]  [sonar-1536056664911] initializing ...
2018.09.04 12:24:30 INFO   es[o.e.plugins]  [sonar-1536056664911] loaded [], sites []
2018.09.04 12:24:30 INFO   es[o.elasticsearch.env]  [sonar-1536056664911] using [1] data paths, mounts [[/ (/dev/mapper/seliius01837--vg-root)]], net usable_space [18.4gb], net total_space [24.9gb], types [ext4]
2018.09.04 12:24:33 WARN   es[o.e.bootstrap]  JNA not found. native methods will be disabled.
2018.09.04 12:24:35 INFO   es[o.elasticsearch.node]  [sonar-1536056664911] initialized
2018.09.04 12:24:35 INFO   es[o.elasticsearch.node]  [sonar-1536056664911] starting ...
2018.09.04 12:24:35 INFO   es[o.e.transport]  [sonar-1536056664911] bound_address {inet[/127.0.0.1:9001]}, publish_address {inet[/127.0.0.1:9001]}
2018.09.04 12:24:35 INFO   es[o.e.discovery]  [sonar-1536056664911] sonarqube/zCbnUZo6QneHszx8lVazYA
2018.09.04 12:24:38 INFO   es[o.e.cluster.service]  [sonar-1536056664911] new_master [sonar-1536056664911][zCbnUZo6QneHszx8lVazYA][seliius01837.seli.gic.ericsson.se][inet[/127.0.0.1:9001]]{rack_id=sonar-1536056664911}, reason: zen-disco-join (elected_as_master)
2018.09.04 12:24:38 INFO   es[o.elasticsearch.node]  [sonar-1536056664911] started
2018.09.04 12:24:38 INFO   es[o.e.gateway]  [sonar-1536056664911] recovered [6] indices into cluster_state
2018.09.04 12:26:34 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][young][106][64] duration [792ms], collections [1]/[1.7s], total [792ms]/[15.7s], memory [698.6mb]->[708.9mb]/[1.9gb], all_pools {[young] [4kb]->[571.1kb]/[133.1mb]}{[survivor] [16.6mb]->[13mb]/[16.6mb]}{[old] [682mb]->[695.3mb]/[1.8gb]}
2018.09.04 12:27:37 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][young][163][97] duration [740ms], collections [1]/[1.3s], total [740ms]/[25.4s], memory [1gb]->[1gb]/[1.9gb], all_pools {[young] [22.6mb]->[4.1kb]/[133.1mb]}{[survivor] [16.6mb]->[16.6mb]/[16.6mb]}{[old] [1gb]->[1gb]/[1.8gb]}
2018.09.04 12:28:31 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][young][210][126] duration [764ms], collections [1]/[1.2s], total [764ms]/[35.4s], memory [1.4gb]->[1.3gb]/[1.9gb], all_pools {[young] [38.5mb]->[2.1mb]/[133.1mb]}{[survivor] [16.6mb]->[16.6mb]/[16.6mb]}{[old] [1.3gb]->[1.3gb]/[1.8gb]}
2018.09.04 12:30:17 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][297][14] duration [8s], collections [1]/[8.9s], total [8s]/[10.6s], memory [1.9gb]->[1.7gb]/[1.9gb], all_pools {[young] [110.6mb]->[819.8kb]/[133.1mb]}{[survivor] [13.1mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.7gb]/[1.8gb]}
2018.09.04 12:30:23 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][young][302][175] duration [754ms], collections [1]/[1s], total [754ms]/[48.7s], memory [1.8gb]->[1.8gb]/[1.9gb], all_pools {[young] [66.8mb]->[6.3kb]/[133.1mb]}{[survivor] [16.6mb]->[13.2mb]/[16.6mb]}{[old] [1.7gb]->[1.7gb]/[1.8gb]}
2018.09.04 12:30:41 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][310][15] duration [9.9s], collections [1]/[10.7s], total [9.9s]/[20.5s], memory [1.9gb]->[1.8gb]/[1.9gb], all_pools {[young] [83.1mb]->[2.5mb]/[133.1mb]}{[survivor] [13.6mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:30:51 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][312][16] duration [7.7s], collections [1]/[8.4s], total [7.7s]/[28.2s], memory [1.9gb]->[1.8gb]/[1.9gb], all_pools {[young] [86.5mb]->[12.8mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:31:06 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][315][17] duration [12.7s], collections [1]/[12.9s], total [12.7s]/[41s], memory [1.9gb]->[1.8gb]/[1.9gb], all_pools {[young] [133.1mb]->[25mb]/[133.1mb]}{[survivor] [7.4mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:31:15 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][317][18] duration [8s], collections [1]/[8.2s], total [8s]/[49.1s], memory [1.9gb]->[1.8gb]/[1.9gb], all_pools {[young] [133.1mb]->[36.3mb]/[133.1mb]}{[survivor] [7.5mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:31:29 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][319][19] duration [12.3s], collections [1]/[12.4s], total [12.3s]/[1m], memory [1.9gb]->[1.8gb]/[1.9gb], all_pools {[young] [133.1mb]->[48.7mb]/[133.1mb]}{[survivor] [4.3mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:31:38 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][321][20] duration [7.6s], collections [1]/[8s], total [7.6s]/[1.1m], memory [1.9gb]->[1.8gb]/[1.9gb], all_pools {[young] [133.1mb]->[55.5mb]/[133.1mb]}{[survivor] [8.4mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:31:52 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][324][21] duration [11.3s], collections [1]/[11.3s], total [11.3s]/[1.3m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[64.6mb]/[133.1mb]}{[survivor] [16.1mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:32:00 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][326][22] duration [7.1s], collections [1]/[7.2s], total [7.1s]/[1.4m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[72.2mb]/[133.1mb]}{[survivor] [11.8mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:32:14 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][328][23] duration [12.5s], collections [1]/[13.1s], total [12.5s]/[1.6m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[80.2mb]/[133.1mb]}{[survivor] [1.6mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:32:22 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][329][24] duration [7.7s], collections [1]/[8.4s], total [7.7s]/[1.8m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [80.2mb]->[86.4mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:32:36 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][330][25] duration [12.6s], collections [1]/[13.2s], total [12.6s]/[2m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [86.4mb]->[92.8mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:32:44 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][331][26] duration [7.4s], collections [1]/[8s], total [7.4s]/[2.1m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [92.8mb]->[96.7mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:32:57 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][333][27] duration [11.7s], collections [1]/[11.8s], total [11.7s]/[2.3m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[101.9mb]/[133.1mb]}{[survivor] [9.9mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:33:05 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][334][28] duration [7.3s], collections [1]/[8.2s], total [7.3s]/[2.4m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [101.9mb]->[108.4mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:33:18 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][335][29] duration [12.3s], collections [1]/[13.2s], total [12.3s]/[2.6m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [108.4mb]->[111.4mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:33:26 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][336][30] duration [7.2s], collections [1]/[7.8s], total [7.2s]/[2.7m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [111.4mb]->[116.3mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:33:39 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][337][31] duration [12.3s], collections [1]/[13.1s], total [12.3s]/[2.9m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [116.3mb]->[116.6mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:33:47 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][338][32] duration [7.3s], collections [1]/[7.7s], total [7.3s]/[3.1m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [116.6mb]->[119.3mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:34:00 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][339][33] duration [12.1s], collections [1]/[12.9s], total [12.1s]/[3.3m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [119.3mb]->[125.5mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:34:09 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][340][34] duration [8s], collections [1]/[8.7s], total [8s]/[3.4m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [125.5mb]->[124.5mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:34:22 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][341][35] duration [12.9s], collections [1]/[13.7s], total [12.9s]/[3.6m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [124.5mb]->[129.2mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:34:31 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][342][36] duration [7.9s], collections [1]/[8.3s], total [7.9s]/[3.7m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [129.2mb]->[128.8mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:34:44 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][343][37] duration [12.8s], collections [1]/[13.3s], total [12.8s]/[4m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [128.8mb]->[131.2mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:34:52 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][344][38] duration [8s], collections [1]/[8.2s], total [8s]/[4.1m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [131.2mb]->[132.2mb]/[133.1mb]}{[survivor] [0b]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:35:05 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][345][39] duration [12.4s], collections [1]/[13s], total [12.4s]/[4.3m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [132.2mb]->[133.1mb]/[133.1mb]}{[survivor] [0b]->[1.2mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:35:13 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][346][40] duration [7.5s], collections [1]/[8s], total [7.5s]/[4.4m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [1.2mb]->[2.3mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:35:26 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][347][41] duration [12.3s], collections [1]/[12.6s], total [12.3s]/[4.6m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [2.3mb]->[3.3mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:35:34 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][348][42] duration [7.6s], collections [1]/[8.1s], total [7.6s]/[4.8m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [3.3mb]->[5.8mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:35:47 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][349][43] duration [12.7s], collections [1]/[12.9s], total [12.7s]/[5m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [5.8mb]->[6.2mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:35:54 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][350][44] duration [7.3s], collections [1]/[7.4s], total [7.3s]/[5.1m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [6.2mb]->[7.5mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:36:09 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][351][45] duration [13.6s], collections [1]/[14.1s], total [13.6s]/[5.3m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [7.5mb]->[8.4mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:36:17 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][352][46] duration [8s], collections [1]/[8.1s], total [8s]/[5.5m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [8.4mb]->[9.1mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:36:30 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][353][47] duration [12.7s], collections [1]/[12.8s], total [12.7s]/[5.7m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [9.1mb]->[9.5mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:36:37 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][354][48] duration [7.1s], collections [1]/[7.2s], total [7.1s]/[5.8m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [9.5mb]->[9.6mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:36:50 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][355][49] duration [12.7s], collections [1]/[12.8s], total [12.7s]/[6m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [9.6mb]->[11.4mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:36:58 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][356][50] duration [7.4s], collections [1]/[7.9s], total [7.4s]/[6.1m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [11.4mb]->[11.6mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:37:10 WARN   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][357][51] duration [12.3s], collections [1]/[12.4s], total [12.3s]/[6.3m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [11.6mb]->[12.1mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 12:37:17 INFO   es[o.e.monitor.jvm]  [sonar-1536056664911] [gc][old][358][52] duration [7.2s], collections [1]/[7.3s], total [7.2s]/[6.5m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [12.1mb]->[13.7mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
```

Also, when I try to restart SonarQube:
```
sudo /etc/init.d/sonar restart
[sudo] password for :
Stopping SonarQube…
Waiting for SonarQube to exit…
Waiting for SonarQube to exit…
```

And it will not restart.

I am stuck.

Before, when there was a problem starting the web server, I could see it in the log. Now I see nothing beyond what I have shown, so no error messages at all. Kind of a mystery.

How long are you waiting after you restart the server? I suggest waiting a good bit… several hours. When this has happened to me, the delay was the search engine re-indexing.
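
While you wait, you can follow what it is doing (paths assume the /opt/sonar layout from your logs; data/es is the SQ 5.x default, adjust if yours differs):

```
# Follow startup / index recovery progress
tail -f /opt/sonar/logs/sonar.log

# Rough size of the search indexes being recovered
du -sh /opt/sonar/data/es
```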

OK, I'll try to be patient…

No need to be:

```
2018.09.04 13:37:41 INFO   es[o.e.monitor.jvm]  [sonar-1536059839067] [gc][old][387][100] duration [1m], collections [8]/[1m], total [1m]/[14.1m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [16.5mb]->[16.5mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid7493.hprof ...
Heap dump file created [2783878030 bytes in 22.977 secs]
2018.09.04 13:42:35 INFO   es[o.e.monitor.jvm]  [sonar-1536059839067] [gc][old][388][137] duration [4.7m], collections [37]/[4.7m], total [4.7m]/[18.8m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [16.5mb]->[16.6mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
2018.09.04 13:59:37 INFO   es[o.e.monitor.jvm]  [sonar-1536059839067] [gc][old][389][271] duration [16.7m], collections [134]/[17.1m], total [16.7m]/[35.6m], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [133.1mb]->[133.1mb]/[133.1mb]}{[survivor] [16.6mb]->[9.3mb]/[16.6mb]}{[old] [1.8gb]->[1.8gb]/[1.8gb]}
Exception in thread "elasticsearch[sonar-1536059839067][generic][T#20]" java.lang.OutOfMemoryError: Java heap space
2018.09.04 13:59:48 WARN   es[o.e.monitor.jvm]  [sonar-1536059839067] [gc][old][390][272] duration [10.4s], collections [1]/[11s], total [10.4s]/[35.8m], memory [1.9gb]->[1.4gb]/[1.9gb], all_pools {[young] [133.1mb]->[2kb]/[133.1mb]}{[survivor] [9.3mb]->[0b]/[16.6mb]}{[old] [1.8gb]->[1.4gb]/[1.8gb]}
2018.09.04 13:59:48 WARN   es[o.e.index.engine]  [sonar-1536059839067] [tests][4] failed engine [out of memory (source: [delete])]
java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:3332) ~[na:1.8.0_181]
	at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) ~[na:1.8.0_181]
	at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) ~[na:1.8.0_181]
	at java.lang.StringBuilder.append(StringBuilder.java:136) ~[na:1.8.0_181]
	at java.lang.Object.toString(Object.java:236) ~[na:1.8.0_181]
	at java.lang.String.valueOf(String.java:2994) ~[na:1.8.0_181]
	at java.lang.StringBuilder.append(StringBuilder.java:131) ~[na:1.8.0_181]
	at org.elasticsearch.index.translog.fs.FsTranslog.add(FsTranslog.java:398) ~[elasticsearch-1.7.5.jar:na]
	at org.elasticsearch.index.engine.InternalEngine.innerDelete(InternalEngine.java:513) ~[elasticsearch-1.7.5.jar:na]
	at org.elasticsearch.index.engine.InternalEngine.delete(InternalEngine.java:457) ~[elasticsearch-1.7.5.jar:na]
	at org.elasticsearch.index.shard.IndexShard.performRecoveryOperation(IndexShard.java:946) [elasticsearch-1.7.5.jar:na]
	at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:278) [elasticsearch-1.7.5.jar:na]
	at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:112) [elasticsearch-1.7.5.jar:na]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_181]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_181]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]
2018.09.04 13:59:48 WARN   es[o.e.indices.cluster]  [sonar-1536059839067] [[tests][4]] marking and sending shard failed due to [engine failure, reason [out of memory (source: [delete])]]
java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:3332) ~[na:1.8.0_181]
	at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) ~[na:1.8.0_181]
	at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) ~[na:1.8.0_181]
	at java.lang.StringBuilder.append(StringBuilder.java:136) ~[na:1.8.0_181]
	at java.lang.Object.toString(Object.java:236) ~[na:1.8.0_181]
	at java.lang.String.valueOf(String.java:2994) ~[na:1.8.0_181]
	at java.lang.StringBuilder.append(StringBuilder.java:131) ~[na:1.8.0_181]
	at org.elasticsearch.index.translog.fs.FsTranslog.add(FsTranslog.java:398) ~[elasticsearch-1.7.5.jar:na]
	at org.elasticsearch.index.engine.InternalEngine.innerDelete(InternalEngine.java:513) ~[elasticsearch-1.7.5.jar:na]
	at org.elasticsearch.index.engine.InternalEngine.delete(InternalEngine.java:457) ~[elasticsearch-1.7.5.jar:na]
	at org.elasticsearch.index.shard.IndexShard.performRecoveryOperation(IndexShard.java:946) ~[elasticsearch-1.7.5.jar:na]
	at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:278) ~[elasticsearch-1.7.5.jar:na]
	at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:112) ~[elasticsearch-1.7.5.jar:na]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_181]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_181]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]
2018.09.04 13:59:48 WARN   es[o.e.c.action.shard]  [sonar-1536059839067] [tests][4] received shard failed for [tests][4], node[ICTYYeRMQoCpmIKEm_wrjQ], [P], s[INITIALIZING], unassigned_info[[reason=CLUSTER_RECOVERED], at[2018-09-04T11:17:25.238Z]], indexUUID [br1QbpUdQC-mwpecn9SFCQ], reason [shard failure [engine failure, reason [out of memory (source: [delete])]][OutOfMemoryError[Java heap space]]]
2018.09.04 13:59:48 WARN   es[o.e.index.engine]  [sonar-1536059839067] [tests][2] failed engine [out of memory (source: [delete])]
java.lang.OutOfMemoryError: Java heap space
2018.09.04 13:59:48 WARN   es[o.e.indices.cluster]  [sonar-1536059839067] [[tests][2]] marking and sending shard failed due to [engine failure, reason [out of memory (source: [delete])]]
java.lang.OutOfMemoryError: Java heap space
2018.09.04 13:59:48 WARN   es[o.e.c.action.shard]  [sonar-1536059839067] [tests][2] received shard failed for [tests][2], node[ICTYYeRMQoCpmIKEm_wrjQ], [P], s[INITIALIZING], unassigned_info[[reason=CLUSTER_RECOVERED], at[2018-09-04T11:17:25.238Z]], indexUUID [br1QbpUdQC-mwpecn9SFCQ], reason [shard failure [engine failure, reason [out of memory (source: [delete])]][OutOfMemoryError[Java heap space]]]
```

What do I need to change?

br,

//mikael

This seems to be the start command:

```
2018.09.04 13:17:19 INFO  app[o.s.p.m.JavaProcessLauncher] Launch process[es]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Xmx2G -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonar/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonar/temp/sq-process59508038885362601properties
```
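
(If I read the docs right, that -Xmx2G comes from sonar.search.javaOpts in conf/sonar.properties; raising it further would look roughly like this, the value depending on available RAM:)

```
# conf/sonar.properties -- JVM options for the search (Elasticsearch) process.
# Flags mirror the launch line above; only -Xmx is raised (sketch).
sonar.search.javaOpts=-Xmx3G -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true \
    -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 \
    -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
```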

Something fishy here?

//mikael

@eraonel: please store logs in text files or at least use formatted blocks (``` marker) when sharing logs, otherwise they're hardly readable.

Did you get a chance to check resource consumption as mentioned?

You can find some monitoring tips here:

Sorry about the messy logs.

What ‘OS monitoring tools’ do you want me to run to give you the correct information?

br,

//mikael

I don't have any specific tool in mind. For starters, just look at CPU/RAM consumption at the OS level (using the method of your choice). Then, to look more closely at the Java side (based on the Architecture documentation I shared earlier), you could use tools like https://visualvm.github.io/ .
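
On the Java side, even the JDK's own command-line tools will show whether the search process heap is maxed out (a sketch; replace <pid> with the actual process id):

```
# Find the PID of the search (Elasticsearch) JVM
jps -l | grep org.sonar.search.SearchServer

# Heap occupancy and GC time, sampled every 5 seconds
jstat -gcutil <pid> 5000
```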

Hi,

Monitoring memory, CPU, and disk on the server did not reveal anything.

However I got a heap dump:

It shows the following (see below):

What do you suggest I do?

br,

//mikael

PS: I did not upload the complete *.hprof since it is 2655 MB.
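
(A dump that size can be analysed locally rather than uploaded, e.g. with the jhat tool bundled in JDK 8, or better, Eclipse MAT:)

```
# jhat needs a heap larger than the dump itself
jhat -J-Xmx4G java_pid7493.hprof
```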

"elasticsearch[sonar-1536059839067][scheduler][T#1]" daemon prio=5 tid=14 RUNNABLE
	at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
	at java.util.concurrent.ConcurrentHashMap.transfer(ConcurrentHashMap.java:2374)
	   Local Variable: java.util.concurrent.ConcurrentHashMap$Node[]#7
	at java.util.concurrent.ConcurrentHashMap.addCount(ConcurrentHashMap.java:2288)
	at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1070)
	at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006)
	at sun.util.resources.ParallelListResourceBundle.loadLookupTablesIfNecessary(ParallelListResourceBundle.java:169)
	   Local Variable: java.util.concurrent.ConcurrentHashMap#47
	   Local Variable: java.lang.Object[]#129
	   Local Variable: java.lang.Object[][]#1
	at sun.util.resources.ParallelListResourceBundle.handleKeySet(ParallelListResourceBundle.java:134)
	at sun.util.resources.ParallelListResourceBundle.keySet(ParallelListResourceBundle.java:143)
	at sun.util.resources.ParallelListResourceBundle.containsKey(ParallelListResourceBundle.java:129)
	   Local Variable: sun.text.resources.FormatData#1
	at sun.util.resources.ParallelListResourceBundle$KeySet.contains(ParallelListResourceBundle.java:208)
	   Local Variable: sun.util.resources.ParallelListResourceBundle$KeySet#2
	at sun.util.resources.ParallelListResourceBundle.containsKey(ParallelListResourceBundle.java:129)
	   Local Variable: sun.text.resources.en.FormatData_en#1
	at sun.util.resources.ParallelListResourceBundle$KeySet.contains(ParallelListResourceBundle.java:208)
	   Local Variable: sun.util.resources.ParallelListResourceBundle$KeySet#1
	at sun.util.resources.ParallelListResourceBundle.containsKey(ParallelListResourceBundle.java:129)
	   Local Variable: java.lang.String#13531
	at java.text.DateFormatSymbols.initializeData(DateFormatSymbols.java:716)
	   Local Variable: java.lang.ref.SoftReference#10
	   Local Variable: java.text.DateFormatSymbols#2
	   Local Variable: sun.text.resources.en.FormatData_en_US#1
	at java.text.DateFormatSymbols.<init>(DateFormatSymbols.java:145)
	   Local Variable: java.text.DateFormatSymbols#1
	at sun.util.locale.provider.DateFormatSymbolsProviderImpl.getInstance(DateFormatSymbolsProviderImpl.java:85)
	at java.text.DateFormatSymbols.getProviderInstance(DateFormatSymbols.java:364)
	   Local Variable: sun.util.locale.provider.JRELocaleProviderAdapter#1
	   Local Variable: sun.util.locale.provider.DateFormatSymbolsProviderImpl#1
	   Local Variable: java.util.Locale#1
	at java.text.DateFormatSymbols.getInstance(DateFormatSymbols.java:340)
	at java.util.Calendar.getDisplayName(Calendar.java:2110)
	   Local Variable: java.util.GregorianCalendar#1
	at java.text.SimpleDateFormat.subFormat(SimpleDateFormat.java:1125)
	at java.text.SimpleDateFormat.format(SimpleDateFormat.java:966)
	   Local Variable: java.text.DontCareFieldPosition$1#1
	at java.text.SimpleDateFormat.format(SimpleDateFormat.java:936)
	   Local Variable: java.lang.StringBuffer#1
	   Local Variable: java.text.DontCareFieldPosition#1
	at java.text.DateFormat.format(DateFormat.java:345)
	   Local Variable: java.util.Date#1
	   Local Variable: java.text.SimpleDateFormat#1
	at ch.qos.logback.core.util.CachingDateFormatter.format(CachingDateFormatter.java:49)
	   Local Variable: ch.qos.logback.core.util.CachingDateFormatter#1
	at ch.qos.logback.classic.pattern.DateConverter.convert(DateConverter.java:63)
	at ch.qos.logback.classic.pattern.DateConverter.convert(DateConverter.java:23)
	at ch.qos.logback.core.pattern.FormattingConverter.write(FormattingConverter.java:37)
	at ch.qos.logback.core.pattern.PatternLayoutBase.writeLoopOnConverters(PatternLayoutBase.java:119)
	   Local Variable: java.lang.StringBuilder#1
	   Local Variable: ch.qos.logback.classic.pattern.DateConverter#1
	at ch.qos.logback.classic.PatternLayout.doLayout(PatternLayout.java:149)
	at ch.qos.logback.classic.PatternLayout.doLayout(PatternLayout.java:39)
	   Local Variable: ch.qos.logback.classic.PatternLayout#1
	at ch.qos.logback.core.encoder.LayoutWrappingEncoder.doEncode(LayoutWrappingEncoder.java:134)
	   Local Variable: ch.qos.logback.classic.encoder.PatternLayoutEncoder#1
	at ch.qos.logback.core.OutputStreamAppender.writeOut(OutputStreamAppender.java:194)
	at ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:219)
	at ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:103)
	at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
	at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
	   Local Variable: java.util.concurrent.CopyOnWriteArrayList$COWIterator#1
	   Local Variable: ch.qos.logback.core.ConsoleAppender#1
	   Local Variable: ch.qos.logback.core.spi.AppenderAttachableImpl#1
	at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)
	at ch.qos.logback.classic.Logger.callAppenders(Logger.java:260)
	   Local Variable: ch.qos.logback.classic.Logger#1
	at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:442)
	   Local Variable: ch.qos.logback.classic.spi.LoggingEvent#1
	at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:396)
	   Local Variable: ch.qos.logback.core.spi.FilterReply#2
	at ch.qos.logback.classic.Logger.log(Logger.java:788)
	   Local Variable: ch.qos.logback.classic.Logger#210
	   Local Variable: ch.qos.logback.classic.Level#7
	   Local Variable: java.lang.String#1497
	at org.elasticsearch.common.logging.slf4j.Slf4jESLogger.internalInfo(Slf4jESLogger.java:125)
	   Local Variable: org.elasticsearch.common.logging.slf4j.Slf4jESLogger#64
	   Local Variable: java.lang.String#1225
	at org.elasticsearch.common.logging.support.AbstractESLogger.info(AbstractESLogger.java:81)
	at org.elasticsearch.monitor.jvm.JvmMonitorService$JvmMonitor.monitorLongGc(JvmMonitorService.java:205)
	   Local Variable: org.elasticsearch.monitor.jvm.JvmStats#2
	at org.elasticsearch.monitor.jvm.JvmMonitorService$JvmMonitor.run(JvmMonitorService.java:148)
	   Local Variable: org.elasticsearch.monitor.jvm.JvmMonitorService$JvmMonitor#1
	at org.elasticsearch.threadpool.ThreadPool$LoggingRunnable.run(ThreadPool.java:508)
	   Local Variable: org.elasticsearch.threadpool.ThreadPool$LoggingRunnable#1
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	   Local Variable: java.util.concurrent.Executors$RunnableAdapter#102
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	   Local Variable: java.util.concurrent.ScheduledThreadPoolExecutor#1
	   Local Variable: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask#104
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	   Local Variable: java.util.concurrent.ThreadPoolExecutor$Worker#26
	at java.lang.Thread.run(Thread.java:748)

  
"elasticsearch[sonar-1536059839067][listener][T#1]" daemon prio=5 tid=45 WAITING
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:737)
	   Local Variable: java.util.concurrent.LinkedTransferQueue$Node#4
	   Local Variable: java.util.concurrent.LinkedTransferQueue#7
	   Local Variable: java.util.concurrent.LinkedTransferQueue$Node#5
	at java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:647)
	at java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1269)
	at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
	   Local Variable: org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor#14
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	   Local Variable: java.util.concurrent.ThreadPoolExecutor$Worker#9
	at java.lang.Thread.run(Thread.java:748)

  
"elasticsearch[sonar-1536059839067][generic][T#4]" daemon prio=5 tid=40 RUNNABLE
	at java.util.Arrays.copyOfRange(Arrays.java:3664)
	   Local Variable: char[]#1224
	at java.lang.String.<init>(String.java:207)
	at org.apache.lucene.util.CharsRef.toString(CharsRef.java:210)
	at org.apache.lucene.util.CharsRefBuilder.toString(CharsRefBuilder.java:162)
	at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:286)
	at org.elasticsearch.index.translog.Translog$Delete.readFrom(Translog.java:609)
	at org.elasticsearch.index.translog.ChecksummedTranslogStream.read(ChecksummedTranslogStream.java:68)
	   Local Variable: org.elasticsearch.index.translog.BufferedChecksumStreamInput#2
	   Local Variable: org.elasticsearch.index.translog.Translog$Delete#2
	at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:267)
	   Local Variable: java.io.File#85
	   Local Variable: org.elasticsearch.common.io.stream.InputStreamStreamInput#1
	   Local Variable: java.util.HashSet#331
	   Local Variable: org.elasticsearch.index.gateway.local.LocalIndexShardGateway#17
	at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:112)
	   Local Variable: org.elasticsearch.indices.recovery.RecoveryState#18
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	   Local Variable: org.elasticsearch.index.gateway.IndexShardGatewayService$1#1
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	   Local Variable: java.util.concurrent.ThreadPoolExecutor$Worker#4
	at java.lang.Thread.run(Thread.java:748)

  
"elasticsearch[sonar-1536059839067][transport_client_timer][T#1]{Hashed wheel timer #1}" daemon prio=5 tid=21 RUNNABLE
	at org.elasticsearch.common.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
	   Local Variable: org.elasticsearch.common.netty.util.HashedWheelTimer$Worker#1
	at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	   Local Variable: java.lang.String#20722
	   Local Variable: org.elasticsearch.common.netty.util.ThreadRenamingRunnable#1
	   Local Variable: java.lang.String#20036
	at java.lang.Thread.run(Thread.java:748)

  
"Reference Handler" daemon prio=10 tid=2 BLOCKED
	at java.lang.Object.wait(Native Method)
	at java.lang.Object.wait(Object.java:502)
	at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
	at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)

  
"elasticsearch[sonar-1536059839067][transport_client_worker][T#1]{New I/O worker #1}" daemon prio=5 tid=17 RUNNABLE
	at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
	at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
	   Local Variable: sun.nio.ch.EPollArrayWrapper#7
	at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
	   Local Variable: java.util.Collections$UnmodifiableSet#49
	   Local Variable: sun.nio.ch.Util$3#7
	at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
	at org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
	at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
	at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
	   Local Variable: sun.nio.ch.EPollSelectorImpl#7
	at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
	at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
	   Local Variable: org.elasticsearch.common.netty.channel.socket.nio.NioWorker#5
	at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	   Local Variable: org.elasticsearch.common.netty.util.ThreadRenamingRunnable#11
	   Local Variable: java.lang.String#20743
	   Local Variable: java.lang.String#20038
	at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	   Local Variable: org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1#10
	   Local Variable: java.util.concurrent.ThreadPoolExecutor#6
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	   Local Variable: java.util.concurrent.ThreadPoolExecutor$Worker#18
	at java.lang.Thread.run(Thread.java:748)

  
"elasticsearch[sonar-1536059839067][transport_client_boss][T#1]{New I/O boss #5}" daemon prio=5 tid=22 RUNNABLE
	at java.util.Collections$UnmodifiableCollection.iterator(Collections.java:1038)
	   Local Variable: java.util.Collections$UnmodifiableSet#48
	at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:118)
	at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)
	at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
	   Local Variable: sun.nio.ch.EPollSelectorImpl#6
	at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
	   Local Variable: org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss#1
	at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	   Local Variable: java.lang.String#20034
	   Local Variable: java.lang.String#20735
	   Local Variable: org.elasticsearch.common.netty.util.ThreadRenamingRunnable#7
	at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	   Local Variable: org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1#6
	   Local Variable: java.util.concurrent.ThreadPoolExecutor#5
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	   Local Variable: java.util.concurrent.ThreadPoolExecutor$Worker#17
	at java.lang.Thread.run(Thread.java:748)

  
"Finalizer" daemon prio=8 tid=3 WAITING
	at java.lang.Object.wait(Native Method)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144)
	   Local Variable: java.lang.ref.ReferenceQueue#166
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165)
	at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:216)
	   Local Variable: java.lang.System$2#1

  
"elasticsearch[sonar-1536059839067][master_mapping_updater]" prio=5 tid=16 TIMED_WAITING
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:734)
	   Local Variable: java.util.concurrent.LinkedTransferQueue$Node#3
	   Local Variable: java.util.concurrent.LinkedTransferQueue$Node#10
	   Local Variable: java.util.concurrent.ThreadLocalRandom#1
	   Local Variable: java.util.concurrent.LinkedTransferQueue#15
	at java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:647)
	at java.util.concurrent.LinkedTransferQueue.poll(LinkedTransferQueue.java:1277)
	at org.elasticsearch.cluster.action.index.MappingUpdatedAction$MasterMappingUpdater.run(MappingUpdatedAction.java:382)
	   Local Variable: java.util.HashMap#1389

  
"elasticsearch[sonar-1536059839067][management][T#4]" daemon prio=5 tid=47 RUNNABLE

Do you think an upgrade will solve the problem?

I have upgraded from 5.6.5 to 6.7.5 and I don't see the problem anymore (at least for now :slight_smile:).
The same conditions in terms of memory and disk apply to the server.

//mike
