2019.08.23 00:47:29 DEBUG es[][o.e.b.SystemCallFilter] BSD RLIMIT_NPROC initialization successful
2019.08.23 00:47:29 DEBUG es[][o.e.b.SystemCallFilter] OS X seatbelt initialization successful
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.class.path: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-grouping-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-api-2.11.1.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-memory-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-6.8.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-sandbox-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-misc-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jopt-simple-5.0.2.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-backward-codecs-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-core-2.11.1.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-x-content-6.8.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-core-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-cli-6.8.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-secure-sm-6.8.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-1.2-api-2.11.1.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/java-version-checker-6.8.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/plugin-classloader-6.8.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/t-digest-3.2.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-analyzers-common-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-launchers-6.8.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/snakeyaml-1.17.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial3d-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jna-4.5.1.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-highlighter-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queries-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/spatial4j-0.7.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/hppc-0.7.1.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-suggest-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-core-6.8.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-extras-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/joda-time-2.10.1.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/HdrHistogram-2.1.9.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queryparser-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-core-2.8.11.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-join-7.7.0.jar:/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jts-core-1.15.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] sun.boot.class.path: null
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-grouping-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-memory-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-sandbox-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-misc-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jopt-simple-5.0.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-backward-codecs-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-core-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-x-content-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-core-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-cli-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-secure-sm-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/java-version-checker-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/plugin-classloader-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/t-digest-3.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-analyzers-common-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-launchers-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/snakeyaml-1.17.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial3d-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jna-4.5.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-highlighter-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queries-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/spatial4j-0.7.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/hppc-0.7.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-suggest-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-core-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-extras-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/joda-time-2.10.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/HdrHistogram-2.1.9.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queryparser-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-core-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-join-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jts-core-1.15.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.c.n.IfConfig] configuration: lo0 inet 127.0.0.1 netmask:255.0.0.0 scope:host inet6 fe80::1 prefixlen:64 scope:link inet6 ::1 prefixlen:128 scope:host UP MULTICAST LOOPBACK mtu:16384 index:1 en10 inet6 fe80::aede:48ff:fe00:1122 prefixlen:64 scope:link hardware AC:DE:48:00:11:22 UP MULTICAST mtu:1500 index:4 en0 inet 192.168.2.100 netmask:255.255.255.0 broadcast:192.168.2.255 scope:site inet 192.168.220.43 netmask:255.255.255.0 broadcast:192.168.220.255 scope:site inet6 2003:dc:df1b:988b:edc7:b91e:e554:828f prefixlen:64 inet6 2003:dc:df1b:988b:88a:5434:dcea:fc68 prefixlen:64 inet6 2003:dc:df1b:982b:965:4312:2091:235e prefixlen:64 inet6 2003:dc:df1b:982b:8c4:6d71:9b5f:5e84 prefixlen:64 inet6 2003:dc:df1b:98aa:9b:9285:cc39:44cc prefixlen:64 inet6 2003:dc:df1b:98aa:10e9:b180:5c26:9d58 prefixlen:64 inet6 fe80::14b0:5fec:df4d:4390 prefixlen:64 scope:link hardware F0:18:98:26:95:83 UP MULTICAST mtu:1500 index:6 en5 inet 192.168.2.199 netmask:255.255.255.0 broadcast:192.168.2.255 scope:site inet 192.168.220.42 netmask:255.255.255.0 broadcast:192.168.220.255 scope:site inet6 2003:dc:df1b:988b:20e7:b959:766e:eec4 prefixlen:64 inet6 2003:dc:df1b:988b:4d7:bb62:3949:aea prefixlen:64 inet6 fe80::c3d:7cf6:f9ff:2d4c prefixlen:64 scope:link hardware 48:65:EE:1A:19:6B UP MULTICAST mtu:1500 index:7 awdl0 inet6 fe80::3496:6ff:fe2e:bd4e prefixlen:64 scope:link hardware 36:96:06:2E:BD:4E UP MULTICAST mtu:1484 index:14 llw0 inet6 fe80::3496:6ff:fe2e:bd4e prefixlen:64 scope:link hardware 36:96:06:2E:BD:4E UP MULTICAST mtu:1500 index:15 utun0 inet6 fe80::45f2:133:2dfd:1b26 prefixlen:64 scope:link UP MULTICAST POINTOPOINT mtu:1380 index:16 utun1 inet6 fe80::558b:a4d3:590a:e816 prefixlen:64 scope:link UP MULTICAST POINTOPOINT mtu:2000 index:17 utun2 inet6 fe80::1043:bac1:3a57:62d3 prefixlen:64 scope:link UP MULTICAST POINTOPOINT mtu:1380 index:18 utun3 inet6 fe80::4ec:315a:59c8:90c2 prefixlen:64 scope:link UP MULTICAST POINTOPOINT mtu:1380 index:19
2019.08.23 00:47:29 DEBUG es[][o.e.e.NodeEnvironment] using node location [[NodePath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0, indicesPath=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices, fileStore=/ (/dev/disk2s5), majorDeviceNumber=-1, minorDeviceNumber=-1}]], local_lock_id [0]
2019.08.23 00:47:29 DEBUG es[][o.e.e.NodeEnvironment] node data locations details: -> /usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0, free_space [984.6gb], usable_space [86.7gb], total_space [1.8tb], mount [/ (/dev/disk2s5)], type [apfs]
2019.08.23 00:47:29 INFO es[][o.e.e.NodeEnvironment] heap size [989.8mb], compressed ordinary object pointers [true]
2019.08.23 00:47:29 INFO es[][o.e.n.Node] node name [sonarqube], node ID [I1tMMakFQYSWwmrZ3iaweQ]
2019.08.23 00:47:29 INFO es[][o.e.n.Node] version[6.8.0], pid[27897], build[default/tar/65b6179/2019-05-15T20:06:13.172855Z], OS[Mac OS X/10.15/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/12.0.2/12.0.2+10]
2019.08.23 00:47:29 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/usr/local/Cellar/sonarqube/7.9.1/libexec/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.enforce.bootstrap.checks=true, -Xms1024m, -Xmx1024m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch, -Des.path.conf=/usr/local/Cellar/sonarqube/7.9.1/libexec/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar]
2019.08.23 00:47:29 DEBUG es[][o.e.n.Node] using config [/usr/local/Cellar/sonarqube/7.9.1/libexec/temp/conf/es], data [[/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6]], logs [/usr/local/Cellar/sonarqube/7.9.1/libexec/logs], plugins [/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/plugins]
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/percolator/percolator-client-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-core-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jopt-simple-5.0.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-launchers-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queries-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queryparser-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-core-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/snakeyaml-1.17.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-core-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/hppc-0.7.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/spatial4j-0.7.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/percolator/percolator-client-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-extras-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-highlighter-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-join-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-secure-sm-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-core-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/plugin-classloader-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jna-4.5.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/HdrHistogram-2.1.9.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-backward-codecs-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-x-content-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/joda-time-2.10.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-analyzers-common-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/java-version-checker-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-suggest-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jts-core-1.15.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-sandbox-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-grouping-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial3d-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/t-digest-3.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-misc-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-memory-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-cli-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/repository-url/repository-url-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-core-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jopt-simple-5.0.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-launchers-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queries-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queryparser-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-core-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/snakeyaml-1.17.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-core-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/hppc-0.7.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/repository-url/repository-url-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/spatial4j-0.7.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-extras-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-highlighter-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-join-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-secure-sm-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-core-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/plugin-classloader-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jna-4.5.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/HdrHistogram-2.1.9.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-backward-codecs-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-x-content-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/joda-time-2.10.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-analyzers-common-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/java-version-checker-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-suggest-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jts-core-1.15.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-sandbox-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-grouping-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial3d-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/t-digest-3.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-misc-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-memory-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-cli-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/lang-painless-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/elasticsearch-scripting-painless-spi-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/asm-debug-all-5.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-core-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jopt-simple-5.0.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-launchers-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queries-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queryparser-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-core-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/snakeyaml-1.17.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-core-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/hppc-0.7.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/spatial4j-0.7.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-extras-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/asm-debug-all-5.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-highlighter-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-join-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/elasticsearch-scripting-painless-spi-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-secure-sm-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-core-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/plugin-classloader-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/lang-painless-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jna-4.5.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/HdrHistogram-2.1.9.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-backward-codecs-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-x-content-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/joda-time-2.10.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-analyzers-common-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/java-version-checker-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-suggest-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jts-core-1.15.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-sandbox-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-grouping-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial3d-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/t-digest-3.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-misc-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-memory-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-cli-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/parent-join/parent-join-client-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-core-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jopt-simple-5.0.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-launchers-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queries-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queryparser-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-core-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/snakeyaml-1.17.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-core-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/hppc-0.7.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/spatial4j-0.7.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/parent-join/parent-join-client-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-extras-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-highlighter-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-join-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-secure-sm-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-core-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/plugin-classloader-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jna-4.5.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/HdrHistogram-2.1.9.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-backward-codecs-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-x-content-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/joda-time-2.10.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-api-2.11.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-analyzers-common-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/java-version-checker-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-suggest-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jts-core-1.15.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-sandbox-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-grouping-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial3d-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/t-digest-3.2.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-misc-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-memory-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-cli-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/commons-logging-1.1.3.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/elasticsearch-rest-client-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/elasticsearch-ssl-config-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/httpcore-4.4.5.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/reindex-client-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/commons-codec-1.10.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/httpclient-4.5.2.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/httpasyncclient-4.1.2.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/httpcore-nio-4.4.5.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-core-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jopt-simple-5.0.2.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-launchers-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queries-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queryparser-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/elasticsearch-ssl-config-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-core-2.11.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/snakeyaml-1.17.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-core-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/httpcore-4.4.5.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/hppc-0.7.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/spatial4j-0.7.jar 2019.08.23 00:47:29 
DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-extras-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-highlighter-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-join-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-secure-sm-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-core-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/elasticsearch-rest-client-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-1.2-api-2.11.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/plugin-classloader-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jna-4.5.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/HdrHistogram-2.1.9.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-backward-codecs-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-x-content-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/joda-time-2.10.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-api-2.11.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/httpclient-4.5.2.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/httpasyncclient-4.1.2.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/httpcore-nio-4.4.5.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-analyzers-common-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/commons-logging-1.1.3.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/java-version-checker-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-suggest-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jts-core-1.15.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-sandbox-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-grouping-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial3d-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/t-digest-3.2.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/reindex-client-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-misc-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/reindex/commons-codec-1.10.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-memory-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-cli-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/lang-painless-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/elasticsearch-scripting-painless-spi-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/asm-debug-all-5.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/analysis-common/analysis-common-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/lang-painless-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/elasticsearch-scripting-painless-spi-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/lang-painless/asm-debug-all-5.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-core-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jopt-simple-5.0.2.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-launchers-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queries-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] 
examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queryparser-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-core-2.11.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/snakeyaml-1.17.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-core-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/hppc-0.7.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/spatial4j-0.7.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-extras-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-highlighter-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-join-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-secure-sm-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-core-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/analysis-common/analysis-common-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-1.2-api-2.11.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/plugin-classloader-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jna-4.5.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/HdrHistogram-2.1.9.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-backward-codecs-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-x-content-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/joda-time-2.10.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-api-2.11.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-analyzers-common-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/java-version-checker-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-suggest-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jts-core-1.15.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-sandbox-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-grouping-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial3d-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/t-digest-3.2.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-misc-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-memory-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-cli-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/transport-netty4-client-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-codec-4.1.32.Final.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-common-4.1.32.Final.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-buffer-4.1.32.Final.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-handler-4.1.32.Final.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-codec-http-4.1.32.Final.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-resolver-4.1.32.Final.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-transport-4.1.32.Final.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-core-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jopt-simple-5.0.2.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/transport-netty4-client-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-launchers-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queries-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queryparser-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-core-2.11.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/snakeyaml-1.17.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-core-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-buffer-4.1.32.Final.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-handler-4.1.32.Final.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/hppc-0.7.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/spatial4j-0.7.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-transport-4.1.32.Final.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-extras-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-highlighter-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-join-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-secure-sm-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-core-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-1.2-api-2.11.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/plugin-classloader-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jna-4.5.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/HdrHistogram-2.1.9.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-backward-codecs-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-x-content-6.8.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/joda-time-2.10.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-api-2.11.1.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-analyzers-common-7.7.0.jar 2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: 
/usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/java-version-checker-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-codec-4.1.32.Final.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-suggest-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jts-core-1.15.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-sandbox-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-grouping-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial3d-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/t-digest-3.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-misc-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-common-4.1.32.Final.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-codec-http-4.1.32.Final.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/transport-netty4/netty-resolver-4.1.32.Final.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-memory-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-cli-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/mapper-extras/mapper-extras-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] java.home: /Library/Java/JavaVirtualMachines/jdk-12.0.2.jdk/Contents/Home
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-core-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jopt-simple-5.0.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-launchers-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queries-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-queryparser-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-core-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/snakeyaml-1.17.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-core-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/hppc-0.7.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/spatial4j-0.7.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-extras-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-highlighter-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-join-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-secure-sm-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-core-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/plugin-classloader-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jna-4.5.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/HdrHistogram-2.1.9.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-backward-codecs-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-x-content-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/joda-time-2.10.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/log4j-api-2.11.1.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-analyzers-common-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/java-version-checker-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-suggest-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jts-core-1.15.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-sandbox-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-grouping-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-spatial3d-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/t-digest-3.2.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-misc-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/modules/mapper-extras/mapper-extras-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-6.8.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/lucene-memory-7.7.0.jar
2019.08.23 00:47:29 DEBUG es[][o.e.b.JarHell] examining jar: /usr/local/Cellar/sonarqube/7.9.1/libexec/elasticsearch/lib/elasticsearch-cli-6.8.0.jar
2019.08.23 00:47:29 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2019.08.23 00:47:29 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2019.08.23 00:47:29 INFO es[][o.e.p.PluginsService] loaded module [mapper-extras]
2019.08.23 00:47:29 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2019.08.23 00:47:29 INFO es[][o.e.p.PluginsService] loaded module [percolator]
2019.08.23 00:47:29 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2019.08.23 00:47:29 INFO es[][o.e.p.PluginsService] loaded module [repository-url]
2019.08.23 00:47:29 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2019.08.23 00:47:29 INFO es[][o.e.p.PluginsService] no plugins loaded
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [24], keep alive [5m]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [listener], size [6], queue size [unbounded]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [index], size [12], queue size [200]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [6], keep alive [5m]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [5], keep alive [5m]
2019.08.23 00:47:30 DEBUG es[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [sonarqube/search] will adjust queue by [50] when determining automatic queue size
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search], size [19], queue size [1k]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [5], keep alive [5m]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [24], keep alive [5m]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [5], keep alive [5m]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [get], size [12], queue size [1k]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [analyze], size [1], queue size [16]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [write], size [12], queue size [200]
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [5], keep alive [5m]
2019.08.23 00:47:30 DEBUG es[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [sonarqube/search_throttled] will adjust queue by [50] when determining automatic queue size
2019.08.23 00:47:30 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_throttled], size [1], queue size [100]
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent] Platform: MacOS
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: true
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe: unavailable (io.netty.noUnsafe)
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent0] Java version: 12
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): unavailable
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent] maxDirectMemory: 1037959168 bytes (maybe)
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /usr/local/Cellar/sonarqube/7.9.1/libexec/temp (java.io.tmpdir)
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.CleanerJava9] java.nio.ByteBuffer.cleaner(): unavailable
java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable
	at io.netty.util.internal.CleanerJava9.<clinit>(CleanerJava9.java:68) [netty-common-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:172) [netty-common-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.util.ConstantPool.<init>(ConstantPool.java:32) [netty-common-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27) [netty-common-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27) [netty-common-4.1.32.Final.jar:4.1.32.Final]
	at org.elasticsearch.transport.netty4.Netty4Transport.<clinit>(Netty4Transport.java:219) [transport-netty4-client-6.8.0.jar:6.8.0]
	at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:57) [transport-netty4-client-6.8.0.jar:6.8.0]
	at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:89) [elasticsearch-6.8.0.jar:6.8.0]
	at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271) [?:?]
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1654) [?:?]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) [?:?]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) [?:?]
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) [?:?]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [?:?]
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) [?:?]
	at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:89) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.node.Node.<init>(Node.java:356) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.node.Node.<init>(Node.java:266) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:212) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:212) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) [elasticsearch-cli-6.8.0.jar:6.8.0]
	at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.8.0.jar:6.8.0]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) [elasticsearch-6.8.0.jar:6.8.0]
2019.08.23 00:47:30 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: true
2019.08.23 00:47:30 DEBUG es[][o.e.s.ScriptService] using script cache with max_size [100], expire [0s]
2019.08.23 00:47:31 WARN es[][o.e.d.c.s.Settings] [http.enabled] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
2019.08.23 00:47:31 DEBUG es[][o.e.m.j.JvmGcMonitorService] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
2019.08.23 00:47:31 DEBUG es[][o.e.m.o.OsService] using refresh_interval [1s]
2019.08.23 00:47:31 DEBUG es[][o.e.m.p.ProcessService] using refresh_interval [1s]
2019.08.23 00:47:31 DEBUG es[][o.e.m.j.JvmService] using refresh_interval [1s]
2019.08.23 00:47:31 DEBUG es[][o.e.m.f.FsService] using refresh_interval [1s]
2019.08.23 00:47:31 DEBUG es[][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2019.08.23 00:47:31 DEBUG es[][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] using [cluster_concurrent_rebalance] with [2]
2019.08.23 00:47:31 DEBUG es[][o.e.c.r.a.d.ThrottlingAllocationDecider] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
2019.08.23 00:47:31 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [98.9mb] max filter count [10000]
2019.08.23 00:47:31 DEBUG es[][o.e.i.IndexingMemoryController] using indexing buffer size [98.9mb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
2019.08.23 00:47:31 DEBUG es[][o.e.g.GatewayMetaState] took 9ms to load state
2019.08.23 00:47:31 DEBUG es[][o.e.d.z.SettingsBasedHostsProvider] using initial hosts [127.0.0.1, [::1]]
2019.08.23 00:47:31 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and host providers [settings]
2019.08.23 00:47:31 DEBUG es[][o.e.d.z.UnicastZenPing] using concurrent_connects [10], resolve_timeout [5s]
2019.08.23 00:47:31 DEBUG es[][o.e.d.z.ElectMasterService] using minimum_master_nodes [1]
2019.08.23 00:47:31 DEBUG es[][o.e.d.z.ZenDiscovery] using ping_timeout [3s], join.timeout [1m], master_election.ignore_non_master [false]
2019.08.23 00:47:31 DEBUG es[][o.e.d.z.MasterFaultDetection] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
2019.08.23 00:47:31 DEBUG es[][o.e.d.z.NodesFaultDetection] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
2019.08.23 00:47:31 DEBUG es[][o.e.i.r.RecoverySettings] using max_bytes_per_sec[40mb]
2019.08.23 00:47:31 INFO es[][o.e.n.Node] initialized
2019.08.23 00:47:31 INFO es[][o.e.n.Node] starting ...
2019.08.23 00:47:31 DEBUG es[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 24
2019.08.23 00:47:32 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: true
2019.08.23 00:47:32 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2019.08.23 00:47:32 DEBUG es[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: unavailable
2019.08.23 00:47:32 DEBUG es[][o.e.t.n.Netty4Transport] using profile[default], worker_count[24], port[9001], bind_host[[127.0.0.1]], publish_host[[127.0.0.1]], receive_predictor[64kb->64kb]
2019.08.23 00:47:32 DEBUG es[][o.e.t.TcpTransport] binding server bootstrap to: [127.0.0.1]
2019.08.23 00:47:32 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.processId: 27897 (auto-detected)
2019.08.23 00:47:32 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv4Stack: false
2019.08.23 00:47:32 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv6Addresses: false
2019.08.23 00:47:32 DEBUG es[][i.n.u.NetUtil] Loopback interface: lo0 (lo0, 0:0:0:0:0:0:0:1%lo0)
2019.08.23 00:47:32 DEBUG es[][i.n.u.NetUtil] Failed to get SOMAXCONN from sysctl and file /proc/sys/net/core/somaxconn. Default: 128
2019.08.23 00:47:32 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 48:65:ee:ff:fe:1a:19:6b (auto-detected)
2019.08.23 00:47:32 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2019.08.23 00:47:32 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2019.08.23 00:47:32 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2019.08.23 00:47:32 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.targetRecords: 4
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 10
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 10
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2019.08.23 00:47:32 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2019.08.23 00:47:32 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2019.08.23 00:47:32 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
2019.08.23 00:47:32 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2019.08.23 00:47:32 DEBUG es[][o.e.t.TcpTransport] Bound profile [default] to address {127.0.0.1:9001}
2019.08.23 00:47:32 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2019.08.23 00:47:32 INFO es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2019.08.23 00:47:32 DEBUG es[][o.e.n.Node] waiting to join the cluster. timeout [30s]
2019.08.23 00:47:33 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxCapacityPerThread: disabled
2019.08.23 00:47:33 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxSharedCapacityFactor: disabled
2019.08.23 00:47:33 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.linkCapacity: disabled
2019.08.23 00:47:33 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.ratio: disabled
2019.08.23 00:47:33 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkAccessible: true
2019.08.23 00:47:33 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkBounds: true
2019.08.23 00:47:33 DEBUG es[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@7ec9c466
2019.08.23 00:47:33 DEBUG es[][o.e.a.a.c.h.TransportClusterHealthAction] no known master node, scheduling a retry
2019.08.23 00:47:35 DEBUG es[][o.e.d.z.ZenDiscovery] filtered ping responses: (ignore_non_masters [false]) --> ping_response{node [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}], id[7], master [null],cluster_state_version [-1], cluster_name[sonarqube]}
2019.08.23 00:47:35 DEBUG es[][o.e.d.z.ZenDiscovery] elected as master, waiting for incoming joins ([0] needed)
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [zen-disco-elected-as-master ([0] nodes joined)]: execute
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [1], source [zen-disco-elected-as-master ([0] nodes joined)]
2019.08.23 00:47:35 INFO es[][o.e.c.s.MasterService] zen-disco-elected-as-master ([0] nodes joined), reason:
new_master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [1]
2019.08.23 00:47:35 DEBUG es[][o.e.d.z.ZenDiscovery] got first state from fresh master [I1tMMakFQYSWwmrZ3iaweQ]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])]: execute
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [1], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])]
2019.08.23 00:47:35 INFO es[][o.e.c.s.ClusterApplierService] new_master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}, reason: apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 1
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 1
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 1
2019.08.23 00:47:35 INFO es[][o.e.n.Node] started
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])]: took [8ms] done applying updated cluster state (version: 1, uuid: f0oiRNgCSZiQkccXii-_fA)
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [zen-disco-elected-as-master ([0] nodes joined)]: took [27ms] done publishing updated cluster state (version: 1, uuid: f0oiRNgCSZiQkccXii-_fA)
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [update snapshot state after node removal]: execute
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [update snapshot state after node removal]: took [0s] no change in cluster state
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [98.9mb] max filter count [10000]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesService] creating Index [[components/TPwmMZYuQQqmtVRoqx4Uwg]], shards [5]/[0] - reason [metadata verification]
2019.08.23 00:47:35 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [98.9mb] max filter count [10000]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesService] creating Index [[rules/_U9_HYtTTOmGosf1Chf5TQ]], shards [2]/[0] - reason [metadata verification]
2019.08.23 00:47:35 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [98.9mb] max filter count [10000]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesService] creating Index [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]], shards [1]/[0] - reason [metadata verification]
2019.08.23 00:47:35 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [98.9mb] max filter count [10000]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesService] creating Index [[views/NlCIf77XRtq3dgewPHB03g]], shards [5]/[0] - reason [metadata verification]
2019.08.23 00:47:35 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [98.9mb] max filter count [10000]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesService] creating Index [[issues/TBmEmRS7QsKk1yDUlglMag]], shards [5]/[0] - reason [metadata verification]
2019.08.23 00:47:35 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [98.9mb] max filter count [10000]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesService] creating Index [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]], shards [5]/[0] - reason [metadata verification]
2019.08.23 00:47:35 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [local-gateway-elected-state]: execute
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/0]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][3] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/3], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/3]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][2] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/2], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/2]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][4] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/4], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/4]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][1] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/1], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/1]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][2] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/2], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/2]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/0]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][3] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/3], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/3]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][1] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/1], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/1]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][4] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/4], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/4]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][1] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/_U9_HYtTTOmGosf1Chf5TQ/1], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/_U9_HYtTTOmGosf1Chf5TQ/1]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][4] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/4], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/4]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/_U9_HYtTTOmGosf1Chf5TQ/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/_U9_HYtTTOmGosf1Chf5TQ/0]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/0]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][2] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/2], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/2]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][3] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/3], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/3]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][1] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/1], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/1]
2019.08.23 00:47:35 DEBUG es[][o.e.c.r.a.a.BalancedShardsAllocator] skipping rebalance due to in-flight shard/store fetches
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][4] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/4], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/4]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [2], source [local-gateway-elected-state]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [2]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [2] source [local-gateway-elected-state]])]: execute
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [2], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [2] source [local-gateway-elected-state]])]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 2
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 2
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][1] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/1], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/1]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/0]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [metadatas][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/HtzuEzqcRAank_tKjNQ0NQ/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/HtzuEzqcRAank_tKjNQ0NQ/0]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][3] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/3], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/3]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][2] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/2], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/2]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[users/V32Q5YbdRFq8mRdIkCKsaw]] cleaning index, no longer part of the metadata
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[users/VOXUo5zSTz6KJv_mSad_3g]] cleaning index, no longer part of the metadata
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][3] shard state info found: [primary [true], allocation [[id=b9q2HnpLTJ-i0bA7NYhJnQ]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][0] shard state info found: [primary [true], allocation [[id=ttF8NBKkRdqP58cMygTI9w]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][4] shard state info found: [primary [true], allocation [[id=_AXuADNOTA6OY4SGdXY03A]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][0] shard state info found: [primary [true], allocation [[id=u9PoXBz7RnmiYFrSp0RY2A]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][3] shard state info found: [primary [true], allocation [[id=kldfU2lRR7G6bysmQ2F7nA]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][2] shard state info found: [primary [true], allocation [[id=IGCiyzVrTLmkiG8ioymnHQ]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][4] shard state info found: [primary [true], allocation [[id=XI7oVT9BTxi3pwBUczdWhA]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][2] shard state info found: [primary [true], allocation [[id=kPw6C4KcS1yaLM5ZEbWYWA]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][1] shard state info found: [primary [true], allocation [[id=QiBlwaS8S4irZL3_H2RMTQ]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][1] shard state info found: [primary [true], allocation [[id=B2DYUVYFS4Ol-s82vMn4jg]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][3] shard state info found: [primary [true], allocation [[id=CXRhRnXoQhyfNBmS5Y5JeQ]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][1] shard state info found: [primary [true], allocation [[id=-NjrKFaSRR2UdvfY0ppwkQ]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][3] shard state info found: [primary [true], allocation [[id=x9vJP5wtQvOcvRX2hJ4klA]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][1] shard state info found: [primary [true], allocation [[id=8QfI44XCQRmQ045PT7QLrw]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [metadatas][0] shard state info found: [primary [true], allocation [[id=EWVVoXKtReC01s30YTT92w]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][2] shard state info found: [primary [true], allocation [[id=--xK9rstQdO7WRkQj4n8rA]]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][2] shard state info found: [primary [true], allocation [[id=I2jjVtHsQEuvpXo8XHqwqQ]]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][0] shard state info found: [primary [true], allocation [[id=ZIkWRbkYRQm0Ea97FVXoDg]]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][4] shard state info found: [primary [true], allocation [[id=f389_ZJ9Qbq2k16Gd31PLw]]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][0] shard state info found: [primary [true], allocation [[id=mwAcEDEDS5agNguNPCkF9A]]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][1] shard state info found: [primary [true], allocation [[id=aoX1x544Q9KShyYCx_iIGw]]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][0] shard state info found: [primary [true], allocation [[id=krkR_pDSSVKdPumpgwqdpg]]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][4] shard state info found: [primary [true], allocation [[id=eONJIbZZRCKBiYYhHXmm_Q]]] 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 2 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [2] source [local-gateway-elected-state]])]: took [146ms] done applying updated cluster state (version: 2, uuid: 0mCSk8EdQSW4E2gqdFsNCA) 2019.08.23 00:47:35 INFO es[][o.e.g.GatewayService] recovered [6] indices into cluster_state 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [local-gateway-elected-state]: took [210ms] done publishing updated cluster state (version: 2, uuid: 0mCSk8EdQSW4E2gqdFsNCA) 
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [cluster_reroute(async_shard_fetch)]: execute
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][0]: found 1 allocation candidates of [views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[mwAcEDEDS5agNguNPCkF9A]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][0]: allocating [[views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][4]: found 1 allocation candidates of [views][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[_AXuADNOTA6OY4SGdXY03A]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][4]: allocating [[views][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][3]: found 1 allocation candidates of [views][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[x9vJP5wtQvOcvRX2hJ4klA]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][3]: allocating [[views][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][1]: found 1 allocation candidates of [views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[8QfI44XCQRmQ045PT7QLrw]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][1]: allocating [[views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][2]: found 1 allocation candidates of [views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[I2jjVtHsQEuvpXo8XHqwqQ]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][2]: throttling allocation [[views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@d8e1900]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[f389_ZJ9Qbq2k16Gd31PLw]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][4]: throttling allocation [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@48a271fa]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[b9q2HnpLTJ-i0bA7NYhJnQ]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][3]: throttling allocation [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@50e2565]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[ttF8NBKkRdqP58cMygTI9w]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][0]: throttling allocation [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@266b04c2]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[-NjrKFaSRR2UdvfY0ppwkQ]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7fd86eaf]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[kPw6C4KcS1yaLM5ZEbWYWA]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@655cccc3]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[B2DYUVYFS4Ol-s82vMn4jg]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@9adf9ac]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[ZIkWRbkYRQm0Ea97FVXoDg]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6ee397dd]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[--xK9rstQdO7WRkQj4n8rA]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@35722df4]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[eONJIbZZRCKBiYYhHXmm_Q]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3f105f90]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[QiBlwaS8S4irZL3_H2RMTQ]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@41147d37]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[CXRhRnXoQhyfNBmS5Y5JeQ]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3dfbdb41]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[krkR_pDSSVKdPumpgwqdpg]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6d5a0ca5]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[XI7oVT9BTxi3pwBUczdWhA]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@573176c1]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[aoX1x544Q9KShyYCx_iIGw]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@701f515d]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[IGCiyzVrTLmkiG8ioymnHQ]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@9fa7155]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[kldfU2lRR7G6bysmQ2F7nA]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@872554c]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[u9PoXBz7RnmiYFrSp0RY2A]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@9902e0d]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[EWVVoXKtReC01s30YTT92w]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7fa9a864]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [3], source [cluster_reroute(async_shard_fetch)]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [3]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [3] source [cluster_reroute(async_shard_fetch)]])]: execute
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [3], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [3] source [cluster_reroute(async_shard_fetch)]])]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 3
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 3
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[views/NlCIf77XRtq3dgewPHB03g]] creating index
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndicesService] creating Index [[views/NlCIf77XRtq3dgewPHB03g]], shards [5]/[0] - reason [create index]
2019.08.23 00:47:35 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2019.08.23 00:47:35 DEBUG es[][o.e.i.m.MapperService] [[views/NlCIf77XRtq3dgewPHB03g]] added mapping [view], source [{"view":{"dynamic":"false","properties":{"projects":{"type":"keyword"},"uuid":{"type":"keyword"}}}}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][1] creating shard with primary term [25]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [views][1] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/1], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/1]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [views][1] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/1, shard=[views][1]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] creating shard_id [views][1]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][3] creating shard with primary term [25]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [views][3] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/3], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/3]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [views][3] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/3, shard=[views][3]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] creating shard_id [views][3]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=yWtQVqRfS2KBeHKniN4gDg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=zW_yVgJoSkiFC7AhGjQ6sA, translog_generation=5, translog_uuid=dSGp-m5ISIq42Z320gj-Vw}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][4] creating shard with primary term [25]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [views][4] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/4], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/4]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [views][4] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/4, shard=[views][4]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] creating shard_id [views][4]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=Ulx-y4pqRVSV5vMCy4NCLw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=1BLNHRpWQFWrDlVXJo3A1g, translog_generation=5, translog_uuid=r34Mw2bTQu2XYcU7bJ1mww}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][0] creating shard with primary term [25]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [views][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/0]
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [views][0] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/0, shard=[views][0]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] creating shard_id [views][0]
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1,
globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, 
minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=8o9IldpnSty8xh5izKC7Tg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=9aXDwIOGTqK0USwUin286Q, translog_generation=5, translog_uuid=XzOHtCoYT1eC8Hg5oTpYOQ}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 
00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, 
globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from 
checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG 
es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=goEelBE9R8aKGojZSeILaA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=5VZXg87PTYeyfDpJ1To7hw, translog_generation=5, translog_uuid=jyvLWB8QQHyvsWXmt7oTNw}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 3 2019.08.23 00:47:35 DEBUG 
es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [3] source [cluster_reroute(async_shard_fetch)]])]: took [163ms] done applying updated cluster state (version: 3, uuid: DYPh1XLwQtiK7CFv3HPg_w) 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [cluster_reroute(async_shard_fetch)]: took [179ms] done publishing updated cluster state (version: 3, uuid: DYPh1XLwQtiK7CFv3HPg_w) 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=Ulx-y4pqRVSV5vMCy4NCLw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=1BLNHRpWQFWrDlVXJo3A1g, translog_generation=5, translog_uuid=r34Mw2bTQu2XYcU7bJ1mww}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=Ulx-y4pqRVSV5vMCy4NCLw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=1BLNHRpWQFWrDlVXJo3A1g, translog_generation=5, translog_uuid=r34Mw2bTQu2XYcU7bJ1mww}]}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, 
minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=yWtQVqRfS2KBeHKniN4gDg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=zW_yVgJoSkiFC7AhGjQ6sA, translog_generation=5, translog_uuid=dSGp-m5ISIq42Z320gj-Vw}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=yWtQVqRfS2KBeHKniN4gDg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=zW_yVgJoSkiFC7AhGjQ6sA, translog_generation=5, translog_uuid=dSGp-m5ISIq42Z320gj-Vw}]}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, 
maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [188ms] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [148ms] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] received shard started for [StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] received shard started for [StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], 
allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute 2019.08.23 00:47:35 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=8o9IldpnSty8xh5izKC7Tg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=9aXDwIOGTqK0USwUin286Q, translog_generation=5, translog_uuid=XzOHtCoYT1eC8Hg5oTpYOQ}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=8o9IldpnSty8xh5izKC7Tg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=9aXDwIOGTqK0USwUin286Q, translog_generation=5, translog_uuid=XzOHtCoYT1eC8Hg5oTpYOQ}]}] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] starting shard [views][1], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=8QfI44XCQRmQ045PT7QLrw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] starting shard [views][3], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=x9vJP5wtQvOcvRX2hJ4klA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, 
allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][2]: found 1 allocation candidates of [views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[I2jjVtHsQEuvpXo8XHqwqQ]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/NlCIf77XRtq3dgewPHB03g]][2]: allocating [[views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[f389_ZJ9Qbq2k16Gd31PLw]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][4]: allocating [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to 
[{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kPw6C4KcS1yaLM5ZEbWYWA]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@538f2464]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[ttF8NBKkRdqP58cMygTI9w]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][0]: throttling allocation [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@745abed2]] on primary allocation 2019.08.23 00:47:35 DEBUG 
es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[-NjrKFaSRR2UdvfY0ppwkQ]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@28db49ac]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[b9q2HnpLTJ-i0bA7NYhJnQ]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][3]: throttling allocation [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@74b01a88]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] recovery completed 
from [shard_store], took [141ms] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[B2DYUVYFS4Ol-s82vMn4jg]] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6be8b0f4]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] received shard started for [StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[ZIkWRbkYRQm0Ea97FVXoDg]] 2019.08.23 00:47:35 DEBUG 
es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@266a385d]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[QiBlwaS8S4irZL3_H2RMTQ]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@71bc2a57]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[--xK9rstQdO7WRkQj4n8rA]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: throttling allocation 
[[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4fd5245]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[eONJIbZZRCKBiYYhHXmm_Q]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@21b989d5]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[CXRhRnXoQhyfNBmS5Y5JeQ]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; 
bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2ddd535e]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[krkR_pDSSVKdPumpgwqdpg]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4e97e9eb]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[aoX1x544Q9KShyYCx_iIGw]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], 
delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@289aaba3]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[u9PoXBz7RnmiYFrSp0RY2A]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@132fd69f]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[XI7oVT9BTxi3pwBUczdWhA]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@53a790be]] on primary 
allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[IGCiyzVrTLmkiG8ioymnHQ]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@699cae78]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kldfU2lRR7G6bysmQ2F7nA]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4786eb68]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: found 1 allocation 
candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EWVVoXKtReC01s30YTT92w]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1f01eb1d]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [4], source [shard-started StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [4] 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [4] source [shard-started StartedShardEntry{shardId [[views][3]], 
allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [4], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [4] source [shard-started StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])] 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 4 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 4 2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[issues/TBmEmRS7QsKk1yDUlglMag]] creating index 2019.08.23 00:47:35 DEBUG 
es[][o.e.i.IndicesService] creating Index [[issues/TBmEmRS7QsKk1yDUlglMag]], shards [5]/[0] - reason [create index] 2019.08.23 00:47:35 DEBUG es[][o.e.i.m.MapperService] using dynamic[true] 2019.08.23 00:47:35 DEBUG es[][o.e.i.m.MapperService] [[issues/TBmEmRS7QsKk1yDUlglMag]] added mapping [auth] (source suppressed due to length, use TRACE level if needed) 2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][4] creating shard with primary term [25] 2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [issues][4] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/4], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/4] 2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [issues][4] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/4, shard=[issues][4]}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][4] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2019.08.23 00:47:35 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=goEelBE9R8aKGojZSeILaA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=5VZXg87PTYeyfDpJ1To7hw, translog_generation=5, translog_uuid=jyvLWB8QQHyvsWXmt7oTNw}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=goEelBE9R8aKGojZSeILaA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=5VZXg87PTYeyfDpJ1To7hw, translog_generation=5, translog_uuid=jyvLWB8QQHyvsWXmt7oTNw}]}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] recovery completed from 
[shard_store], took [158ms] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] received shard started for [StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][2] creating shard with primary term [25] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [views][2] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/2], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/2] 2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [views][2] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NlCIf77XRtq3dgewPHB03g/2, shard=[views][2]}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] creating shard_id [views][2] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=_CS5ogJ2T-OdYP3EE-LZKw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=oKjMvfAyTtqdHkBytVCEEw, translog_generation=5, translog_uuid=O181HxgxTP2_qXAlJ0k3Zw}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] open uncommitted 
translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, 
trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, 
generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] received shard started for [StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master 
{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] received shard started for [StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=kyO86Ds_TzixSM4S1pZVPg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Kb2Bm_8XReG-H00BlJM_Tw, translog_generation=5, translog_uuid=Xk0ClVGFRWeD3RFe2W8JJA}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, 
trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, 
generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=_CS5ogJ2T-OdYP3EE-LZKw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=oKjMvfAyTtqdHkBytVCEEw, translog_generation=5, translog_uuid=O181HxgxTP2_qXAlJ0k3Zw}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=_CS5ogJ2T-OdYP3EE-LZKw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=oKjMvfAyTtqdHkBytVCEEw, translog_generation=5, translog_uuid=O181HxgxTP2_qXAlJ0k3Zw}]}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 4 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] 
processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [4] source [shard-started StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [122ms] done applying updated cluster state (version: 4, uuid: Ko9kHQ4ATw-Gey1r7jbGjQ) 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [x9vJP5wtQvOcvRX2hJ4klA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [8QfI44XCQRmQ045PT7QLrw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [131ms] done publishing updated cluster state (version: 4, uuid: Ko9kHQ4ATw-Gey1r7jbGjQ) 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, 
minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId 
[[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] starting shard [views][4], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=_AXuADNOTA6OY4SGdXY03A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] starting shard [views][0], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=mwAcEDEDS5agNguNPCkF9A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[b9q2HnpLTJ-i0bA7NYhJnQ]]
2019.08.23 00:47:35 DEBUG
es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][3]: allocating [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[ttF8NBKkRdqP58cMygTI9w]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][0]: allocating [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, 
allocation_status[deciders_throttled]] based on allocation ids: [[kPw6C4KcS1yaLM5ZEbWYWA]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3d820be2]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[-NjrKFaSRR2UdvfY0ppwkQ]]
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1d55d2e2]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], 
unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[B2DYUVYFS4Ol-s82vMn4jg]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5fdfdb0a]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[ZIkWRbkYRQm0Ea97FVXoDg]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@36fc3aaa]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: 
[[--xK9rstQdO7WRkQj4n8rA]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@fdaa205]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[krkR_pDSSVKdPumpgwqdpg]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3494529a]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[eONJIbZZRCKBiYYhHXmm_Q]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] 
[[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@21ed5e50]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[QiBlwaS8S4irZL3_H2RMTQ]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@44bfc2c]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[CXRhRnXoQhyfNBmS5Y5JeQ]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: throttling allocation [[projectmeasures][3], 
node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5cfcce7f]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[aoX1x544Q9KShyYCx_iIGw]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7e4add9c]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[u9PoXBz7RnmiYFrSp0RY2A]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], 
unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1368031e]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kldfU2lRR7G6bysmQ2F7nA]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@41db77d4]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[IGCiyzVrTLmkiG8ioymnHQ]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to 
[[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@561a1516]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[XI7oVT9BTxi3pwBUczdWhA]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@afc7910]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EWVVoXKtReC01s30YTT92w]]
2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@450a0ab1]] on primary allocation
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] 
state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [117ms]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [5], source [shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], 
primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [5]
2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] received shard started for [StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [5] source [shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master 
{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [5], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [5] source [shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master 
{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 5
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 5
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][3] creating shard with primary term [25]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [issues][3] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/3], state path 
[/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/3]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [issues][3] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/3, shard=[issues][3]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][3]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] received shard started for [StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][0] creating shard with primary term [25]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [issues][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/0], state path 
[/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/0]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [issues][0] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/0, shard=[issues][0]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][0]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=DrMB2eZXTPa23VjTJ8oH6w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=-e6ZDybpStKQ9RtgV1H_fw, translog_generation=5, translog_uuid=qeGBlSBoSvSWxga5y7pBGQ}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=kyO86Ds_TzixSM4S1pZVPg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Kb2Bm_8XReG-H00BlJM_Tw, translog_generation=5, translog_uuid=Xk0ClVGFRWeD3RFe2W8JJA}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=kyO86Ds_TzixSM4S1pZVPg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Kb2Bm_8XReG-H00BlJM_Tw, translog_generation=5, translog_uuid=Xk0ClVGFRWeD3RFe2W8JJA}]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, 
minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [151ms]
2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] received shard started for [StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 
00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=ZK4y2psyQsCkLywzead-_Q, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=DhC7OdKBRYipvcyPyc3Wrg, translog_generation=5, translog_uuid=D7F5wFFrQMi9IELwgTX__A}] 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, 
generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG 
es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, 
globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 5 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [5] source [shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is 
[POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [125ms] done applying updated cluster state (version: 5, uuid: lCwKe_wbTFGN9M0ncPKOQw) 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master 
{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [_AXuADNOTA6OY4SGdXY03A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [mwAcEDEDS5agNguNPCkF9A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [132ms] done publishing updated cluster state (version: 5, uuid: lCwKe_wbTFGN9M0ncPKOQw) 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term 
[25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: execute 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] starting shard [issues][4], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=f389_ZJ9Qbq2k16Gd31PLw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] starting shard [views][2], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; 
bootstrap_history_uuid=false], s[INITIALIZING], a[id=I2jjVtHsQEuvpXo8XHqwqQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:35 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kPw6C4KcS1yaLM5ZEbWYWA]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][2]: allocating [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: 
[[-NjrKFaSRR2UdvfY0ppwkQ]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/TBmEmRS7QsKk1yDUlglMag]][1]: allocating [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[ZIkWRbkYRQm0Ea97FVXoDg]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@47e3f4d]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[B2DYUVYFS4Ol-s82vMn4jg]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][1]: throttling allocation 
[[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@30084ee]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[CXRhRnXoQhyfNBmS5Y5JeQ]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4071d74c]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[--xK9rstQdO7WRkQj4n8rA]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], 
s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@51d368f4]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[eONJIbZZRCKBiYYhHXmm_Q]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2af4377f]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[krkR_pDSSVKdPumpgwqdpg]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], 
delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2b9bb9a4]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[QiBlwaS8S4irZL3_H2RMTQ]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4e78facd]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[IGCiyzVrTLmkiG8ioymnHQ]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to 
[[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@761edd71]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[XI7oVT9BTxi3pwBUczdWhA]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5935fd97]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[aoX1x544Q9KShyYCx_iIGw]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@299c2529]] on primary allocation 2019.08.23 00:47:35 DEBUG 
es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kldfU2lRR7G6bysmQ2F7nA]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1939f66b]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[u9PoXBz7RnmiYFrSp0RY2A]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3449caae]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=DrMB2eZXTPa23VjTJ8oH6w, local_checkpoint=-1, 
max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=-e6ZDybpStKQ9RtgV1H_fw, translog_generation=5, translog_uuid=qeGBlSBoSvSWxga5y7pBGQ}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=DrMB2eZXTPa23VjTJ8oH6w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=-e6ZDybpStKQ9RtgV1H_fw, translog_generation=5, translog_uuid=qeGBlSBoSvSWxga5y7pBGQ}]}] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EWVVoXKtReC01s30YTT92w]] 2019.08.23 00:47:35 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3ee5e601]] on primary allocation 2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [6], source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId 
[[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [6]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [6] source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: execute
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [6], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [6] source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 6
2019.08.23 00:47:35 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 6
2019.08.23 00:47:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][2] creating shard with primary term [25]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [issues][2] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/2], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/2]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] [issues][2] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/2, shard=[issues][2]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][2]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [135ms]
2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:35 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:35 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] received shard started for [StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][1] creating shard with primary term [25]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [issues][1] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/1], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/1]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [issues][1] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TBmEmRS7QsKk1yDUlglMag/1, shard=[issues][1]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][1]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=-Su4KcB7RJmCDhnajSzmAA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=70-MfE-pRnONIqwCdTP9Fw, translog_generation=5, translog_uuid=L3GyZQKwSKSqfTr8V34m7g}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=ZK4y2psyQsCkLywzead-_Q, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=DhC7OdKBRYipvcyPyc3Wrg, translog_generation=5, translog_uuid=D7F5wFFrQMi9IELwgTX__A}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=ZK4y2psyQsCkLywzead-_Q, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=DhC7OdKBRYipvcyPyc3Wrg, translog_generation=5, translog_uuid=D7F5wFFrQMi9IELwgTX__A}]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [158ms]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] received shard started for [StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] received shard started for [StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] received shard started for [StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=3-95nxNqQy2h-ngrR_NHRQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=SHuExBpKS3ygyjOipkr13A, translog_generation=5, translog_uuid=vBBGTyCmQVykBx-Oq_Czvg}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 6
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [6] source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: took [117ms] done applying updated cluster state (version: 6, uuid: Q7XQrsKySUinZ7K7ZV5Ozw)
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [I2jjVtHsQEuvpXo8XHqwqQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [f389_ZJ9Qbq2k16Gd31PLw], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: took [124ms] done publishing updated cluster state (version: 6, uuid: Q7XQrsKySUinZ7K7ZV5Ozw)
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] starting shard [issues][3], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=b9q2HnpLTJ-i0bA7NYhJnQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] starting shard [issues][0], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=ttF8NBKkRdqP58cMygTI9w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[ZIkWRbkYRQm0Ea97FVXoDg]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][0]: allocating [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[B2DYUVYFS4Ol-s82vMn4jg]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/_U9_HYtTTOmGosf1Chf5TQ]][1]: allocating [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[eONJIbZZRCKBiYYhHXmm_Q]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1800f904]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[krkR_pDSSVKdPumpgwqdpg]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2f2546a3]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[--xK9rstQdO7WRkQj4n8rA]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@605e1284]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[CXRhRnXoQhyfNBmS5Y5JeQ]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@50cb1523]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[QiBlwaS8S4irZL3_H2RMTQ]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@774a4b8a]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[aoX1x544Q9KShyYCx_iIGw]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@58529c56]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[XI7oVT9BTxi3pwBUczdWhA]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1f6dac07]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[IGCiyzVrTLmkiG8ioymnHQ]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4475d06f]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kldfU2lRR7G6bysmQ2F7nA]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@792df68b]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[u9PoXBz7RnmiYFrSp0RY2A]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@51922c71]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: found 1 allocation candidates 
of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EWVVoXKtReC01s30YTT92w]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5fa4d74f]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=-Su4KcB7RJmCDhnajSzmAA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=70-MfE-pRnONIqwCdTP9Fw, translog_generation=5, translog_uuid=L3GyZQKwSKSqfTr8V34m7g}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=-Su4KcB7RJmCDhnajSzmAA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=70-MfE-pRnONIqwCdTP9Fw, translog_generation=5, translog_uuid=L3GyZQKwSKSqfTr8V34m7g}]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [7], source [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked 
shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [7] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [7] source [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master 
{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [7], source [apply cluster state 
(from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [7] source [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId 
[[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 7 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 7 2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[rules/_U9_HYtTTOmGosf1Chf5TQ]] creating index 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndicesService] creating Index [[rules/_U9_HYtTTOmGosf1Chf5TQ]], shards [2]/[0] - reason [create index] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [126ms] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] received shard started for [StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.m.MapperService] using dynamic[true] 2019.08.23 00:47:36 DEBUG es[][o.e.i.m.MapperService] [[rules/_U9_HYtTTOmGosf1Chf5TQ]] added mapping [rule], source 
[{"rule":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"activeRule_id":{"type":"keyword"},"activeRule_inheritance":{"type":"keyword"},"activeRule_ruleProfile":{"type":"keyword"},"activeRule_severity":{"type":"keyword"},"createdAt":{"type":"long"},"cwe":{"type":"keyword"},"htmlDesc":{"type":"keyword","index":false,"doc_values":false,"fields":{"english_html_analyzer":{"type":"text","norms":false,"analyzer":"english_html_analyzer"}}},"indexType":{"type":"keyword","doc_values":false},"internalKey":{"type":"keyword","index":false},"isExternal":{"type":"boolean"},"isTemplate":{"type":"boolean"},"join_rules":{"type":"join","eager_global_ordinals":true,"relations":{"rule":["activeRule","ruleExtension"]}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"lang":{"type":"keyword"},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"owaspTop10":{"type":"keyword"},"repo":{"type":"keyword","norms":true},"ruleExt_scope":{"type":"keyword"},"ruleExt_tags":{"type":"keyword","norms":true},"ruleId":{"type":"keyword"},"ruleKey":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"sansTop25":{"type":"keyword"},"severity":{"type":"keyword"},"sonarsourceSecurity":{"type":"keyword"},"status":{"type":"keyword"},"templateKey":{"type":"keyword"},"type":{"type":"keyword"},"updatedAt":{"type":"long"}}}}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master 
{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] received shard started for [StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=3-95nxNqQy2h-ngrR_NHRQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=SHuExBpKS3ygyjOipkr13A, translog_generation=5, translog_uuid=vBBGTyCmQVykBx-Oq_Czvg}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=3-95nxNqQy2h-ngrR_NHRQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=SHuExBpKS3ygyjOipkr13A, translog_generation=5, translog_uuid=vBBGTyCmQVykBx-Oq_Czvg}]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [rules][1] creating shard with primary term [25] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [rules][1] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/_U9_HYtTTOmGosf1Chf5TQ/1], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/_U9_HYtTTOmGosf1Chf5TQ/1] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [rules][1] creating using an existing path 
[ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/_U9_HYtTTOmGosf1Chf5TQ/1, shard=[rules][1]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [rules][1] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [135ms] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] received shard started for [StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [rules][0] creating shard with primary term [25] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [rules][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/_U9_HYtTTOmGosf1Chf5TQ/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/_U9_HYtTTOmGosf1Chf5TQ/0] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [rules][0] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/_U9_HYtTTOmGosf1Chf5TQ/0, shard=[rules][0]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [rules][0] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=iViJbWJMQ0moBL-K1N9wUQ, local_checkpoint=12731, max_seq_no=12731, max_unsafe_auto_id_timestamp=-1, sync_id=R8CDbjIFTjC9--wbJSv7WA, translog_generation=7, translog_uuid=1VIZNCe7Qeyv6cAvf6SlPw}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG 
es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, 
maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12731, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG 
es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=GqO08PlGTF6n_Tq9iuZSCg, local_checkpoint=12609, max_seq_no=12609, max_unsafe_auto_id_timestamp=-1, sync_id=55NCqmPWSM-vXYsLXpNw8w, translog_generation=7, translog_uuid=0CcZF_t1Q8Kc5qSmJBYBMQ}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, 
globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_5], userData[{history_uuid=iViJbWJMQ0moBL-K1N9wUQ, local_checkpoint=12731, max_seq_no=12731, max_unsafe_auto_id_timestamp=-1, sync_id=R8CDbjIFTjC9--wbJSv7WA, translog_generation=7, translog_uuid=1VIZNCe7Qeyv6cAvf6SlPw}]}], last commit [CommitPoint{segment[segments_5], userData[{history_uuid=iViJbWJMQ0moBL-K1N9wUQ, local_checkpoint=12731, max_seq_no=12731, max_unsafe_auto_id_timestamp=-1, sync_id=R8CDbjIFTjC9--wbJSv7WA, translog_generation=7, translog_uuid=1VIZNCe7Qeyv6cAvf6SlPw}]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, 
globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 7 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [7] source [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as 
started]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [135ms] done applying updated cluster state (version: 7, uuid: Yen35YB-Q8iqvftW3D8FsQ) 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master 
{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [b9q2HnpLTJ-i0bA7NYhJnQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [ttF8NBKkRdqP58cMygTI9w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [142ms] done publishing updated cluster state (version: 7, uuid: Yen35YB-Q8iqvftW3D8FsQ) 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary 
term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] starting shard [issues][2], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=kPw6C4KcS1yaLM5ZEbWYWA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] starting shard [issues][1], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=-NjrKFaSRR2UdvfY0ppwkQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.275Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, 
trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[CXRhRnXoQhyfNBmS5Y5JeQ]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][3]: allocating [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[QiBlwaS8S4irZL3_H2RMTQ]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][1]: allocating [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], 
at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[--xK9rstQdO7WRkQj4n8rA]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@57fd660d]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[eONJIbZZRCKBiYYhHXmm_Q]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, 
allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@386da56d]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[krkR_pDSSVKdPumpgwqdpg]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@722f94a0]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[XI7oVT9BTxi3pwBUczdWhA]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; 
bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@36420eff]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[aoX1x544Q9KShyYCx_iIGw]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@457b43f]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[u9PoXBz7RnmiYFrSp0RY2A]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: 
throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4db3535c]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[IGCiyzVrTLmkiG8ioymnHQ]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@63e2513e]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kldfU2lRR7G6bysmQ2F7nA]] 2019.08.23 00:47:36 
DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5b3a1a01]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EWVVoXKtReC01s30YTT92w]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1f385b95]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=12609, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [8], source [shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard 
state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [8] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [8] source [shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but 
shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [8], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [8] source [shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message 
[after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 8 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 8 2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]] creating index 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndicesService] creating Index [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]], shards [5]/[0] - reason [create index] 2019.08.23 00:47:36 DEBUG es[][o.e.i.m.MapperService] using dynamic[true] 2019.08.23 00:47:36 DEBUG es[][o.e.i.m.MapperService] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]] added mapping [auth], source 
[{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"analysedAt":{"type":"date","format":"date_time||epoch_second"},"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"long"},"auth_userIds":{"type":"long"},"indexType":{"type":"keyword","doc_values":false},"join_projectmeasures":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"projectmeasure"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"languages":{"type":"keyword","norms":true},"measures":{"type":"nested","properties":{"key":{"type":"keyword"},"value":{"type":"double"}}},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"nclocLanguageDistribution":{"type":"nested","properties":{"language":{"type":"keyword"},"ncloc":{"type":"integer"}}},"organizationUuid":{"type":"keyword"},"qualityGateStatus":{"type":"keyword","norms":true},"tags":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [129ms] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; 
bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] received shard started for [StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_5], userData[{history_uuid=GqO08PlGTF6n_Tq9iuZSCg, local_checkpoint=12609, max_seq_no=12609, max_unsafe_auto_id_timestamp=-1, sync_id=55NCqmPWSM-vXYsLXpNw8w, translog_generation=7, translog_uuid=0CcZF_t1Q8Kc5qSmJBYBMQ}]}], last commit [CommitPoint{segment[segments_5], userData[{history_uuid=GqO08PlGTF6n_Tq9iuZSCg, local_checkpoint=12609, max_seq_no=12609, max_unsafe_auto_id_timestamp=-1, sync_id=55NCqmPWSM-vXYsLXpNw8w, translog_generation=7, translog_uuid=0CcZF_t1Q8Kc5qSmJBYBMQ}]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][1] creating shard with primary term [25] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [projectmeasures][1] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/1], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/1] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [projectmeasures][1] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/1, shard=[projectmeasures][1]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][1] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from 
[shard_store], took [135ms] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] received shard started for [StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][3] creating shard with primary term [25] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [projectmeasures][3] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/3], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/3] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [projectmeasures][3] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/3, shard=[projectmeasures][3]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][3] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=5A-CUdRNT2qVlYOUmoT5_g, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=zMqBWdg8RHqsLpLGeh_NCw, translog_generation=5, translog_uuid=eZ91edTzQBG7GNjFBQH4Ug}] 2019.08.23 
00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, 
globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from 
checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=Z43FxaNmSISQb6Z2285UZA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=cCYom3nRQFifqOnlxd9irQ, translog_generation=5, translog_uuid=Qdj_u3HPS0mTeeOjCPBitA}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=5A-CUdRNT2qVlYOUmoT5_g, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=zMqBWdg8RHqsLpLGeh_NCw, translog_generation=5, translog_uuid=eZ91edTzQBG7GNjFBQH4Ug}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=5A-CUdRNT2qVlYOUmoT5_g, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=zMqBWdg8RHqsLpLGeh_NCw, translog_generation=5, translog_uuid=eZ91edTzQBG7GNjFBQH4Ug}]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [103ms]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] received shard started for [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 8
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [8] source [shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [162ms] done applying updated cluster state (version: 8, uuid: JWDXxzVqRa-5-KYZyZ3bmg)
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [kPw6C4KcS1yaLM5ZEbWYWA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [-NjrKFaSRR2UdvfY0ppwkQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [168ms] done publishing updated cluster state (version: 8, uuid: JWDXxzVqRa-5-KYZyZ3bmg)
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] starting shard [rules][1], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=B2DYUVYFS4Ol-s82vMn4jg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] starting shard [rules][0], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=ZIkWRbkYRQm0Ea97FVXoDg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] starting shard [projectmeasures][1], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=QiBlwaS8S4irZL3_H2RMTQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[--xK9rstQdO7WRkQj4n8rA]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][2]: allocating [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[krkR_pDSSVKdPumpgwqdpg]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][0]: allocating [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[eONJIbZZRCKBiYYhHXmm_Q]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/NvSKMglOTsSilsvC1yvZQA]][4]: allocating [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[u9PoXBz7RnmiYFrSp0RY2A]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@51198f12]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kldfU2lRR7G6bysmQ2F7nA]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5ad1d276]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[XI7oVT9BTxi3pwBUczdWhA]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@17865528]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[IGCiyzVrTLmkiG8ioymnHQ]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2b431ed]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[aoX1x544Q9KShyYCx_iIGw]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7c4cc32e]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EWVVoXKtReC01s30YTT92w]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@19e9a183]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [9], source [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [9]
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [9] source [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [9], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [9] source [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 9
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 9
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][2] creating shard with primary term [25]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [projectmeasures][2] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/2], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/2]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [projectmeasures][2] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/2, shard=[projectmeasures][2]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][2]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=SG_SgC2QSqCdBfhEHiqG7w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Qb3AM6wSSmyrboxB7507GQ, translog_generation=5, translog_uuid=LgiRGq73SrCydi94V6FvBg}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=Z43FxaNmSISQb6Z2285UZA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=cCYom3nRQFifqOnlxd9irQ, translog_generation=5, translog_uuid=Qdj_u3HPS0mTeeOjCPBitA}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=Z43FxaNmSISQb6Z2285UZA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=cCYom3nRQFifqOnlxd9irQ, translog_generation=5, translog_uuid=Qdj_u3HPS0mTeeOjCPBitA}]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][4] creating shard with primary term [25]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [projectmeasures][4] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/4], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/4]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [projectmeasures][4] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/4, shard=[projectmeasures][4]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][4]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [139ms]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] received shard started for [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][0] creating shard with primary term [25]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [projectmeasures][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/0]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [projectmeasures][0] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/NvSKMglOTsSilsvC1yvZQA/0, shard=[projectmeasures][0]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][0]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=WV3X7VxeT4-ZhEgUSB11Wg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=trvpkdJaTbuaNZW3bnfAcA, translog_generation=5, translog_uuid=PRHDt9t_SFuz-5bD-BEHDA}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=SG_SgC2QSqCdBfhEHiqG7w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Qb3AM6wSSmyrboxB7507GQ, translog_generation=5, translog_uuid=LgiRGq73SrCydi94V6FvBg}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=SG_SgC2QSqCdBfhEHiqG7w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Qb3AM6wSSmyrboxB7507GQ, translog_generation=5, translog_uuid=LgiRGq73SrCydi94V6FvBg}]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=t3vHAAXjQl-K4179vt_QrQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=9ZGp54QvRqSsqU_orwcWOA, translog_generation=5, translog_uuid=oC2MsNyZSAGymoo0wwBJSg}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint
Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [114ms] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] received shard started for [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint 
Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 
00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, 
globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 9 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [9] source [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [156ms] done 
applying updated cluster state (version: 9, uuid: Bk6HJb-8QWKGVfwWFrZ1dA)
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [QiBlwaS8S4irZL3_H2RMTQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [ZIkWRbkYRQm0Ea97FVXoDg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [B2DYUVYFS4Ol-s82vMn4jg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [160ms] done publishing updated cluster state (version: 9, uuid: Bk6HJb-8QWKGVfwWFrZ1dA)
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] starting shard [projectmeasures][3], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=CXRhRnXoQhyfNBmS5Y5JeQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] starting shard [projectmeasures][2], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=--xK9rstQdO7WRkQj4n8rA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids:
[[XI7oVT9BTxi3pwBUczdWhA]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][4]: allocating [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[u9PoXBz7RnmiYFrSp0RY2A]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][0]: allocating [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[IGCiyzVrTLmkiG8ioymnHQ]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6d9f3748]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kldfU2lRR7G6bysmQ2F7nA]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@d48231b]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[aoX1x544Q9KShyYCx_iIGw]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@68a16187]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EWVVoXKtReC01s30YTT92w]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@367605a4]] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version
[10], source [shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [10]
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [10] source [shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [10], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [10] source [shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 10
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 10
2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[components/TPwmMZYuQQqmtVRoqx4Uwg]] creating index
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndicesService] creating Index [[components/TPwmMZYuQQqmtVRoqx4Uwg]], shards [5]/[0] - reason [create index]
2019.08.23 00:47:36 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=WV3X7VxeT4-ZhEgUSB11Wg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=trvpkdJaTbuaNZW3bnfAcA, translog_generation=5, translog_uuid=PRHDt9t_SFuz-5bD-BEHDA}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=WV3X7VxeT4-ZhEgUSB11Wg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=trvpkdJaTbuaNZW3bnfAcA, translog_generation=5, translog_uuid=PRHDt9t_SFuz-5bD-BEHDA}]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.m.MapperService] [[components/TPwmMZYuQQqmtVRoqx4Uwg]] added mapping [auth], source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"long"},"auth_userIds":{"type":"long"},"indexType":{"type":"keyword","doc_values":false},"join_components":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"component"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"language":{"type":"keyword"},"name":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"fields":{"search_grams_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"search_prefix_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_analyzer","search_analyzer":"search_prefix_analyzer"},"search_prefix_case_insensitive_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_case_insensitive_analyzer","search_analyzer":"search_prefix_case_insensitive_analyzer"},"sortable_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"sortable_analyzer","fielddata":true}},"fielddata":true},"organization_uuid":{"type":"keyword"},"project_uuid":{"type":"keyword"},"qualifier":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][4] creating shard with primary term [25]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [components][4] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/4], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/4]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [components][4] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/4, shard=[components][4]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [components][4]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [142ms]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] received shard started for [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][0] creating shard with primary term [25]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [components][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/0]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [components][0] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/0, shard=[components][0]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [components][0]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=gUnMKg6IQ1SLjyqsy38uqg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=vVdqgrGBS9243QAkXzj3Lw, translog_generation=5, translog_uuid=fb9iNMg3SXSr-fGlATxNIQ}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=t3vHAAXjQl-K4179vt_QrQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=9ZGp54QvRqSsqU_orwcWOA, translog_generation=5, translog_uuid=oC2MsNyZSAGymoo0wwBJSg}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=t3vHAAXjQl-K4179vt_QrQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=9ZGp54QvRqSsqU_orwcWOA, translog_generation=5, translog_uuid=oC2MsNyZSAGymoo0wwBJSg}]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [150ms]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] received shard started for [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] received shard started for [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] received shard started for [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is 
[POST_RECOVERY], mark shard as started]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=65IcQFneRM2XYqZJCMCf4Q, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=TzyX9RQqQq--UTvn428zhQ, translog_generation=5, translog_uuid=lkmmlp4yTd6asgHMCo160A}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog 
from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=gUnMKg6IQ1SLjyqsy38uqg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=vVdqgrGBS9243QAkXzj3Lw, translog_generation=5, translog_uuid=fb9iNMg3SXSr-fGlATxNIQ}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=gUnMKg6IQ1SLjyqsy38uqg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=vVdqgrGBS9243QAkXzj3Lw, translog_generation=5, translog_uuid=fb9iNMg3SXSr-fGlATxNIQ}]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 10 2019.08.23 
00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [116ms] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [10] source [shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [136ms] done applying updated cluster state (version: 10, uuid: EjPGLPvtTXyZyjsJaaAB7Q) 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [--xK9rstQdO7WRkQj4n8rA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], 
shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [CXRhRnXoQhyfNBmS5Y5JeQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [139ms] done publishing updated cluster state (version: 10, uuid: EjPGLPvtTXyZyjsJaaAB7Q) 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] received shard started for [StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark 
shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] starting shard [projectmeasures][4], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=eONJIbZZRCKBiYYhHXmm_Q], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store 
recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] starting shard [projectmeasures][0], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=krkR_pDSSVKdPumpgwqdpg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[IGCiyzVrTLmkiG8ioymnHQ]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][2]: allocating [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: found 1 allocation candidates of [components][3], 
node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kldfU2lRR7G6bysmQ2F7nA]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][3]: allocating [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[aoX1x544Q9KShyYCx_iIGw]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1a347673]] on primary allocation 2019.08.23 00:47:36 DEBUG 
es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EWVVoXKtReC01s30YTT92w]] 2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5d4973cf]] on primary allocation 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [11], source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master 
{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [11] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [11] source [shard-started StartedShardEntry{shardId 
[[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; 
bootstrap_history_uuid=false]}]]])]: execute 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [11], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [11] source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; 
bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])] 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 11 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 11 2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][2] creating shard with primary term [25] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [components][2] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/2], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/2] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [components][2] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/2, shard=[components][2]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [components][2] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2019.08.23 00:47:36 DEBUG 
es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][3] creating shard with primary term [25]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [components][3] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/3], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/3]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [components][3] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/3, shard=[components][3]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [components][3]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=q4gnqFltRYeXLFlLIgb6dA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=2U1zIwjrSVqI5RQw01iswQ, translog_generation=5, translog_uuid=HtHM50pwR-iXG5x-mZ4Rkw}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=65IcQFneRM2XYqZJCMCf4Q, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=TzyX9RQqQq--UTvn428zhQ, translog_generation=5, translog_uuid=lkmmlp4yTd6asgHMCo160A}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=65IcQFneRM2XYqZJCMCf4Q, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=TzyX9RQqQq--UTvn428zhQ, translog_generation=5, translog_uuid=lkmmlp4yTd6asgHMCo160A}]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [150ms]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] received shard started for [StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] received shard started for [StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] received shard started for [StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=PnO9tYS2QH-PRbY8aNf3OA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=yHPZCRBfT9-eBAZRyZBEMg, translog_generation=5, translog_uuid=gcaa83PyRX-RBH8mmOvjWw}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 11
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [11] source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [120ms] done applying updated cluster state (version: 11, uuid: dV-7EQYkSCKymeGcbiMAOg)
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [krkR_pDSSVKdPumpgwqdpg], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [eONJIbZZRCKBiYYhHXmm_Q], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [123ms] done publishing updated cluster state (version: 11, uuid: dV-7EQYkSCKymeGcbiMAOg)
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: execute
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] starting shard [components][4], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=XI7oVT9BTxi3pwBUczdWhA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] starting shard [components][0], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=u9PoXBz7RnmiYFrSp0RY2A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[aoX1x544Q9KShyYCx_iIGw]]
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/TPwmMZYuQQqmtVRoqx4Uwg]][1]: allocating [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EWVVoXKtReC01s30YTT92w]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]][0]: allocating [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [12], source [shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [12]
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [12] source [shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: execute
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [12], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [12] source [shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 12
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 12
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]] creating index
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndicesService] creating Index [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]], shards [1]/[0] - reason [create index]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=q4gnqFltRYeXLFlLIgb6dA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=2U1zIwjrSVqI5RQw01iswQ, translog_generation=5, translog_uuid=HtHM50pwR-iXG5x-mZ4Rkw}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=q4gnqFltRYeXLFlLIgb6dA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=2U1zIwjrSVqI5RQw01iswQ, translog_generation=5, translog_uuid=HtHM50pwR-iXG5x-mZ4Rkw}]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2019.08.23 00:47:36 DEBUG es[][o.e.i.m.MapperService] [[metadatas/HtzuEzqcRAank_tKjNQ0NQ]] added mapping [metadata], source [{"metadata":{"dynamic":"false","properties":{"value":{"type":"keyword","index":false,"store":true,"norms":true}}}}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [129ms]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] received shard started for [StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] received shard started for [StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][1] creating shard with primary term [25]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [components][1] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/1], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/1]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [components][1] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/TPwmMZYuQQqmtVRoqx4Uwg/1, shard=[components][1]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [components][1]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=jp1HL9TORiKpnfvZ35wvnQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=U5GsSkFqQz-gYD12egBtLg, translog_generation=5, translog_uuid=klUh3xdHTdSRhldukEJMfQ}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=PnO9tYS2QH-PRbY8aNf3OA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=yHPZCRBfT9-eBAZRyZBEMg, translog_generation=5, translog_uuid=gcaa83PyRX-RBH8mmOvjWw}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=PnO9tYS2QH-PRbY8aNf3OA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=yHPZCRBfT9-eBAZRyZBEMg, translog_generation=5, translog_uuid=gcaa83PyRX-RBH8mmOvjWw}]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=25, minSeqNo=-1,
maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2019.08.23 00:47:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [metadatas][0] creating shard with primary term [25] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [metadatas][0] loaded data path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/HtzuEzqcRAank_tKjNQ0NQ/0], state path [/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/HtzuEzqcRAank_tKjNQ0NQ/0] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] [metadatas][0] creating using an existing path [ShardPath{path=/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0/indices/HtzuEzqcRAank_tKjNQ0NQ/0, shard=[metadatas][0]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.IndexService] creating shard_id [metadatas][0] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [148ms] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] received shard started for [StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
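The "state:" lines above trace each shard through the same lifecycle: CREATED -> RECOVERING -> POST_RECOVERY -> STARTED. A minimal sketch of that observed state machine, with a transition table inferred from this log rather than taken from the Elasticsearch source:

```python
# Transition table inferred from the IndexShard "state:" log lines above
# (illustrative only; not Elasticsearch's actual IndexShardState implementation).
VALID_TRANSITIONS = {
    "CREATED": {"RECOVERING"},
    "RECOVERING": {"POST_RECOVERY"},
    "POST_RECOVERY": {"STARTED"},
}

def is_valid_transition(old: str, new: str) -> bool:
    """Check whether a shard state change matches the lifecycle seen in the log."""
    return new in VALID_TRANSITIONS.get(old, set())
```

Every transition logged in this section (for example [CREATED]->[RECOVERING], reason [from store]) is valid under this table.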
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=PwW9-bwIQGikpGb4s_k5Hw, local_checkpoint=31, max_seq_no=31, max_unsafe_auto_id_timestamp=-1, sync_id=qNtkF5nQRZCQhWTDBLYSUA, translog_generation=12, translog_uuid=u_vI52rlTH2mDYxMph3LPA}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=31, minTranslogGeneration=12, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=27, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=31, minTranslogGeneration=12, trimmedAboveSeqNo=-2}
2019.08.23 00:47:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=jp1HL9TORiKpnfvZ35wvnQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=U5GsSkFqQz-gYD12egBtLg, translog_generation=5, translog_uuid=klUh3xdHTdSRhldukEJMfQ}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=jp1HL9TORiKpnfvZ35wvnQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=U5GsSkFqQz-gYD12egBtLg, translog_generation=5, translog_uuid=klUh3xdHTdSRhldukEJMfQ}]}]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [105ms]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] received shard started for [StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 12
2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [12] source [shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId 
[u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: took [147ms] done applying updated cluster state (version: 12, uuid: YBBxUDipSjq7lPH1b1vGag) 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[components][0]], 
allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][4]], allocationId [XI7oVT9BTxi3pwBUczdWhA], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [u9PoXBz7RnmiYFrSp0RY2A], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: took [150ms] done publishing updated 
cluster state (version: 12, uuid: YBBxUDipSjq7lPH1b1vGag) 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: execute 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] starting shard [components][2], 
node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=IGCiyzVrTLmkiG8ioymnHQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] starting shard [components][3], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=kldfU2lRR7G6bysmQ2F7nA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] starting shard [components][1], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=aoX1x544Q9KShyYCx_iIGw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [13], source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId 
[kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [13] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [13] source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; 
bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: execute 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [13], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [13] source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], 
message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])] 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 13 2019.08.23 00:47:36 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 13 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: 
[POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:36 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2019.08.23 00:47:37 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_6], userData[{history_uuid=PwW9-bwIQGikpGb4s_k5Hw, local_checkpoint=31, max_seq_no=31, max_unsafe_auto_id_timestamp=-1, sync_id=qNtkF5nQRZCQhWTDBLYSUA, translog_generation=12, translog_uuid=u_vI52rlTH2mDYxMph3LPA}]}], last commit [CommitPoint{segment[segments_6], userData[{history_uuid=PwW9-bwIQGikpGb4s_k5Hw, local_checkpoint=31, max_seq_no=31, max_unsafe_auto_id_timestamp=-1, sync_id=qNtkF5nQRZCQhWTDBLYSUA, translog_generation=12, translog_uuid=u_vI52rlTH2mDYxMph3LPA}]}] 2019.08.23 00:47:37 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2019.08.23 00:47:37 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [136ms] 2019.08.23 00:47:37 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [I1tMMakFQYSWwmrZ3iaweQ] for shard entry [StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:37 DEBUG es[][o.e.c.a.s.ShardStateAction] [metadatas][0] received shard started for [StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}] 2019.08.23 00:47:37 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 13 2019.08.23 00:47:37 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [13] source [shard-started StartedShardEntry{shardId [[components][3]], 
allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: took [60ms] done applying updated cluster state (version: 13, uuid: 4SeJldtrSlK1APVvrFS0KQ) 2019.08.23 00:47:37 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; 
bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [kldfU2lRR7G6bysmQ2F7nA], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [aoX1x544Q9KShyYCx_iIGw], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [IGCiyzVrTLmkiG8ioymnHQ], primary term [25], message [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: took [63ms] done publishing updated cluster state (version: 13, uuid: 4SeJldtrSlK1APVvrFS0KQ)
2019.08.23 00:47:37 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2019.08.23 00:47:37 DEBUG es[][o.e.c.a.s.ShardStateAction] [metadatas][0] starting shard [metadatas][0], node[I1tMMakFQYSWwmrZ3iaweQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=EWVVoXKtReC01s30YTT92w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-08-22T22:47:35.277Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}])
2019.08.23 00:47:37 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]] ...]).
2019.08.23 00:47:37 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [14], source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2019.08.23 00:47:37 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [14]
2019.08.23 00:47:37 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [14] source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2019.08.23 00:47:37 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [14], source [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [14] source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2019.08.23 00:47:37 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 14
2019.08.23 00:47:37 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 14
2019.08.23 00:47:37 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2019.08.23 00:47:37 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 14
2019.08.23 00:47:37 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{I1tMMakFQYSWwmrZ3iaweQ}{nXAaE_30TqGEWmINT12LvA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [14] source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [21ms] done applying updated cluster state (version: 14, uuid: g85qF9sFSkiaRF-Pn6l2qg)
2019.08.23 00:47:37 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [EWVVoXKtReC01s30YTT92w], primary term [25], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [23ms] done publishing updated cluster state (version: 14, uuid: g85qF9sFSkiaRF-Pn6l2qg)
2019.08.23 00:48:05 WARN es[][o.e.c.r.a.DiskThresholdMonitor] flood stage disk watermark [95%] exceeded on [I1tMMakFQYSWwmrZ3iaweQ][sonarqube][/usr/local/Cellar/sonarqube/7.9.1/libexec/data/es6/nodes/0] free: 86.7gb[4.6%], all indices on this node will be marked read-only
2019.08.23 00:48:06 INFO es[][o.e.n.Node] stopping ...
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [rules] closing ... (reason [NO_LONGER_ASSIGNED])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [rules/_U9_HYtTTOmGosf1Chf5TQ] closing index service (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [metadatas] closing ... (reason [NO_LONGER_ASSIGNED])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [metadatas/HtzuEzqcRAank_tKjNQ0NQ] closing index service (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [projectmeasures] closing ... (reason [NO_LONGER_ASSIGNED])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [projectmeasures/NvSKMglOTsSilsvC1yvZQA] closing index service (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [components] closing ... (reason [NO_LONGER_ASSIGNED])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [components/TPwmMZYuQQqmtVRoqx4Uwg] closing index service (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [views] closing ... (reason [NO_LONGER_ASSIGNED])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [views/NlCIf77XRtq3dgewPHB03g] closing index service (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [metadatas/HtzuEzqcRAank_tKjNQ0NQ] closed... (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [issues] closing ... (reason [NO_LONGER_ASSIGNED])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [issues/TBmEmRS7QsKk1yDUlglMag] closing index service (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [rules/_U9_HYtTTOmGosf1Chf5TQ] closed... (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [views/NlCIf77XRtq3dgewPHB03g] closed... (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [components/TPwmMZYuQQqmtVRoqx4Uwg] closed... (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [projectmeasures/NvSKMglOTsSilsvC1yvZQA] closed... (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2019.08.23 00:48:06 DEBUG es[][o.e.i.t.Translog] translog closed
2019.08.23 00:48:06 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2019.08.23 00:48:06 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown])
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2019.08.23 00:48:06 DEBUG es[][o.e.i.IndicesService] [issues/TBmEmRS7QsKk1yDUlglMag] closed... (reason [NO_LONGER_ASSIGNED][shutdown])
2019.08.23 00:48:06 INFO es[][o.e.n.Node] stopped
2019.08.23 00:48:06 INFO es[][o.e.n.Node] closing ...
2019.08.23 00:48:06 INFO es[][o.e.n.Node] closed