2022.11.10 13:04:18 INFO es[][o.e.n.Node] version[7.17.5], pid[2835], build[default/tar/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/5.14.21-150400.24.28-default/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.16/11.0.16+8-suse-150000.3.83.1-x8664]
2022.11.10 13:04:18 INFO es[][o.e.n.Node] JVM home [/usr/lib64/jvm/java-11-openjdk-11]
2022.11.10 13:04:18 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/chili/sonarqube-9.7.0.61563/elasticsearch, -Des.path.conf=/home/chili/sonarqube-9.7.0.61563/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2022.11.10 13:04:29 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2022.11.10 13:04:29 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2022.11.10 13:04:29 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2022.11.10 13:04:29 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2022.11.10 13:04:29 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2022.11.10 13:04:29 INFO es[][o.e.p.PluginsService] no plugins loaded
2022.11.10 13:04:32 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/ (/dev/sda2)]], net usable_space [19.8gb], net total_space [51gb], types [ext4]
2022.11.10 13:04:32 INFO es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2022.11.10 13:04:33 INFO es[][o.e.n.Node] node name [sonarqube], node ID [NoJ4WfHARK-DEgu5GKXOeg], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2022.11.10 13:05:54 INFO es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2022.11.10 13:05:54 INFO es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2022.11.10 13:05:55 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2022.11.10 13:05:58 INFO es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2022.11.10 13:06:01 INFO es[][o.e.n.Node] initialized
2022.11.10 13:06:01 INFO es[][o.e.n.Node] starting ...
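Note: the JVM arguments above pin the embedded Elasticsearch heap at 512 MB (-Xms512m/-Xmx512m), matching the heap size [512mb] reported by NodeEnvironment. A minimal sketch for pulling those flags out of the log programmatically; the es.log path is an assumption based on the directories shown in these entries:

import re

# Minimal sketch: extract the -Xms/-Xmx heap flags from the "JVM arguments"
# entry of es.log. The path below is an assumption, not confirmed by the log.
LOG = "/home/chili/sonarqube-9.7.0.61563/logs/es.log"

with open(LOG, encoding="utf-8") as f:
    for line in f:
        if "JVM arguments" in line:
            print(re.findall(r"-Xm[sx]\d+[kmgKMG]", line))  # e.g. ['-Xmx512m', '-Xms512m']
            break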
2022.11.10 13:06:03 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:42683}, bound_addresses {127.0.0.1:42683}
2022.11.10 13:06:05 INFO es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2022.11.10 13:06:05 INFO es[][o.e.c.c.Coordinator] setting initial configuration to VotingConfiguration{NoJ4WfHARK-DEgu5GKXOeg}
2022.11.10 13:06:07 INFO es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{wmIhitOHSzui5gEwN5mp1A}{127.0.0.1}{127.0.0.1:42683}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 2, version: 1, delta: master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{wmIhitOHSzui5gEwN5mp1A}{127.0.0.1}{127.0.0.1:42683}{cdfhimrsw}]}
2022.11.10 13:06:08 INFO es[][o.e.c.c.CoordinationState] cluster UUID set to [pThM6foASlO2PdfSbEivXw]
2022.11.10 13:06:08 INFO es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{wmIhitOHSzui5gEwN5mp1A}{127.0.0.1}{127.0.0.1:42683}{cdfhimrsw}]}, term: 2, version: 1, reason: Publication{term=2, version=1}
2022.11.10 13:06:08 INFO es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2022.11.10 13:06:08 INFO es[][o.e.g.GatewayService] recovered [0] indices into cluster_state
2022.11.10 13:06:08 INFO es[][o.e.n.Node] started
2022.11.10 13:19:41 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][young][779][24] duration [2.2s], collections [1]/[1s], total [2.2s]/[5.5s], memory [80mb]->[80mb]/[512mb], all_pools {[young] [40mb]->[40mb]/[0b]}{[old] [37mb]->[37mb]/[512mb]}{[survivor] [3mb]->[3mb]/[0b]}
2022.11.10 13:19:41 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][779] overhead, spent [2.2s] collecting in the last [1s]
2022.11.10 13:19:50 WARN es[][o.e.t.ThreadPool] execution of [org.elasticsearch.cluster.InternalClusterInfoService$RefreshScheduler$$Lambda$2975/0x00000001009d8440@798a0489] took [5862ms] which is above the warn threshold of [5000ms]
2022.11.10 13:19:50 WARN es[][o.e.t.ThreadPool] timer thread slept for [5.8s/5863ms] on absolute clock which is above the warn threshold of [5000ms]
2022.11.10 13:19:50 WARN es[][o.e.t.ThreadPool] timer thread slept for [5.8s/5862854388ns] on relative clock which is above the warn threshold of [5000ms]
2022.11.10 13:25:07 WARN es[][o.e.t.ThreadPool] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@42b5b6f7, interval=1s}] took [6538ms] which is above the warn threshold of [5000ms]
2022.11.10 13:27:14 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][young][1200][25] duration [953ms], collections [1]/[1.8s], total [953ms]/[6.4s], memory [71.7mb]->[53.8mb]/[512mb], all_pools {[young] [20mb]->[0b]/[0b]}{[old] [47.7mb]->[50.8mb]/[512mb]}{[survivor] [4mb]->[3mb]/[0b]}
2022.11.10 13:27:14 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][1200] overhead, spent [953ms] collecting in the last [1.8s]
2022.11.10 13:27:17 INFO es[][o.e.c.m.MetadataCreateIndexService] [metadatas] creating index, cause [api], templates [], shards [1]/[0]
2022.11.10 13:27:18 WARN es[][o.e.c.s.MasterService] took [15.1s/15127ms] to compute cluster state update for [create-index [metadatas], cause [api]], which exceeds the warn threshold of [10s]
2022.11.10 13:27:25 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[metadatas][0]]]).
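Once the node logs "started" and publishes its HTTP address on 127.0.0.1:9001, cluster health can be queried directly rather than inferred from the YELLOW/GREEN transitions in the log. A minimal sketch, assuming the node is still running on the address shown above; /_cluster/health is the standard Elasticsearch health endpoint:

import json
import urllib.request

# Query the embedded node's health over the HTTP address it published above.
with urllib.request.urlopen("http://127.0.0.1:9001/_cluster/health") as resp:
    health = json.load(resp)

print(health["status"])         # "green", "yellow" or "red"
print(health["active_shards"])  # number of started shards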
2022.11.10 13:27:29 INFO es[][o.e.c.m.MetadataMappingService] [metadatas/N-wJ8qPTTTyTYoXbXg3F8g] create_mapping [metadata]
2022.11.10 13:27:36 INFO es[][o.e.c.m.MetadataCreateIndexService] [components] creating index, cause [api], templates [], shards [5]/[0]
2022.11.10 13:27:42 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[components][4]]]).
2022.11.10 13:27:45 INFO es[][o.e.c.m.MetadataMappingService] [components/z8jTFy28Rq2m0AWGxQGyuw] create_mapping [auth]
2022.11.10 13:27:49 INFO es[][o.e.c.m.MetadataCreateIndexService] [projectmeasures] creating index, cause [api], templates [], shards [5]/[0]
2022.11.10 13:27:53 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[projectmeasures][4]]]).
2022.11.10 13:27:55 INFO es[][o.e.c.m.MetadataMappingService] [projectmeasures/SWp38y_dTeW_i3HApsNVsQ] create_mapping [auth]
2022.11.10 13:27:57 INFO es[][o.e.c.m.MetadataCreateIndexService] [rules] creating index, cause [api], templates [], shards [2]/[0]
2022.11.10 13:27:59 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[rules][1]]]).
2022.11.10 13:28:01 INFO es[][o.e.c.m.MetadataMappingService] [rules/ZPzru4r4QR6_c8p1MyrcNg] create_mapping [rule]
2022.11.10 13:28:05 INFO es[][o.e.c.m.MetadataCreateIndexService] [issues] creating index, cause [api], templates [], shards [5]/[0]
2022.11.10 13:28:17 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[issues][4]]]).
2022.11.10 13:28:18 INFO es[][o.e.c.m.MetadataMappingService] [issues/VZ6DTALkToeQgIcEr8PrtQ] create_mapping [auth]
2022.11.10 13:28:20 INFO es[][o.e.c.m.MetadataCreateIndexService] [users] creating index, cause [api], templates [], shards [1]/[0]
2022.11.10 13:28:23 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[users][0]]]).
2022.11.10 13:28:23 INFO es[][o.e.c.m.MetadataMappingService] [users/qbsuDVZZRrqTwp7HSmmTgw] create_mapping [user]
2022.11.10 13:28:26 INFO es[][o.e.c.m.MetadataCreateIndexService] [views] creating index, cause [api], templates [], shards [5]/[0]
2022.11.10 13:28:30 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[views][4]]]).
2022.11.10 13:28:31 INFO es[][o.e.c.m.MetadataMappingService] [views/Dv2T0qmGRX2UjXF3FDCAMw] create_mapping [view]
2022.11.10 13:34:04 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][young][1592][34] duration [914ms], collections [1]/[2s], total [914ms]/[8.4s], memory [83.8mb]->[64.5mb]/[512mb], all_pools {[young] [22mb]->[0b]/[0b]}{[old] [59.8mb]->[60.5mb]/[512mb]}{[survivor] [2mb]->[4mb]/[0b]}
2022.11.10 13:34:04 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][1592] overhead, spent [914ms] collecting in the last [2s]
2022.11.10 13:34:32 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][1617] overhead, spent [347ms] collecting in the last [1.2s]
2022.11.10 13:35:57 INFO es[][o.e.n.Node] stopping ...
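The recurring JvmGcMonitorService warnings here and above ("[gc][N] overhead, spent [X] collecting in the last [Y]") suggest the 512 MB heap comes under pressure while the indices are being created. An illustrative sketch that tallies those warnings across es.log; the path is an assumption based on the directories in these entries:

import re

# Count the "[gc][N] overhead, spent [X] collecting" entries and total the
# time spent in GC, to distinguish occasional blips from sustained pressure.
PATTERN = re.compile(r"\[gc\]\[\d+\] overhead, spent \[([\d.]+)(m?s)\]")
LOG = "/home/chili/sonarqube-9.7.0.61563/logs/es.log"

total_ms = 0.0
hits = 0
with open(LOG, encoding="utf-8") as f:
    for line in f:
        m = PATTERN.search(line)
        if m:
            value, unit = float(m.group(1)), m.group(2)
            total_ms += value if unit == "ms" else value * 1000
            hits += 1

print(f"{hits} overhead warnings, {total_ms / 1000:.1f}s total GC time")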
2022.11.10 13:35:58 WARN es[][o.e.c.InternalClusterInfoService] failed to retrieve stats for node [NoJ4WfHARK-DEgu5GKXOeg]: [sonarqube][127.0.0.1:42683][cluster:monitor/nodes/stats[n]]
2022.11.10 13:35:58 WARN es[][o.e.c.InternalClusterInfoService] failed to retrieve shard stats from node [NoJ4WfHARK-DEgu5GKXOeg]: [sonarqube][127.0.0.1:42683][indices:monitor/stats[n]]
2022.11.10 13:35:59 INFO es[][o.e.n.Node] stopped
2022.11.10 13:35:59 INFO es[][o.e.n.Node] closing ...
2022.11.10 13:35:59 INFO es[][o.e.n.Node] closed
2022.11.10 13:54:00 INFO es[][o.e.n.Node] version[7.17.5], pid[1783], build[default/tar/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/5.14.21-150400.24.28-default/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.16/11.0.16+8-suse-150000.3.83.1-x8664]
2022.11.10 13:54:00 INFO es[][o.e.n.Node] JVM home [/usr/lib64/jvm/java-11-openjdk-11]
2022.11.10 13:54:00 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/chili/sonarqube-9.7.0.61563/elasticsearch, -Des.path.conf=/home/chili/sonarqube-9.7.0.61563/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2022.11.10 13:54:11 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2022.11.10 13:54:11 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2022.11.10 13:54:11 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2022.11.10 13:54:11 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2022.11.10 13:54:11 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2022.11.10 13:54:11 INFO es[][o.e.p.PluginsService] no plugins loaded
2022.11.10 13:54:12 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/ (/dev/sda2)]], net usable_space [19.8gb], net total_space [51gb], types [ext4]
2022.11.10 13:54:12 INFO es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2022.11.10 13:54:17 INFO es[][o.e.n.Node] node name [sonarqube], node ID [NoJ4WfHARK-DEgu5GKXOeg], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2022.11.10 13:55:45 INFO es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2022.11.10 13:55:46 INFO es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2022.11.10 13:55:47 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2022.11.10 13:55:55 INFO es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2022.11.10 13:55:58 INFO es[][o.e.n.Node] initialized
2022.11.10 13:55:58 INFO es[][o.e.n.Node] starting ...
2022.11.10 13:55:59 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:45661}, bound_addresses {127.0.0.1:45661}
2022.11.10 13:56:16 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][15] overhead, spent [532ms] collecting in the last [1.3s]
2022.11.10 13:56:18 INFO es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2022.11.10 13:56:18 INFO es[][o.e.c.c.Coordinator] cluster UUID [pThM6foASlO2PdfSbEivXw]
2022.11.10 13:56:20 INFO es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{pXIookL1QH6yNVMbpuq5jA}{127.0.0.1}{127.0.0.1:45661}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 3, version: 37, delta: master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{pXIookL1QH6yNVMbpuq5jA}{127.0.0.1}{127.0.0.1:45661}{cdfhimrsw}]}
2022.11.10 13:56:23 INFO es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{pXIookL1QH6yNVMbpuq5jA}{127.0.0.1}{127.0.0.1:45661}{cdfhimrsw}]}, term: 3, version: 37, reason: Publication{term=3, version=37}
2022.11.10 13:56:25 INFO es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2022.11.10 13:56:25 INFO es[][o.e.n.Node] started
2022.11.10 13:56:26 INFO es[][o.e.g.GatewayService] recovered [7] indices into cluster_state
2022.11.10 13:56:44 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][42] overhead, spent [426ms] collecting in the last [1s]
2022.11.10 13:56:58 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]]]).
2022.11.10 13:58:38 INFO es[][o.e.n.Node] stopping ...
2022.11.10 13:58:45 INFO es[][o.e.n.Node] stopped
2022.11.10 13:58:45 INFO es[][o.e.n.Node] closing ...
2022.11.10 13:58:45 INFO es[][o.e.n.Node] closed
2022.11.10 14:03:38 INFO es[][o.e.n.Node] version[7.17.5], pid[2589], build[default/tar/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/5.14.21-150400.24.28-default/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.16/11.0.16+8-suse-150000.3.83.1-x8664]
2022.11.10 14:03:38 INFO es[][o.e.n.Node] JVM home [/usr/lib64/jvm/java-11-openjdk-11]
2022.11.10 14:03:38 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/chili/sonarqube-9.7.0.61563/elasticsearch, -Des.path.conf=/home/chili/sonarqube-9.7.0.61563/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2022.11.10 14:03:53 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2022.11.10 14:03:53 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2022.11.10 14:03:53 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2022.11.10 14:03:53 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2022.11.10 14:03:54 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2022.11.10 14:03:54 INFO es[][o.e.p.PluginsService] no plugins loaded
2022.11.10 14:03:56 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/ (/dev/sda2)]], net usable_space [19.9gb], net total_space [51gb], types [ext4]
2022.11.10 14:03:56 INFO es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2022.11.10 14:04:00 INFO es[][o.e.n.Node] node name [sonarqube], node ID [NoJ4WfHARK-DEgu5GKXOeg], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2022.11.10 14:05:46 INFO es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2022.11.10 14:05:47 INFO es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2022.11.10 14:05:50 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2022.11.10 14:06:07 INFO es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2022.11.10 14:06:16 INFO es[][o.e.n.Node] initialized
2022.11.10 14:06:16 INFO es[][o.e.n.Node] starting ...
2022.11.10 14:06:18 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][1] overhead, spent [632ms] collecting in the last [1s]
2022.11.10 14:06:24 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:40319}, bound_addresses {127.0.0.1:40319}
2022.11.10 14:06:35 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][16] overhead, spent [361ms] collecting in the last [1.1s]
2022.11.10 14:06:45 INFO es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2022.11.10 14:06:45 INFO es[][o.e.c.c.Coordinator] cluster UUID [pThM6foASlO2PdfSbEivXw]
2022.11.10 14:06:49 INFO es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{_fabz8aFT_qIfpldORnj3w}{127.0.0.1}{127.0.0.1:40319}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 6, version: 60, delta: master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{_fabz8aFT_qIfpldORnj3w}{127.0.0.1}{127.0.0.1:40319}{cdfhimrsw}]}
2022.11.10 14:06:50 INFO es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{_fabz8aFT_qIfpldORnj3w}{127.0.0.1}{127.0.0.1:40319}{cdfhimrsw}]}, term: 6, version: 60, reason: Publication{term=6, version=60}
2022.11.10 14:06:52 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][32] overhead, spent [284ms] collecting in the last [1s]
2022.11.10 14:06:56 INFO es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2022.11.10 14:06:56 INFO es[][o.e.n.Node] started
2022.11.10 14:07:01 INFO es[][o.e.g.GatewayService] recovered [7] indices into cluster_state
2022.11.10 14:07:29 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][67] overhead, spent [291ms] collecting in the last [1s]
2022.11.10 14:07:44 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]]]).
2022.11.10 14:11:40 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][young][301][32] duration [2s], collections [1]/[1s], total [2s]/[7.3s], memory [88.7mb]->[88.7mb]/[512mb], all_pools {[young] [27mb]->[27mb]/[0b]}{[old] [58.7mb]->[58.7mb]/[512mb]}{[survivor] [3mb]->[3mb]/[0b]}
2022.11.10 14:11:41 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][301] overhead, spent [2s] collecting in the last [1s]
2022.11.10 14:14:05 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][439] overhead, spent [579ms] collecting in the last [1s]
2022.11.10 14:14:36 INFO es[][o.e.n.Node] stopping ...
2022.11.10 14:14:37 INFO es[][o.e.n.Node] stopped
2022.11.10 14:14:37 INFO es[][o.e.n.Node] closing ...
2022.11.10 14:14:37 INFO es[][o.e.n.Node] closed
2022.11.10 14:32:08 INFO es[][o.e.n.Node] version[7.17.5], pid[1696], build[default/tar/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/5.14.21-150400.24.28-default/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.16/11.0.16+8-suse-150000.3.83.1-x8664]
2022.11.10 14:32:08 INFO es[][o.e.n.Node] JVM home [/usr/lib64/jvm/java-11-openjdk-11]
2022.11.10 14:32:08 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/chili/sonarqube-9.7.0.61563/elasticsearch, -Des.path.conf=/home/chili/sonarqube-9.7.0.61563/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2022.11.10 14:32:20 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2022.11.10 14:32:20 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2022.11.10 14:32:20 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2022.11.10 14:32:20 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2022.11.10 14:32:20 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2022.11.10 14:32:20 INFO es[][o.e.p.PluginsService] no plugins loaded
2022.11.10 14:32:21 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/ (/dev/sda2)]], net usable_space [19.8gb], net total_space [51gb], types [ext4]
2022.11.10 14:32:21 INFO es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2022.11.10 14:32:25 INFO es[][o.e.n.Node] node name [sonarqube], node ID [NoJ4WfHARK-DEgu5GKXOeg], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2022.11.10 14:33:51 INFO es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2022.11.10 14:33:52 INFO es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2022.11.10 14:33:53 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2022.11.10 14:34:03 INFO es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2022.11.10 14:34:09 INFO es[][o.e.n.Node] initialized
2022.11.10 14:34:09 INFO es[][o.e.n.Node] starting ...
2022.11.10 14:34:15 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:35453}, bound_addresses {127.0.0.1:35453}
2022.11.10 14:34:27 INFO es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2022.11.10 14:34:27 INFO es[][o.e.c.c.Coordinator] cluster UUID [pThM6foASlO2PdfSbEivXw]
2022.11.10 14:34:28 INFO es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{pKfykJfWRD-g96j9ClgRgg}{127.0.0.1}{127.0.0.1:35453}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 7, version: 82, delta: master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{pKfykJfWRD-g96j9ClgRgg}{127.0.0.1}{127.0.0.1:35453}{cdfhimrsw}]}
2022.11.10 14:34:30 INFO es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{pKfykJfWRD-g96j9ClgRgg}{127.0.0.1}{127.0.0.1:35453}{cdfhimrsw}]}, term: 7, version: 82, reason: Publication{term=7, version=82}
2022.11.10 14:34:30 INFO es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2022.11.10 14:34:30 INFO es[][o.e.n.Node] started
2022.11.10 14:34:33 INFO es[][o.e.g.GatewayService] recovered [7] indices into cluster_state
2022.11.10 14:34:37 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][25] overhead, spent [397ms] collecting in the last [1.3s]
2022.11.10 14:34:53 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][40] overhead, spent [301ms] collecting in the last [1s]
2022.11.10 14:35:05 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]]]).
2022.11.10 14:36:47 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][148] overhead, spent [291ms] collecting in the last [1s]
2022.11.10 14:42:10 INFO es[][o.e.n.Node] stopping ...
2022.11.10 14:42:11 INFO es[][o.e.n.Node] stopped
2022.11.10 14:42:11 INFO es[][o.e.n.Node] closing ...
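By this point the log shows four clean start/stop cycles of the same node. A small sketch to extract just the node lifecycle events from es.log and view that timeline at a glance (same assumed log path as in the earlier sketches):

import re

# Print only the o.e.n.Node lifecycle entries (initialized/started/stopping/
# stopped/closing/closed) together with their timestamps.
LIFECYCLE = re.compile(
    r"^(\S+ \S+) INFO es\[\]\[o\.e\.n\.Node\] "
    r"(initialized|started|starting \.\.\.|stopping \.\.\.|stopped|closing \.\.\.|closed)"
)
with open("/home/chili/sonarqube-9.7.0.61563/logs/es.log", encoding="utf-8") as f:
    for line in f:
        m = LIFECYCLE.match(line)
        if m:
            print(m.group(1), m.group(2))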
2022.11.10 14:42:12 INFO es[][o.e.n.Node] closed 2022.11.10 17:07:14 DEBUG es[][o.e.b.SystemCallFilter] Linux seccomp filter installation successful, threads: [all] 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] java.class.path: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] sun.boot.class.path: null 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: 
/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: 
/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar 2022.11.10 17:07:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 17:07:18 DEBUG es[][o.e.c.n.IfConfig] configuration: lo inet 127.0.0.1 netmask:255.0.0.0 scope:host inet6 ::1 prefixlen:128 scope:host UP LOOPBACK mtu:65536 index:1 eth0 inet 192.168.161.171 netmask:255.255.255.0 broadcast:192.168.161.255 scope:site hardware 00:50:56:A3:2C:79 UP MULTICAST mtu:1500 index:2 2022.11.10 17:07:22 INFO es[][o.e.n.Node] version[7.17.5], pid[1688], build[default/tar/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/5.14.21-150400.24.28-default/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.16/11.0.16+8-suse-150000.3.83.1-x8664] 2022.11.10 17:07:22 INFO es[][o.e.n.Node] JVM home [/usr/lib64/jvm/java-11-openjdk-11] 2022.11.10 17:07:23 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/chili/sonarqube-9.7.0.61563/elasticsearch, -Des.path.conf=/home/chili/sonarqube-9.7.0.61563/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false] 2022.11.10 17:07:23 DEBUG es[][o.e.n.Node] using config [/home/chili/sonarqube-9.7.0.61563/temp/conf/es], data [[/home/chili/sonarqube-9.7.0.61563/data/es7]], logs [/home/chili/sonarqube-9.7.0.61563/logs], plugins [/home/chili/sonarqube-9.7.0.61563/elasticsearch/plugins] 2022.11.10 17:07:23 DEBUG 
es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-7.2.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-util-7.2.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-commons-7.2.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar 2022.11.10 17:07:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/lang-painless-7.17.5.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: 
/home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/spi/elasticsearch-scripting-painless-spi-7.17.5.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-tree-7.2.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-analysis-7.2.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar 2022.11.10 17:07:24 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining 
jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/parent-join/parent-join-client-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: 
/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar 2022.11.10 17:07:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/elasticsearch-ssl-config-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/commons-logging-1.1.3.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: 
/home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpcore-4.4.12.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/elasticsearch-rest-client-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpasyncclient-4.1.4.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar 2022.11.10 17:07:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpclient-4.5.10.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining 
jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/reindex-client-7.17.5.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpcore-nio-4.4.12.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar 2022.11.10 17:07:35 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/commons-codec-1.11.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/spi/elasticsearch-scripting-painless-spi-7.17.5.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/analysis-common/analysis-common-7.17.5.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/spi/elasticsearch-scripting-painless-spi-7.17.5.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar 2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: 
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/analysis-common/analysis-common-7.17.5.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar
2022.11.10 17:07:36 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-buffer-4.1.66.Final.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-codec-http-4.1.66.Final.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-resolver-4.1.66.Final.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-codec-4.1.66.Final.jar
2022.11.10 17:07:37 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-transport-4.1.66.Final.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-handler-4.1.66.Final.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/transport-netty4-client-7.17.5.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-common-4.1.66.Final.jar
2022.11.10 17:07:38 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar
2022.11.10 17:07:38 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2022.11.10 17:07:38 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2022.11.10 17:07:38 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2022.11.10 17:07:38 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2022.11.10 17:07:38 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2022.11.10 17:07:38 INFO es[][o.e.p.PluginsService] no plugins loaded
2022.11.10 17:07:40 DEBUG es[][o.e.e.NodeEnvironment] using node location [[DataPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0, indicesPath=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices, fileStore=/ (/dev/sda2), majorDeviceNumber=8, minorDeviceNumber=2}]], local_lock_id [0]
2022.11.10 17:07:40 DEBUG es[][o.e.e.NodeEnvironment] node data locations details: -> /home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0, free_space [22.4gb], usable_space [19.8gb], total_space [51gb], mount [/ (/dev/sda2)], type [ext4]
2022.11.10 17:07:40 INFO es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2022.11.10 17:07:44 INFO es[][o.e.n.Node] node name [sonarqube], node ID [NoJ4WfHARK-DEgu5GKXOeg], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2022.11.10 17:07:44 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_coordination], size [1], queue size [1k]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [snapshot_meta], core [1], max [6], keep alive [30s]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [4], keep alive [5m]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_critical_write], size [1], queue size [1.5k]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [listener], size [1], queue size [unbounded]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [1], keep alive [5m]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_write], size [1], queue size [1k]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [1], keep alive [5m]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [auto_complete], size [1], queue size [100]
2022.11.10 17:07:45 DEBUG es[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [sonarqube/search] will adjust queue by [50] when determining automatic queue size
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search], size [4], queue size [1k]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [1], keep alive [5m]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [4], keep alive [5m]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [2], keep alive [5m]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [get], size [2], queue size [1k]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [analyze], size [1], queue size [16]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_read], size [1], queue size [2k]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_critical_read], size [1], queue size [2k]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [write], size [2], queue size [10k]
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [1], keep alive [5m]
2022.11.10 17:07:45 DEBUG es[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [sonarqube/search_throttled] will adjust queue by [50] when determining automatic queue size
2022.11.10 17:07:45 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_throttled], size [1], queue size [100]
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.l.InternalLoggerFactory] Using Log4J2 as the default logging framework
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: true
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe: unavailable (io.netty.noUnsafe)
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.PlatformDependent0] Java version: 11
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): unavailable
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.PlatformDependent] maxDirectMemory: 536870912 bytes (maybe)
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /home/chili/sonarqube-9.7.0.61563/temp (java.io.tmpdir)
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2022.11.10 17:07:47 DEBUG es[][i.n.u.i.CleanerJava9] java.nio.ByteBuffer.cleaner(): unavailable
java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable
	at io.netty.util.internal.CleanerJava9.<clinit>(CleanerJava9.java:68) [netty-common-4.1.66.Final.jar:4.1.66.Final]
	at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:193) [netty-common-4.1.66.Final.jar:4.1.66.Final]
	at io.netty.util.ConstantPool.<init>(ConstantPool.java:34) [netty-common-4.1.66.Final.jar:4.1.66.Final]
	at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27) [netty-common-4.1.66.Final.jar:4.1.66.Final]
	at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27) [netty-common-4.1.66.Final.jar:4.1.66.Final]
	at org.elasticsearch.http.netty4.Netty4HttpServerTransport.<clinit>(Netty4HttpServerTransport.java:294) [transport-netty4-client-7.17.5.jar:7.17.5]
	at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:45) [transport-netty4-client-7.17.5.jar:7.17.5]
	at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:84) [elasticsearch-7.17.5.jar:7.17.5]
	at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271) [?:?]
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) [?:?]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) [?:?]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) [?:?]
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) [?:?]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [?:?]
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) [?:?]
	at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:84) [elasticsearch-7.17.5.jar:7.17.5]
	at org.elasticsearch.node.Node.<init>(Node.java:483) [elasticsearch-7.17.5.jar:7.17.5]
	at org.elasticsearch.node.Node.<init>(Node.java:309) [elasticsearch-7.17.5.jar:7.17.5]
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234) [elasticsearch-7.17.5.jar:7.17.5]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234) [elasticsearch-7.17.5.jar:7.17.5]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) [elasticsearch-7.17.5.jar:7.17.5]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) [elasticsearch-7.17.5.jar:7.17.5]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) [elasticsearch-7.17.5.jar:7.17.5]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) [elasticsearch-7.17.5.jar:7.17.5]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) [elasticsearch-cli-7.17.5.jar:7.17.5]
	at org.elasticsearch.cli.Command.main(Command.java:77) [elasticsearch-cli-7.17.5.jar:7.17.5]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) [elasticsearch-7.17.5.jar:7.17.5]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) [elasticsearch-7.17.5.jar:7.17.5]
2022.11.10 17:07:48 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: true
2022.11.10 17:08:37 DEBUG es[][o.e.s.ScriptService] using script cache with max_size [3000], expire [0s]
2022.11.10 17:08:49 DEBUG es[][o.e.d.z.ElectMasterService] using minimum_master_nodes [-1]
2022.11.10 17:08:58 DEBUG es[][o.e.m.j.JvmGcMonitorService] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
2022.11.10 17:08:59 DEBUG es[][o.e.m.o.OsService] using refresh_interval [1s]
2022.11.10 17:08:59 DEBUG es[][o.e.m.p.ProcessService] using refresh_interval [1s]
2022.11.10 17:08:59 DEBUG es[][o.e.m.j.JvmService] using refresh_interval [1s]
2022.11.10 17:08:59 DEBUG es[][o.e.m.f.FsService] using refresh_interval [1s]
2022.11.10 17:08:59 DEBUG es[][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2022.11.10 17:08:59 DEBUG es[][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] using [cluster_concurrent_rebalance] with [2]
2022.11.10 17:09:02 DEBUG es[][o.e.c.r.a.d.ThrottlingAllocationDecider] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
2022.11.10 17:09:02 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [51.1mb] max filter count [10000]
2022.11.10 17:09:02 DEBUG es[][o.e.i.IndexingMemoryController] using indexing buffer size [51.1mb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
2022.11.10 17:09:06 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2022.11.10 17:09:06 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.targetRecords: 4
2022.11.10 17:09:06 INFO es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2022.11.10 17:09:06 DEBUG es[][o.e.h.n.Netty4HttpServerTransport] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[64kb], max_composite_buffer_components[69905], pipelining_max_events[10000]
2022.11.10 17:09:06 INFO es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2022.11.10 17:09:06 DEBUG es[][o.e.d.SettingsBasedSeedHostsProvider] using initial hosts [127.0.0.1]
2022.11.10 17:09:07 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2022.11.10 17:09:14 INFO es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2022.11.10 17:09:16 DEBUG es[][o.e.n.Node] initializing HTTP handlers ...
2022.11.10 17:09:18 INFO es[][o.e.n.Node] initialized
2022.11.10 17:09:18 INFO es[][o.e.n.Node] starting ...
2022.11.10 17:09:18 DEBUG es[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 4
2022.11.10 17:09:18 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2022.11.10 17:09:18 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2022.11.10 17:09:18 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: true
2022.11.10 17:09:18 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2022.11.10 17:09:18 DEBUG es[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: unavailable
2022.11.10 17:09:19 DEBUG es[][o.e.t.n.Netty4Transport] using profile[default], worker_count[2], port[37145], bind_host[[127.0.0.1]], publish_host[[127.0.0.1]], receive_predictor[64kb->64kb]
2022.11.10 17:09:19 DEBUG es[][o.e.t.TcpTransport] binding server bootstrap to: [127.0.0.1]
2022.11.10 17:09:20 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.processId: 1688 (auto-detected)
2022.11.10 17:09:20 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv4Stack: false
2022.11.10 17:09:20 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv6Addresses: false
2022.11.10 17:09:20 DEBUG es[][i.n.u.NetUtilInitializations] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
2022.11.10 17:09:20 DEBUG es[][i.n.u.NetUtil] /proc/sys/net/core/somaxconn: 4096
2022.11.10 17:09:20 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 00:50:56:ff:fe:a3:2c:79 (auto-detected)
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 4
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 0
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2022.11.10 17:09:20 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
2022.11.10 17:09:20 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2022.11.10 17:09:20 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
2022.11.10 17:09:20 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2022.11.10 17:09:20 DEBUG es[][o.e.t.TcpTransport] Bound profile [default] to address {127.0.0.1:37145}
2022.11.10 17:09:20 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:37145}, bound_addresses {127.0.0.1:37145}
2022.11.10 17:09:25 DEBUG es[][o.e.m.j.JvmGcMonitorService] [gc][young][7][13] duration [432ms], collections [1]/[1.2s], total [432ms]/[2.5s], memory [77.5mb]->[39.4mb]/[512mb], all_pools {[young] [42mb]->[0b]/[0b]}{[old] [30.5mb]->[35.4mb]/[512mb]}{[survivor] [6mb]->[4mb]/[0b]}
2022.11.10 17:09:25 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][7] overhead, spent [432ms] collecting in the last [1.2s]
2022.11.10 17:09:29 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [2811ms]; wrote full state with [7] indices
2022.11.10 17:09:29 INFO es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2022.11.10 17:09:29 DEBUG es[][o.e.d.SeedHostsResolver] using max_concurrent_resolvers [10], resolver timeout [5s]
2022.11.10 17:09:29 INFO es[][o.e.c.c.Coordinator] cluster UUID [pThM6foASlO2PdfSbEivXw]
2022.11.10 17:09:29 DEBUG es[][o.e.t.TransportService] now accepting incoming requests
2022.11.10 17:09:29 DEBUG es[][o.e.c.c.Coordinator] startInitialJoin: coordinator becoming CANDIDATE in term 7 (was null, lastKnownLeader was [Optional.empty])
2022.11.10 17:09:29 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=17, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2022.11.10 17:09:29 DEBUG es[][o.e.n.Node] waiting to join the cluster. timeout [30s]
2022.11.10 17:09:29 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=17, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} starting election
2022.11.10 17:09:29 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=619, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2022.11.10 17:09:29 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={}, electionStarted=false, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=7}, isClosed=false} requesting pre-votes from [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}]
2022.11.10 17:09:29 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}=PreVoteResponse{currentTerm=7, lastAcceptedTerm=7, lastAcceptedVersion=105}}, electionStarted=true, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=7}, isClosed=false} added PreVoteResponse{currentTerm=7, lastAcceptedTerm=7, lastAcceptedVersion=105} from {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}, starting election
2022.11.10 17:09:29 DEBUG es[][o.e.c.c.Coordinator] starting election with StartJoinRequest{term=8,node={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}}
2022.11.10 17:09:29 DEBUG es[][o.e.c.c.Coordinator] joinLeaderInTerm: for [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] with term 8
2022.11.10 17:09:29 DEBUG es[][o.e.c.c.CoordinationState] handleStartJoin: leaving term [7] due to StartJoinRequest{term=8,node={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}}
2022.11.10 17:09:30 DEBUG es[][o.e.c.c.JoinHelper] attempting to join {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=7, optionalJoin=Optional[Join{term=8, lastAcceptedTerm=7, lastAcceptedVersion=105, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}}]}
2022.11.10 17:09:30 DEBUG es[][o.e.c.c.JoinHelper] successful response to StartJoinRequest{term=8,node={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}} from {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}
2022.11.10 17:09:30 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: added join Join{term=8, lastAcceptedTerm=7, lastAcceptedVersion=105, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}} from [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] for election, electionWon=true lastAcceptedTerm=7 lastAcceptedVersion=105
2022.11.10 17:09:30 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: election won in term [8] with VoteCollection{votes=[NoJ4WfHARK-DEgu5GKXOeg], joins=[Join{term=8, lastAcceptedTerm=7, lastAcceptedVersion=105, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}}]}
2022.11.10 17:09:30 DEBUG es[][o.e.c.c.Coordinator] handleJoinRequest: coordinator becoming LEADER in term 8 (was CANDIDATE, lastKnownLeader was [Optional.empty])
2022.11.10 17:09:30 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 17:09:30 DEBUG es[][o.e.c.c.JoinHelper] received a join request for an existing node [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}]
2022.11.10 17:09:30 DEBUG es[][o.e.c.s.MasterService] took [106ms] to compute cluster state update for [elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 17:09:30 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [106], source [elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 17:09:30 INFO es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 8, version: 106, delta: master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}]}
2022.11.10 17:09:30 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [106]
2022.11.10 17:09:30 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=619, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} not starting election
2022.11.10 17:09:31 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [106] with size [5115]
2022.11.10 17:09:31 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [585ms]; wrote full state with [7] indices
2022.11.10 17:09:31 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=106}]: execute
2022.11.10 17:09:31 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [106], source [Publication{term=8, version=106}]
2022.11.10 17:09:31 INFO es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}]}, term: 8, version: 106, reason: Publication{term=8, version=106}
2022.11.10 17:09:31 DEBUG es[][o.e.c.NodeConnectionsService] connecting to {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}
2022.11.10 17:09:32 DEBUG es[][o.e.c.NodeConnectionsService] connected to {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 106
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 106
2022.11.10 17:09:32 DEBUG es[][o.e.i.SystemIndexManager] Waiting until state has been recovered
2022.11.10 17:09:32 DEBUG es[][o.e.g.GatewayService] performing state recovery...
2022.11.10 17:09:32 DEBUG es[][o.e.c.l.NodeAndClusterIdStateListener] Received cluster state update. Setting nodeId=[NoJ4WfHARK-DEgu5GKXOeg] and clusterUuid=[pThM6foASlO2PdfSbEivXw]
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=106}]: took [438ms] done applying updated cluster state (version: 106, uuid: 39ipEYblTZmHrXI364XHnA)
2022.11.10 17:09:32 DEBUG es[][o.e.c.c.JoinHelper] releasing [1] connections on successful cluster state application
2022.11.10 17:09:32 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=106}
2022.11.10 17:09:32 DEBUG es[][o.e.c.c.JoinHelper] successfully joined {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=7, optionalJoin=Optional[Join{term=8, lastAcceptedTerm=7, lastAcceptedVersion=105, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}}]}
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.MasterService] took [3ms] to notify listeners on successful publication of cluster state (version: 106, uuid: 39ipEYblTZmHrXI364XHnA) for [elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(post-join reroute)]
2022.11.10 17:09:32 DEBUG es[][o.e.h.AbstractHttpServerTransport] Bound http to address {127.0.0.1:9001}
2022.11.10 17:09:32 INFO es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2022.11.10 17:09:32 INFO es[][o.e.n.Node] started
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.MasterService] took [263ms] to compute cluster state update for [cluster_reroute(post-join reroute)]
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.MasterService] took [106ms] to notify listeners on unchanged cluster state for [cluster_reroute(post-join reroute)]
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [update snapshot after shards started [false] or node configuration changed [true]]
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.MasterService] took [1ms] to compute cluster state update for [update snapshot after shards started [false] or node configuration changed [true]]
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [update snapshot after shards started [false] or node configuration changed [true]]
2022.11.10 17:09:32 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [local-gateway-elected-state]
2022.11.10 17:09:33 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/3]
2022.11.10 17:09:33 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/0]
2022.11.10 17:09:33 DEBUG es[][o.e.c.r.a.a.BalancedShardsAllocator] skipping rebalance due to in-flight shard/store fetches
2022.11.10 17:09:33 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/1]
2022.11.10 17:09:33 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/2]
2022.11.10 17:09:33 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][0] shard state info found: [primary [true], allocation [[id=EIDR4Z8sRbetiOgv_Gl6eg]]]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][2] shard state info found: [primary [true], allocation [[id=Hjj-ITd1RhefzuwGKvoIVg]]]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][1] shard state info found: [primary [true], allocation [[id=bVQrg9ZkS5yhnq7qEQFkGg]]]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/4]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/1]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][4] shard state info found: [primary [true], allocation [[id=yFj13EkETzO7NoXsHJA0qQ]]]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][3] shard state info found: [primary [true], allocation [[id=GMXXUqyWTgyZHRuUQEBQcg]]]
2022.11.10 17:09:34 DEBUG es[][o.e.c.s.MasterService] took [1.1s] to compute cluster state update for [local-gateway-elected-state]
2022.11.10 17:09:34 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [107], source [local-gateway-elected-state]
2022.11.10 17:09:34 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [107]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [users][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/qbsuDVZZRrqTwp7HSmmTgw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/qbsuDVZZRrqTwp7HSmmTgw/0]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [users][0] shard state info found: [primary [true], allocation [[id=7bV3f8r_QgiH2TJFI0Tp9Q]]]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/0]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][1] shard state info found: [primary [true], allocation [[id=iYG4colRQrOYKH5Yc8PSZA]]]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/4]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][0] shard state info found: [primary [true], allocation [[id=RC24NNOeRQCfrMHX0C34QA]]]
2022.11.10 17:09:34 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/3]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][3] shard state info found: [primary [true], allocation [[id=2HS0XG1gQR6iBC5chKTVgg]]]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/2]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][4] shard state info found: [primary [true], allocation [[id=kSUj9IavSMSvCn04gk5QwQ]]]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/0]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/1]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][2] shard state info found: [primary [true], allocation [[id=55kP9r7eSjaPkAx2Hput1A]]]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/0]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][0] shard state info found: [primary [true], allocation [[id=1PnVxJHzRAyB7duGMdX5iQ]]]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/2]
2022.11.10 17:09:35 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [107] with size [5231]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][2] shard state info found: [primary [true], allocation [[id=cfnefW7VREqduM2gqZw7DA]]]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][1] shard state info found: [primary [true], allocation [[id=mX6GCP9-Qw6hq_eiNMarnQ]]]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/1]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][1] shard state info found: [primary [true], allocation [[id=gpYEHqhWQRebcYya4bLGXw]]]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][0] shard state info found: [primary [true], allocation [[id=40lmXE2jRQmia9KK1O9esg]]]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/4]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/3]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/1]
2022.11.10 17:09:35 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [428ms]; wrote global metadata [false] and metadata for [0] indices and skipped [7] unchanged indices
2022.11.10 17:09:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=107}]: execute
2022.11.10 17:09:35 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [107], source [Publication{term=8, version=107}]
2022.11.10 17:09:35 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 107
2022.11.10 17:09:35 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 107
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][4] shard state info found: [primary [true], allocation [[id=cemOqkVMTri0ZpaAE400sQ]]]
2022.11.10 17:09:35 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 107
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/4]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][3] shard state info found: [primary [true], allocation [[id=dMKfOEhTTS-1oGo7YS-QXQ]]]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][4] shard state info found: [primary [true], allocation [[id=m26EZ-e2ShaXq3vCmyTSxA]]]
2022.11.10 17:09:35 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][1] shard state info found: [primary [true], allocation [[id=N1eAWJPOSXWHssmbXnJsZg]]]
2022.11.10 17:09:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=107}]: took [203ms] done applying updated cluster state (version: 107, uuid: MPRGKqx7QCG1tK6Rt6f3rw)
2022.11.10 17:09:36 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=107}
2022.11.10 17:09:36 INFO es[][o.e.g.GatewayService] recovered [7] indices into cluster_state
2022.11.10 17:09:36 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 107, uuid: MPRGKqx7QCG1tK6Rt6f3rw) for [local-gateway-elected-state]
2022.11.10 17:09:36 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(async_shard_fetch)]
2022.11.10 17:09:36 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/2]
2022.11.10 17:09:36 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][2] shard state info found: [primary [true], allocation [[id=HP0H87wHSdm_mC_bEh6NLw]]]
2022.11.10 17:09:36 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/3]
2022.11.10 17:09:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][2]: found 1 allocation candidates of [views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[Hjj-ITd1RhefzuwGKvoIVg]]
2022.11.10 17:09:36 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/0]
2022.11.10 17:09:36 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][3] shard state info found: [primary [true], allocation [[id=yviWUVOQQHWeSrk4yU72dg]]]
2022.11.10 17:09:36 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [metadatas][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/N-wJ8qPTTTyTYoXbXg3F8g/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/N-wJ8qPTTTyTYoXbXg3F8g/0]
2022.11.10 17:09:36 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][0] shard state info found: [primary [true], allocation [[id=iDzORoXTRHWBllF7l5Vi-A]]]
2022.11.10 17:09:36 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][2]: allocating [[views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][1]: found 1 allocation candidates of [views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[bVQrg9ZkS5yhnq7qEQFkGg]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][1]: allocating [[views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][4]: found 1 allocation candidates of [views][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[yFj13EkETzO7NoXsHJA0qQ]]
2022.11.10 17:09:37 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkAccessible: true
2022.11.10 17:09:37 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkBounds: true
2022.11.10 17:09:37 DEBUG es[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@1f6a59f3
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][4]: allocating [[views][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][3]: found 1 allocation candidates of [views][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[GMXXUqyWTgyZHRuUQEBQcg]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][3]: allocating [[views][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][0]: found 1 allocation candidates of [views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[EIDR4Z8sRbetiOgv_Gl6eg]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][0]: throttling allocation [[views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5e1385f2]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: found 1 allocation candidates of [users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[7bV3f8r_QgiH2TJFI0Tp9Q]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: throttling allocation [[users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@55d42038]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[iYG4colRQrOYKH5Yc8PSZA]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@278eacf2]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[55kP9r7eSjaPkAx2Hput1A]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@38447eab]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[RC24NNOeRQCfrMHX0C34QA]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: throttling allocation [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@19fb06f3]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[kSUj9IavSMSvCn04gk5QwQ]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: throttling allocation [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5d2ea170]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[2HS0XG1gQR6iBC5chKTVgg]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [metadatas][0] shard state info found: [primary [true], allocation [[id=bwBI1A3QSsKXe0EE5b0kDw]]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: throttling allocation [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@55a6f552]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[mX6GCP9-Qw6hq_eiNMarnQ]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1dd4d6b6]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[40lmXE2jRQmia9KK1O9esg]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@52fbbd2d]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6ea2034d]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]]
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@42fe4d78]] on primary allocation
2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1
allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]] 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@585da4e9]] on primary allocation 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[1PnVxJHzRAyB7duGMdX5iQ]] 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2ac058a4]] on primary allocation 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[dMKfOEhTTS-1oGo7YS-QXQ]] 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@675be94f]] on primary allocation 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]] 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, 
allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@c3f1ab3]] on primary allocation 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]] 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5f9c8f7d]] on primary allocation 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]] 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@279d046f]] on primary allocation 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]] 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@54fafbc3]] on primary allocation 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]] 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], 
recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@78b2030e]] on primary allocation 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]] 2022.11.10 17:09:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4049fa7b]] on primary allocation 2022.11.10 17:09:38 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][18] overhead, spent [295ms] collecting in the last [1.1s] 2022.11.10 17:09:38 DEBUG es[][o.e.c.s.MasterService] took [2.3s] to compute cluster state update for [cluster_reroute(async_shard_fetch)] 2022.11.10 17:09:38 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [108], source [cluster_reroute(async_shard_fetch)] 2022.11.10 17:09:38 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [108] 2022.11.10 17:09:38 DEBUG es[][i.n.h.c.c.Brotli] brotli4j not in the classpath; Brotli support will be unavailable. 
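The long run of "found 1 allocation candidates ... throttling allocation ... DecidedNode" entries above is the gateway allocator deferring primaries that already have a valid on-disk copy: Elasticsearch caps concurrent initial primary recoveries per node via cluster.routing.allocation.node_initial_primaries_recoveries (default 4), so each reroute round starts a handful of primaries and re-queues the rest, first with allocation_status[fetching_shard_data] and later [deciders_throttled]. A minimal Python sketch for inspecting that decision and, if desired, raising the limit — assuming the bundled node answers HTTP on 127.0.0.1:9001 (SonarQube's sonar.search.port default); the host, port, the views/0 shard chosen here, and the new limit of 8 are all illustrative:

import json
import requests  # third-party HTTP client: pip install requests

ES = "http://127.0.0.1:9001"  # assumed HTTP address of the embedded node

# Ask the allocator why one of the unassigned primaries has not started yet;
# a throttled shard reports a THROTTLE decision from the allocation deciders.
explain = requests.get(
    f"{ES}/_cluster/allocation/explain",
    json={"index": "views", "shard": 0, "primary": True},
).json()
print(json.dumps(explain, indent=2))

# Optionally allow more concurrent initial primary recoveries per node
# (the default is 4; the value 8 here is purely illustrative).
resp = requests.put(
    f"{ES}/_cluster/settings",
    json={"persistent": {
        "cluster.routing.allocation.node_initial_primaries_recoveries": 8,
    }},
)
resp.raise_for_status()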
2022.11.10 17:09:38 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [108] with uuid [E87LiHJbTX-W2_57NTPJEQ], diff size [1418] 2022.11.10 17:09:39 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxCapacityPerThread: disabled 2022.11.10 17:09:39 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxSharedCapacityFactor: disabled 2022.11.10 17:09:39 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.linkCapacity: disabled 2022.11.10 17:09:39 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.ratio: disabled 2022.11.10 17:09:39 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.delayedQueue.ratio: disabled 2022.11.10 17:09:39 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [239ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 17:09:39 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=108}]: execute 2022.11.10 17:09:39 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [108], source [Publication{term=8, version=108}] 2022.11.10 17:09:39 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 108 2022.11.10 17:09:39 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 108 2022.11.10 17:09:39 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[views/Dv2T0qmGRX2UjXF3FDCAMw]] creating index 2022.11.10 17:09:39 DEBUG es[][o.e.i.IndicesService] creating Index [[views/Dv2T0qmGRX2UjXF3FDCAMw]], shards [5]/[0] - reason [CREATE_INDEX] 2022.11.10 17:09:43 DEBUG es[][o.e.i.m.MapperService] [[views/Dv2T0qmGRX2UjXF3FDCAMw]] added mapping [view], source [{"view":{"dynamic":"false","properties":{"projects":{"type":"keyword"},"uuid":{"type":"keyword"}}}}] 2022.11.10 17:09:43 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][1] creating shard with primary term [5] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] [views][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/1] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] [views][1] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/1, shard=[views][1]}] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] creating shard_id [views][1] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 17:09:43 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][4] creating shard with primary term [5] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] [views][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/4] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] [views][4] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/4, shard=[views][4]}] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] creating shard_id [views][4] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.IndexShard] state: 
[CREATED] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 17:09:43 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][2] creating shard with primary term [5] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] [views][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/2] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] [views][2] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/2, shard=[views][2]}] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] creating shard_id [views][2] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 17:09:43 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 17:09:43 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][3] creating shard with primary term [5] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] [views][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/3] 2022.11.10 17:09:43 DEBUG es[][o.e.i.IndexService] [views][3] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/3, shard=[views][3]}] 2022.11.10 17:09:44 DEBUG es[][o.e.i.IndexService] creating shard_id [views][3] 2022.11.10 17:09:44 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 17:09:44 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 17:09:44 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 17:09:44 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
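Each [CREATED]->[RECOVERING] transition above is a primary shard rebuilding itself from the segment files already in its data path (recovery_source[existing store recovery]); once the store is opened and the local translog replayed, the shard moves on to POST_RECOVERY. Progress is observable from outside while this runs; a small sketch under the same host/port assumption as above, polling the _cat recovery API until nothing remains active:

import time
import requests  # third-party HTTP client: pip install requests

ES = "http://127.0.0.1:9001"  # assumed HTTP address of the embedded node

# Each row of /_cat/recovery lists index, shard, stage and recovery type
# (the store recoveries in this log show up as type existing_store).
while True:
    body = requests.get(
        f"{ES}/_cat/recovery", params={"active_only": "true"}
    ).text.strip()
    if not body:
        print("no active recoveries")
        break
    print(body)
    time.sleep(1)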
2022.11.10 17:09:44 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 108 2022.11.10 17:09:44 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=108}]: took [5s] done applying updated cluster state (version: 108, uuid: E87LiHJbTX-W2_57NTPJEQ) 2022.11.10 17:09:44 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=108} 2022.11.10 17:09:44 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 108, uuid: E87LiHJbTX-W2_57NTPJEQ) for [cluster_reroute(async_shard_fetch)] 2022.11.10 17:09:44 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(async_shard_fetch)] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][0]: found 1 allocation candidates of [views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EIDR4Z8sRbetiOgv_Gl6eg]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][0]: throttling allocation [[views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4341824d]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: found 1 allocation candidates of [users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[7bV3f8r_QgiH2TJFI0Tp9Q]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: throttling allocation [[users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@61c44f1b]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iYG4colRQrOYKH5Yc8PSZA]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@59799c]] on primary allocation 2022.11.10 17:09:44 DEBUG 
es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[2HS0XG1gQR6iBC5chKTVgg]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: throttling allocation [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7bcc0d24]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kSUj9IavSMSvCn04gk5QwQ]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: throttling allocation [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@51f8578e]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[55kP9r7eSjaPkAx2Hput1A]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6bd9a44b]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[RC24NNOeRQCfrMHX0C34QA]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: throttling allocation [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to 
[[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@50c60294]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[40lmXE2jRQmia9KK1O9esg]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3fa72b18]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[mX6GCP9-Qw6hq_eiNMarnQ]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@17068872]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5931acb7]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[dMKfOEhTTS-1oGo7YS-QXQ]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; 
bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2a868c13]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@64c9418f]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1PnVxJHzRAyB7duGMdX5iQ]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@75dca8c8]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@23bf0475]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: 
[[iDzORoXTRHWBllF7l5Vi-A]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7a65e94b]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@b97629c]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@48a586bf]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2ad62f06]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], 
s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2b47b67e]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]] 2022.11.10 17:09:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@34e83d9]] on primary allocation 2022.11.10 17:09:44 DEBUG es[][o.e.c.s.MasterService] took [336ms] to compute cluster state update for [cluster_reroute(async_shard_fetch)] 2022.11.10 17:09:44 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(async_shard_fetch)] 2022.11.10 17:09:44 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:44 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:44 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:44 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:44 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:44 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:44 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:44 DEBUG es[][o.e.i.t.Translog] recovered local 
translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:45 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AwggODDaTY6LQoqXqv7MAQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=EW3PbiTdS_-RpRXJ5SbRmw}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AwggODDaTY6LQoqXqv7MAQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=EW3PbiTdS_-RpRXJ5SbRmw}]}] 2022.11.10 17:09:45 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=Kq7BjiMtSCCrJYG50SWb7w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=mA9bMTU4S1qbjwHqEfs8uA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=Kq7BjiMtSCCrJYG50SWb7w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=mA9bMTU4S1qbjwHqEfs8uA}]}] 2022.11.10 17:09:45 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=e8u_CbGkRxKcWgyRk8N7JA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=xBiOmM3qTj64lr1Ir3YLuQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=e8u_CbGkRxKcWgyRk8N7JA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=xBiOmM3qTj64lr1Ir3YLuQ}]}] 2022.11.10 17:09:45 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=46dRMQDYRyKc0268f0X1yg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=pj7VEYXzSeq0DpVComlqWA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=46dRMQDYRyKc0268f0X1yg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=pj7VEYXzSeq0DpVComlqWA}]}] 2022.11.10 17:09:45 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 17:09:45 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.5s] 2022.11.10 17:09:45 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 17:09:45 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.6s] 2022.11.10 17:09:45 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 17:09:45 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [2.2s] 2022.11.10 17:09:45 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 17:09:45 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.6s] 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId 
[[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] received shard started for [StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] received shard started for [StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] received shard started for [StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] received shard started for [StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:45 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] starting shard [views][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=GMXXUqyWTgyZHRuUQEBQcg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; 
bootstrap_history_uuid=false]}]) 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] starting shard [views][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=yFj13EkETzO7NoXsHJA0qQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 17:09:45 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] starting shard [views][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=bVQrg9ZkS5yhnq7qEQFkGg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 17:09:45 DEBUG es[][o.e.c.s.MasterService] took [9ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:09:45 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [109], source [shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:09:45 DEBUG es[][o.e.c.s.MasterService] 
publishing cluster state version [109]
2022.11.10 17:09:45 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [109] with uuid [l9_2j2xOQr-kURy-720F4Q], diff size [1160]
2022.11.10 17:09:46 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [423ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=109}]: execute
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [109], source [Publication{term=8, version=109}]
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 109
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 109
2022.11.10 17:09:46 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:46 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:46 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:46 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] received shard started for [StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:46 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 109
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=109}]: took [230ms] done applying updated cluster state (version: 109, uuid: l9_2j2xOQr-kURy-720F4Q)
2022.11.10 17:09:46 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=109}
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.MasterService] took [2ms] to notify listeners on successful publication of cluster state (version: 109, uuid: l9_2j2xOQr-kURy-720F4Q) for [shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:46 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] starting shard [views][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=Hjj-ITd1RhefzuwGKvoIVg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.MasterService] took [4ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [110], source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [110]
2022.11.10 17:09:46 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [110] with uuid [gKKk1zXcRJSlXlCNlE-Pbw], diff size [1149]
2022.11.10 17:09:46 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [241ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=110}]: execute
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [110], source [Publication{term=8, version=110}]
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 110
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 110
2022.11.10 17:09:46 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 110
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=110}]: took [233ms] done applying updated cluster state (version: 110, uuid: gKKk1zXcRJSlXlCNlE-Pbw)
2022.11.10 17:09:46 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=110}
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 110, uuid: gKKk1zXcRJSlXlCNlE-Pbw) for [shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][0]: found 1 allocation candidates of [views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[EIDR4Z8sRbetiOgv_Gl6eg]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][0]: allocating [[views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: found 1 allocation candidates of [users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[7bV3f8r_QgiH2TJFI0Tp9Q]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: allocating [[users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iYG4colRQrOYKH5Yc8PSZA]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: allocating [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[55kP9r7eSjaPkAx2Hput1A]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: allocating [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kSUj9IavSMSvCn04gk5QwQ]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: throttling allocation [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5e23eaf1]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[RC24NNOeRQCfrMHX0C34QA]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: throttling allocation [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4732f1a4]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[2HS0XG1gQR6iBC5chKTVgg]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: throttling allocation [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@59314936]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[mX6GCP9-Qw6hq_eiNMarnQ]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@60b62bc9]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[40lmXE2jRQmia9KK1O9esg]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@9c6380c]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@22c1932d]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1PnVxJHzRAyB7duGMdX5iQ]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2db1f04c]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@30bbdccb]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7a28fc5a]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[dMKfOEhTTS-1oGo7YS-QXQ]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@61c55ca6]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4e3d694c]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3e3bd8c0]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3c8e7bb3]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2de75fbf]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1b68f47e]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]]
2022.11.10 17:09:46 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@12198b12]] on primary allocation
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.MasterService] took [272ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [111], source [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:46 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [111]
2022.11.10 17:09:46 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [111] with uuid [K3gwhUwXRaK_IG4Mzh4rJA], diff size [1560]
2022.11.10 17:09:47 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [481ms]; wrote global metadata [false] and metadata for [3] indices and skipped [4] unchanged indices
2022.11.10 17:09:47 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=111}]: execute
2022.11.10 17:09:47 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [111], source [Publication{term=8, version=111}]
2022.11.10 17:09:47 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 111
2022.11.10 17:09:47 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 111
2022.11.10 17:09:47 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[issues/VZ6DTALkToeQgIcEr8PrtQ]] creating index
2022.11.10 17:09:47 DEBUG es[][o.e.i.IndicesService] creating Index [[issues/VZ6DTALkToeQgIcEr8PrtQ]], shards [5]/[0] - reason [CREATE_INDEX]
2022.11.10 17:09:47 DEBUG es[][o.e.i.m.MapperService] [[issues/VZ6DTALkToeQgIcEr8PrtQ]] added mapping [auth] (source suppressed due to length, use TRACE level if needed)
2022.11.10 17:09:47 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[users/qbsuDVZZRrqTwp7HSmmTgw]] creating index
2022.11.10 17:09:47 DEBUG es[][o.e.i.IndicesService] creating Index [[users/qbsuDVZZRrqTwp7HSmmTgw]], shards [1]/[0] - reason [CREATE_INDEX]
2022.11.10 17:09:48 DEBUG es[][o.e.i.m.MapperService] [[users/qbsuDVZZRrqTwp7HSmmTgw]] added mapping [user], source [{"user":{"dynamic":"false","properties":{"active":{"type":"boolean"},"email":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true},"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"login":{"type":"keyword","fields":{"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"name":{"type":"keyword","fields":{"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"scmAccounts":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"uuid":{"type":"keyword"}}}}]
2022.11.10 17:09:48 DEBUG es[][o.e.i.c.IndicesClusterStateService] [users][0] creating shard with primary term [5]
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] [users][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/qbsuDVZZRrqTwp7HSmmTgw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/qbsuDVZZRrqTwp7HSmmTgw/0]
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] [users][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/qbsuDVZZRrqTwp7HSmmTgw/0, shard=[users][0]}]
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] creating shard_id [users][0]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:48 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][1] creating shard with primary term [5]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] [issues][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/1]
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] [issues][1] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/1, shard=[issues][1]}]
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][1]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:48 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][2] creating shard with primary term [5]
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] [issues][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/2]
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] [issues][2] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/2, shard=[issues][2]}]
2022.11.10 17:09:48 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][2]
2022.11.10 17:09:48 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:48 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:48 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=qgQQIyaFRP6jzlvhAmpAWw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=veNPXwV0QUCKVfQz-nd6yw}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=qgQQIyaFRP6jzlvhAmpAWw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=veNPXwV0QUCKVfQz-nd6yw}]}]
2022.11.10 17:09:48 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:48 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][0] creating shard with primary term [5]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [501ms]
2022.11.10 17:09:48 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:48 DEBUG es[][o.e.c.a.s.ShardStateAction] [users][0] received shard started for [StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] [views][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/0]
2022.11.10 17:09:48 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] [views][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/0, shard=[views][0]}]
2022.11.10 17:09:48 DEBUG es[][o.e.i.IndexService] creating shard_id [views][0]
2022.11.10 17:09:48 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:48 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=2ZhINsZ9SQqQcb8bHMIucQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=PBVKxsRTS92ioqP89O3WSA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=2ZhINsZ9SQqQcb8bHMIucQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=PBVKxsRTS92ioqP89O3WSA}]}]
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:48 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 111
2022.11.10 17:09:48 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:48 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=H8YcgTCURdyk5RQhL4Jp-w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=fs5pnmQPS7usuEnewsT6MQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=H8YcgTCURdyk5RQhL4Jp-w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=fs5pnmQPS7usuEnewsT6MQ}]}]
2022.11.10 17:09:48 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=111}]: took [1.5s] done applying updated cluster state (version: 111, uuid: K3gwhUwXRaK_IG4Mzh4rJA)
2022.11.10 17:09:48 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=111}
2022.11.10 17:09:48 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 111, uuid: K3gwhUwXRaK_IG4Mzh4rJA) for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:48 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:48 DEBUG es[][o.e.c.a.s.ShardStateAction] [users][0] starting shard [users][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=7bV3f8r_QgiH2TJFI0Tp9Q], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:09:48 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:48 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [112], source [shard-started StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:48 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [112]
2022.11.10 17:09:48 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [112] with uuid [Z2sDtacnQ1mzZTgH88Eyvg], diff size [1047]
2022.11.10 17:09:49 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:49 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:49 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=T63vCAXZRlOTlm_6Jc7WgQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=KPWA8KOzQ-qVwxBjJw6Jzg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=T63vCAXZRlOTlm_6Jc7WgQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=KPWA8KOzQ-qVwxBjJw6Jzg}]}]
2022.11.10 17:09:49 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:49 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [827ms]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] received shard started for [StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:49 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:49 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.1s]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:49 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:49 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [743ms]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] received shard started for [StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] received shard started for [StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:49 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [433ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=112}]: execute
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [112], source [Publication{term=8, version=112}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 112
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 112
2022.11.10 17:09:49 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] received shard started for [StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] received shard started for [StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] received shard started for [StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 112
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=112}]: took [227ms] done applying updated cluster state (version: 112, uuid: Z2sDtacnQ1mzZTgH88Eyvg)
2022.11.10 17:09:49 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=112}
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 112, uuid: Z2sDtacnQ1mzZTgH88Eyvg) for [shard-started StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] starting shard [issues][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=55kP9r7eSjaPkAx2Hput1A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] starting shard [views][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=EIDR4Z8sRbetiOgv_Gl6eg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.948Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:09:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] starting shard [issues][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=iYG4colRQrOYKH5Yc8PSZA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [113], source [shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [113]
2022.11.10 17:09:49 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [113] with uuid [WNRI64nATc6JpxZPDbAjLQ], diff size [1369]
2022.11.10 17:09:49 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [400ms]; wrote global metadata [false] and metadata for [2] indices and skipped [5] unchanged indices
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=113}]: execute
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [113], source [Publication{term=8, version=113}]
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 113
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 113
2022.11.10 17:09:49 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:49 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:49 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 113
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=113}]: took [0s] done applying updated cluster state (version: 113, uuid: WNRI64nATc6JpxZPDbAjLQ)
2022.11.10 17:09:49 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=113}
2022.11.10 17:09:49 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 113, uuid: WNRI64nATc6JpxZPDbAjLQ) for [shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:50 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[2HS0XG1gQR6iBC5chKTVgg]]
2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: allocating [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: found 1 allocation candidates of [issues][4], node[null], [P],
recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kSUj9IavSMSvCn04gk5QwQ]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: allocating [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[RC24NNOeRQCfrMHX0C34QA]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: allocating [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[40lmXE2jRQmia9KK1O9esg]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: allocating [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[mX6GCP9-Qw6hq_eiNMarnQ]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]]] to 
[[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4fe58f0]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6c717740]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@597c4383]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[dMKfOEhTTS-1oGo7YS-QXQ]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@74a9acca]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: throttling allocation 
[[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7dc94747]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1PnVxJHzRAyB7duGMdX5iQ]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7f159873]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1d3422a6]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1515ebc3]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] 
based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@60948ec]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7b04f6e4]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2eeb5bab]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]] 2022.11.10 17:09:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@19adf780]] on primary allocation 2022.11.10 17:09:50 DEBUG es[][o.e.c.s.MasterService] took [453ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 17:09:50 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version 
[114], source [cluster_reroute(reroute after starting shards)] 2022.11.10 17:09:50 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [114] 2022.11.10 17:09:50 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [114] with uuid [q6DQnXk2TBOQHG-8ay-f0g], diff size [1357] 2022.11.10 17:09:51 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [450ms]; wrote global metadata [false] and metadata for [2] indices and skipped [5] unchanged indices 2022.11.10 17:09:51 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=114}]: execute 2022.11.10 17:09:51 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [114], source [Publication{term=8, version=114}] 2022.11.10 17:09:51 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 114 2022.11.10 17:09:51 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 114 2022.11.10 17:09:51 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[rules/ZPzru4r4QR6_c8p1MyrcNg]] creating index 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndicesService] creating Index [[rules/ZPzru4r4QR6_c8p1MyrcNg]], shards [2]/[0] - reason [CREATE_INDEX] 2022.11.10 17:09:51 DEBUG es[][o.e.i.m.MapperService] [[rules/ZPzru4r4QR6_c8p1MyrcNg]] added mapping [rule], source [{"rule":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"activeRule_inheritance":{"type":"keyword"},"activeRule_ruleProfile":{"type":"keyword"},"activeRule_severity":{"type":"keyword"},"activeRule_uuid":{"type":"keyword"},"createdAt":{"type":"long"},"cwe":{"type":"keyword"},"htmlDesc":{"type":"keyword","index":false,"doc_values":false,"fields":{"english_html_analyzer":{"type":"text","norms":false,"analyzer":"english_html_analyzer"}}},"indexType":{"type":"keyword","doc_values":false},"internalKey":{"type":"keyword","index":false},"isExternal":{"type":"boolean"},"isTemplate":{"type":"boolean"},"join_rules":{"type":"join","eager_global_ordinals":true,"relations":{"rule":"activeRule"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"lang":{"type":"keyword"},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"owaspTop10":{"type":"keyword"},"owaspTop10-2021":{"type":"keyword"},"repo":{"type":"keyword","norms":true},"ruleKey":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"ruleUuid":{"type":"keyword"},"sansTop25":{"type":"keyword"},"severity":{"type":"keyword"},"sonarsourceSecurity":{"type":"keyword"},"status":{"type":"keyword"},"tags":{"type":"keyword","norms":true},"templateKey":{"type":"keyword"},"type":{"type":"keyword"},"updatedAt":{"type":"long"}}}}] 2022.11.10 17:09:51 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][4] creating shard with primary term [5] 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] [issues][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/4] 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] [issues][4] creating using an existing path 
[ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/4, shard=[issues][4]}] 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][4] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 17:09:51 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][3] creating shard with primary term [5] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] [issues][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/3] 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] [issues][3] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/3, shard=[issues][3]}] 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][3] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 17:09:51 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][0] creating shard with primary term [5] 2022.11.10 17:09:51 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:51 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] [issues][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/0] 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] [issues][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/0, shard=[issues][0]}] 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][0] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 17:09:51 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:51 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
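[Editor's note] The burst of "InternalPrimaryShardAllocator ... throttling allocation ... DecidedNode@..." entries above, with allocation_status[deciders_throttled] on every still-unassigned copy, is the allocation deciders rate-limiting initial primary recoveries: only a few primaries per node (by default cluster.routing.allocation.node_initial_primaries_recoveries = 4) may run "existing store recovery" at once, so the remaining primaries stay UNASSIGNED until the next cluster_reroute(reroute after starting shards) pass frees a slot. A minimal diagnostic sketch for watching this from outside the node while it recovers; it assumes the embedded Elasticsearch answers HTTP on localhost:9001 (SonarQube's stock sonar.search.port) with no authentication — both are assumptions, not something these logs state:

```python
# Minimal sketch (ASSUMPTION: the SonarQube-embedded Elasticsearch listens on
# http://localhost:9001 with no auth -- adjust ES_URL for your installation).
import urllib.request

ES_URL = "http://localhost:9001"

def get(path: str) -> str:
    # Plain HTTP GET against the embedded node's REST API.
    with urllib.request.urlopen(ES_URL + path) as resp:
        return resp.read().decode()

# Compact per-shard view: STARTED vs INITIALIZING vs UNASSIGNED copies,
# plus the unassigned reason (CLUSTER_RECOVERED in the log above).
print(get("/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason"))

# Explain the first unassigned shard the master finds; while recoveries are
# being throttled this reports a throttled allocation decision. (Returns an
# error once nothing is unassigned any more.)
print(get("/_cluster/allocation/explain?pretty"))
```

A GET on /_cluster/allocation/explain with no body picks an arbitrary unassigned shard, which is enough here, since every held-back copy in this log is throttled for the same reason.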
2022.11.10 17:09:51 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=nDy6d2bXT9KdNZkQv0yzPA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=vo-XBYstRhOO9vg6KQmHWQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=nDy6d2bXT9KdNZkQv0yzPA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=vo-XBYstRhOO9vg6KQmHWQ}]}] 2022.11.10 17:09:51 DEBUG es[][o.e.i.c.IndicesClusterStateService] [rules][0] creating shard with primary term [5] 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] [rules][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/0] 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] [rules][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/0, shard=[rules][0]}] 2022.11.10 17:09:51 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:51 DEBUG es[][o.e.i.IndexService] creating shard_id [rules][0] 2022.11.10 17:09:51 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2} 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 17:09:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 17:09:51 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=oOYhg9IcS1ymuvzl3esPww, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=ggglSbepSES53wK8dYCSGg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=oOYhg9IcS1ymuvzl3esPww, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=ggglSbepSES53wK8dYCSGg}]}] 2022.11.10 17:09:52 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 17:09:52 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [517ms] 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] received shard started for [StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:52 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=GVPpEHuGRzWynf49awoAYQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=twC3-fCcRry3dRvDncvIQQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=GVPpEHuGRzWynf49awoAYQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, 
translog_uuid=twC3-fCcRry3dRvDncvIQQ}]}] 2022.11.10 17:09:52 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 17:09:52 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [681ms] 2022.11.10 17:09:52 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 17:09:52 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 17:09:52 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 17:09:52 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [686ms] 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] received shard started for [StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 114 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=114}]: took [1.1s] done applying updated cluster state (version: 114, uuid: q6DQnXk2TBOQHG-8ay-f0g) 2022.11.10 17:09:52 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=114} 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 114, uuid: q6DQnXk2TBOQHG-8ay-f0g) for [cluster_reroute(reroute after starting shards)] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] starting shard [issues][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=2HS0XG1gQR6iBC5chKTVgg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] starting shard [issues][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], 
a[id=RC24NNOeRQCfrMHX0C34QA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] received shard started for [StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.MasterService] took [7ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [115], source [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [115] 2022.11.10 17:09:52 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [115] with uuid [uNsWj_utQ_iRN9-uHU3Ccg], diff size [1172] 2022.11.10 17:09:52 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=8, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=256, minTranslogGeneration=8, trimmedAboveSeqNo=-2} 2022.11.10 17:09:52 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=8, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=256, minTranslogGeneration=8, trimmedAboveSeqNo=-2} 2022.11.10 17:09:52 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, 
version=115}]: execute 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [115], source [Publication{term=8, version=115}] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 115 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 115 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] received shard started for [StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 17:09:52 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 17:09:52 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 115 2022.11.10 17:09:52 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_5], userData[{es_version=7.17.5, history_uuid=rQ8wy4RFQKOWF5FBzTyM0A, local_checkpoint=256, max_seq_no=256, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=249, translog_uuid=RXIm6O9fSW2z7KP4FL2biQ}]}], last commit [CommitPoint{segment[segments_5], userData[{es_version=7.17.5, history_uuid=rQ8wy4RFQKOWF5FBzTyM0A, local_checkpoint=256, max_seq_no=256, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=249, translog_uuid=RXIm6O9fSW2z7KP4FL2biQ}]}] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=115}]: took [201ms] done applying updated cluster state (version: 115, uuid: uNsWj_utQ_iRN9-uHU3Ccg) 2022.11.10 17:09:52 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=115} 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 115, uuid: uNsWj_utQ_iRN9-uHU3Ccg) for [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[issues][4]], allocationId 
[kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 17:09:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] starting shard [issues][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=kSUj9IavSMSvCn04gk5QwQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.960Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.MasterService] took [5ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [116], source [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [master 
{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 17:09:52 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [116] 2022.11.10 17:09:52 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [116] with uuid [aGqkujgkQ1CzrhQoMZD21g], diff size [1145] 2022.11.10 17:09:53 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [639ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 17:09:53 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=116}]: execute 2022.11.10 17:09:53 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [116], source [Publication{term=8, version=116}] 2022.11.10 17:09:53 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 116 2022.11.10 17:09:53 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 116 2022.11.10 17:09:53 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 17:09:53 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 116 2022.11.10 17:09:53 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=116}]: took [0s] done applying updated cluster state (version: 116, uuid: aGqkujgkQ1CzrhQoMZD21g) 2022.11.10 17:09:53 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=116} 2022.11.10 17:09:53 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 116, uuid: aGqkujgkQ1CzrhQoMZD21g) for [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 17:09:53 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[mX6GCP9-Qw6hq_eiNMarnQ]] 2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] 
[[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: allocating [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[dMKfOEhTTS-1oGo7YS-QXQ]] 2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: allocating [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1PnVxJHzRAyB7duGMdX5iQ]] 2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: allocating [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]] 2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@62c8605e]] on primary allocation 2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], 
recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]]
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2d405bf4]] on primary allocation
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]]
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6c59d523]] on primary allocation
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]]
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4f09378]] on primary allocation
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]]
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7ffb6b38]] on primary allocation
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]]
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@53ec8c19]] on primary allocation
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]]
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@603f80c6]] on primary allocation
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]]
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@399f9eb4]] on primary allocation
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]]
2022.11.10 17:09:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2d052016]] on primary allocation
2022.11.10 17:09:53 DEBUG es[][o.e.c.s.MasterService] took [130ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:53 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [117], source [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:53 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [117]
2022.11.10 17:09:53 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [117] with uuid [K735QjxhToGfgmPtUNH3hw], diff size [1315]
2022.11.10 17:09:54 DEBUG es[][o.e.i.f.p.AbstractIndexOrdinalsFieldData] global-ordinals [join_rules#rule][214] took [145.9ms]
2022.11.10 17:09:54 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:54 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [2.5s]
2022.11.10 17:09:54 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:54 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] received shard started for [StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:54 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [414ms]; wrote global metadata [false] and metadata for [2] indices and skipped [5] unchanged indices
2022.11.10 17:09:54 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=117}]: execute
2022.11.10 17:09:54 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [117], source [Publication{term=8, version=117}]
2022.11.10 17:09:54 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 117
2022.11.10 17:09:54 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 117
2022.11.10 17:09:54 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]] creating index
2022.11.10 17:09:54 DEBUG es[][o.e.i.IndicesService] creating Index [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]], shards [5]/[0] - reason [CREATE_INDEX]
2022.11.10 17:09:55 DEBUG es[][o.e.i.m.MapperService] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]] added mapping [auth], source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"analysedAt":{"type":"date","format":"date_time||epoch_second"},"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"indexType":{"type":"keyword","doc_values":false},"join_projectmeasures":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"projectmeasure"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"languages":{"type":"keyword","norms":true},"measures":{"type":"nested","properties":{"key":{"type":"keyword"},"value":{"type":"double"}}},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"nclocLanguageDistribution":{"type":"nested","properties":{"language":{"type":"keyword"},"ncloc":{"type":"integer"}}},"qualifier":{"type":"keyword"},"qualityGateStatus":{"type":"keyword","norms":true},"tags":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}]
2022.11.10 17:09:55 DEBUG es[][o.e.i.c.IndicesClusterStateService] [rules][1] creating shard with primary term [5]
2022.11.10 17:09:55 DEBUG es[][o.e.i.IndexService] [rules][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/1]
2022.11.10 17:09:55 DEBUG es[][o.e.i.IndexService] [rules][1] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/1, shard=[rules][1]}]
2022.11.10 17:09:55 DEBUG es[][o.e.i.IndexService] creating shard_id [rules][1]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:55 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:55 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] received shard started for [StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:55 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][3] creating shard with primary term [5]
2022.11.10 17:09:55 DEBUG es[][o.e.i.IndexService] [projectmeasures][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/3]
2022.11.10 17:09:55 DEBUG es[][o.e.i.IndexService] [projectmeasures][3] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/3, shard=[projectmeasures][3]}]
2022.11.10 17:09:55 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][3]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:55 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][0] creating shard with primary term [5]
2022.11.10 17:09:55 DEBUG es[][o.e.i.IndexService] [projectmeasures][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/0]
2022.11.10 17:09:55 DEBUG es[][o.e.i.IndexService] [projectmeasures][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/0, shard=[projectmeasures][0]}]
2022.11.10 17:09:55 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][0]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:55 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:55 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 117
2022.11.10 17:09:55 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=117}]: took [1.1s] done applying updated cluster state (version: 117, uuid: K735QjxhToGfgmPtUNH3hw)
2022.11.10 17:09:55 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=117}
2022.11.10 17:09:55 DEBUG es[][o.e.c.s.MasterService] took [38ms] to notify listeners on successful publication of cluster state (version: 117, uuid: K735QjxhToGfgmPtUNH3hw) for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:55 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:55 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] starting shard [rules][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=40lmXE2jRQmia9KK1O9esg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:09:55 DEBUG es[][o.e.c.s.MasterService] took [11ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:55 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [118], source [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:55 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [118]
2022.11.10 17:09:55 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:55 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:55 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [118] with uuid [ROAe0Zu1REadtIRhRWzQ2A], diff size [1093]
2022.11.10 17:09:55 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=8, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=263, minTranslogGeneration=8, trimmedAboveSeqNo=-2}
2022.11.10 17:09:55 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=8, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=263, minTranslogGeneration=8, trimmedAboveSeqNo=-2}
2022.11.10 17:09:56 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:56 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:56 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=UD1dSifsQ7ixBh1zBW2_HQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=dIfuYmiuQY6LNNdVHtxR-g}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=UD1dSifsQ7ixBh1zBW2_HQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=dIfuYmiuQY6LNNdVHtxR-g}]}]
2022.11.10 17:09:56 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=fkH2jczrQG-tN4LnAUwQrw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=o4RdVT1aQZWOdjVmvaC52Q}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=fkH2jczrQG-tN4LnAUwQrw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=o4RdVT1aQZWOdjVmvaC52Q}]}]
2022.11.10 17:09:56 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_5], userData[{es_version=7.17.5, history_uuid=ay94HNIbT4Cqi04CJWiRAA, local_checkpoint=263, max_seq_no=263, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=256, translog_uuid=A-DhRTu6R06UjeegIwa7MA}]}], last commit [CommitPoint{segment[segments_5], userData[{es_version=7.17.5, history_uuid=ay94HNIbT4Cqi04CJWiRAA, local_checkpoint=263, max_seq_no=263, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=256, translog_uuid=A-DhRTu6R06UjeegIwa7MA}]}]
2022.11.10 17:09:56 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [670ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=118}]: execute
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [118], source [Publication{term=8, version=118}]
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 118
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 118
2022.11.10 17:09:56 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:56 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.3s]
2022.11.10 17:09:56 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:56 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] received shard started for [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:56 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:56 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:56 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.5s]
2022.11.10 17:09:56 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:56 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] received shard started for [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:56 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:56 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] received shard started for [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:56 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:56 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] received shard started for [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 118
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=118}]: took [201ms] done applying updated cluster state (version: 118, uuid: ROAe0Zu1REadtIRhRWzQ2A)
2022.11.10 17:09:56 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=118}
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 118, uuid: ROAe0Zu1REadtIRhRWzQ2A) for [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:56 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] starting shard [projectmeasures][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=1PnVxJHzRAyB7duGMdX5iQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:09:56 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] starting shard [projectmeasures][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=dMKfOEhTTS-1oGo7YS-QXQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [119], source [shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [119]
2022.11.10 17:09:56 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [119] with uuid [3eFxYfFjTd6Xb_Rx35Q0Iw], diff size [1113]
2022.11.10 17:09:56 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 17:09:56 DEBUG es[][o.e.i.f.p.AbstractIndexOrdinalsFieldData] global-ordinals [join_rules#rule][222] took [3.4ms]
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=119}]: execute
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [119], source [Publication{term=8, version=119}]
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 119
2022.11.10 17:09:56 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 119
2022.11.10 17:09:56 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:57 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:57 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.8s]
2022.11.10 17:09:57 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] received shard started for [StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:09:57 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:57 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] received shard started for [StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:57 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:57 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 119
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=119}]: took [204ms] done applying updated cluster state (version: 119, uuid: 3eFxYfFjTd6Xb_Rx35Q0Iw)
2022.11.10 17:09:57 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=119}
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 119, uuid: 3eFxYfFjTd6Xb_Rx35Q0Iw) for [shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:57 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] starting shard [rules][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=mX6GCP9-Qw6hq_eiNMarnQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.962Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}])
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [120], source [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [120]
2022.11.10 17:09:57 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [120] with uuid [JDeTAcpPQgm8UUNkvXZMvQ], diff size [1070]
2022.11.10 17:09:57 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=120}]: execute
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [120], source [Publication{term=8, version=120}]
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 120
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 120
2022.11.10 17:09:57 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 120
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=120}]: took [200ms] done applying updated cluster state (version: 120, uuid: JDeTAcpPQgm8UUNkvXZMvQ)
2022.11.10 17:09:57 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=120}
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 120, uuid: JDeTAcpPQgm8UUNkvXZMvQ) for [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]]
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: allocating [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]]
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: allocating [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]]
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: allocating [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]]
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: allocating [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]]
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@179abfc3]] on primary allocation
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]]
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@73d2d197]] on primary allocation
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]]
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@770c23ce]] on primary allocation
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]]
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@74d0538b]] on primary allocation
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]]
2022.11.10 17:09:57 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@ac9ac8]] on primary allocation
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.MasterService] took [133ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [121], source [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [121]
2022.11.10 17:09:57 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [121] with uuid [nCooziGfSo6Zes7KbJZCyw], diff size [1376]
2022.11.10 17:09:57 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [200ms]; wrote global metadata [false] and metadata for [2] indices and skipped [5] unchanged indices
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=121}]: execute
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [121], source [Publication{term=8, version=121}]
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 121
2022.11.10 17:09:57 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 121
2022.11.10 17:09:57 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[components/z8jTFy28Rq2m0AWGxQGyuw]] creating index
2022.11.10 17:09:57 DEBUG es[][o.e.i.IndicesService] creating Index [[components/z8jTFy28Rq2m0AWGxQGyuw]], shards [5]/[0] - reason [CREATE_INDEX]
2022.11.10 17:09:57 DEBUG es[][o.e.i.m.MapperService] [[components/z8jTFy28Rq2m0AWGxQGyuw]] added mapping [auth], source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"indexType":{"type":"keyword","doc_values":false},"join_components":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"component"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"name":{"type":"text","store":true,"fields":{"search_grams_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"search_prefix_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_analyzer","search_analyzer":"search_prefix_analyzer"},"search_prefix_case_insensitive_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_case_insensitive_analyzer","search_analyzer":"search_prefix_case_insensitive_analyzer"},"sortable_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"sortable_analyzer","fielddata":true}},"term_vector":"with_positions_offsets","norms":false,"fielddata":true},"project_uuid":{"type":"keyword"},"qualifier":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}]
2022.11.10 17:09:57 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][3] creating shard with primary term [5]
2022.11.10 17:09:57 DEBUG es[][o.e.i.IndexService] [components][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/3]
2022.11.10 17:09:57 DEBUG es[][o.e.i.IndexService] [components][3] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/3, shard=[components][3]}]
2022.11.10 17:09:57 DEBUG es[][o.e.i.IndexService] creating shard_id [components][3]
2022.11.10 17:09:57 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:57 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:58 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][1] creating shard with primary term [5]
2022.11.10 17:09:58 DEBUG es[][o.e.i.IndexService] [projectmeasures][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/1]
2022.11.10 17:09:58 DEBUG es[][o.e.i.IndexService] [projectmeasures][1] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/1, shard=[projectmeasures][1]}]
2022.11.10 17:09:58 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][1]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:58 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][4] creating shard with primary term [5]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:58 DEBUG es[][o.e.i.IndexService] [projectmeasures][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/4]
2022.11.10 17:09:58 DEBUG es[][o.e.i.IndexService] [projectmeasures][4] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/4, shard=[projectmeasures][4]}]
2022.11.10 17:09:58 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][4]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:58 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:58 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:58 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:58 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:58 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][2] creating shard with primary term [5]
2022.11.10 17:09:58 DEBUG es[][o.e.i.IndexService] [projectmeasures][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/2]
2022.11.10 17:09:58 DEBUG es[][o.e.i.IndexService] [projectmeasures][2] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/2, shard=[projectmeasures][2]}]
2022.11.10 17:09:58 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][2]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:09:58 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 121
2022.11.10 17:09:58 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:09:58 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=aZ5QMMNLR3OjjtT9WV1V9w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=khYidmzQSlKnXvnRph5YPQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=aZ5QMMNLR3OjjtT9WV1V9w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=khYidmzQSlKnXvnRph5YPQ}]}]
2022.11.10 17:09:58 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=121}]: took [1.4s] done applying updated cluster state (version: 121, uuid: nCooziGfSo6Zes7KbJZCyw)
2022.11.10 17:09:58 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=121}
2022.11.10 17:09:58 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 121, uuid: nCooziGfSo6Zes7KbJZCyw) for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:09:59 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:59 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=65wTtjk_QG63BHUANNPapQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Zy5kvxUHQ7WAzcO6KJeKwA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=65wTtjk_QG63BHUANNPapQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Zy5kvxUHQ7WAzcO6KJeKwA}]}]
2022.11.10 17:09:59 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.2s]
2022.11.10 17:09:59 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:59 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] received shard started for [StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:59 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:59 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] starting shard [components][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=yviWUVOQQHWeSrk4yU72dg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:09:59 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:59 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:59 DEBUG es[][o.e.c.s.MasterService] took [38ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:59 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [122], source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:09:59 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [122]
2022.11.10 17:09:59 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:59 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [943ms]
2022.11.10 17:09:59 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:59 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] received shard started for [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:59 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:59 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:09:59 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [122] with uuid [HLPUJJb1Q7SepySE4z6OLg], diff size [1092]
2022.11.10 17:09:59 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=JUloPkDpSYS5ZP_rreqL4w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=oyC_YB9-RU-LgnK3o6Upow}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=JUloPkDpSYS5ZP_rreqL4w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=oyC_YB9-RU-LgnK3o6Upow}]}]
2022.11.10 17:09:59 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AyFJKMzIRDe5VKc_Ay_L4g, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Kl9n0o9aRIW-KMGoK21NQw}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AyFJKMzIRDe5VKc_Ay_L4g, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Kl9n0o9aRIW-KMGoK21NQw}]}]
2022.11.10 17:09:59 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:59 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [973ms]
2022.11.10 17:09:59 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:59 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] received shard started for [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:59 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:09:59 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.3s]
2022.11.10 17:09:59 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:59 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] received shard started for [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:09:59 DEBUG es[][o.e.m.j.JvmGcMonitorService] [gc][39] overhead, spent [187ms] collecting in the last [1s]
2022.11.10 17:10:00 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [809ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=122}]: execute
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [122], source [Publication{term=8, version=122}]
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 122
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 122
2022.11.10 17:10:00 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:00 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] received shard started for [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:00 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:00 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] received shard started for [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:00 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:00 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] received shard started for [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:00 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 122
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=122}]: took [0s] done applying updated cluster state (version: 122, uuid: HLPUJJb1Q7SepySE4z6OLg)
2022.11.10 17:10:00 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=122}
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.MasterService] took [3ms] to notify listeners on successful publication of cluster state (version: 122, uuid: HLPUJJb1Q7SepySE4z6OLg) for [shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:10:00 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] starting shard [projectmeasures][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=gpYEHqhWQRebcYya4bLGXw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:10:00 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] starting shard [projectmeasures][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=cfnefW7VREqduM2gqZw7DA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:10:00 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] starting shard [projectmeasures][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=cemOqkVMTri0ZpaAE400sQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.958Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.MasterService] took [12ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [123], source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [123]
2022.11.10 17:10:00 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [123] with uuid [onx9JHnoQ2iS3WzAcT1-zw], diff size [1152]
2022.11.10 17:10:00 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [412ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=123}]: execute
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [123], source [Publication{term=8, version=123}]
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 123
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 123
2022.11.10 17:10:00 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:10:00 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:10:00 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 123
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=123}]: took [0s] done applying updated cluster state (version: 123, uuid: onx9JHnoQ2iS3WzAcT1-zw)
2022.11.10 17:10:00 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=123}
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 123, uuid: onx9JHnoQ2iS3WzAcT1-zw) for [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:10:00 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]]
2022.11.10 17:10:00 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: allocating [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:10:00 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]]
2022.11.10 17:10:00 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: allocating [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:10:00 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]]
2022.11.10 17:10:00 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: allocating [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:10:00 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]]
2022.11.10 17:10:00 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: allocating [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 17:10:00 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]]
2022.11.10 17:10:00 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3d24f3be]] on primary allocation
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.MasterService] took [123ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [124], source [cluster_reroute(reroute after starting shards)]
2022.11.10 17:10:00 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [124]
2022.11.10 17:10:00 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [124] with uuid [ZRHnzUsDRmyMRI3RbiOfSg], diff size [1182]
2022.11.10 17:10:01 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [293ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=124}]: execute
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [124], source [Publication{term=8, version=124}]
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 124
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 124
2022.11.10 17:10:01 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][1] creating shard with primary term [5]
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] [components][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/1]
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] [components][1] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/1, shard=[components][1]}]
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] creating shard_id [components][1]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:10:01 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][4] creating shard with primary term [5]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] [components][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/4]
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] [components][4] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/4, shard=[components][4]}]
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] creating shard_id [components][4]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:10:01 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:10:01 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:10:01 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][2] creating shard with primary term [5]
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] [components][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/2]
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] [components][2] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/2, shard=[components][2]}]
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] creating shard_id [components][2]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:10:01 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][0] creating shard with primary term [5]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] [components][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/0]
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] [components][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/0, shard=[components][0]}]
2022.11.10 17:10:01 DEBUG es[][o.e.i.IndexService] creating shard_id [components][0]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 17:10:01 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=dlOCmUqfSq2uMWhUvrPJFQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=4Pe15kvVTBKSWFZrw1JOvg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=dlOCmUqfSq2uMWhUvrPJFQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=4Pe15kvVTBKSWFZrw1JOvg}]}]
2022.11.10 17:10:01 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:10:01 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [305ms]
2022.11.10 17:10:01 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:10:01 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] received shard started for [StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:10:01 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 17:10:01 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 124
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=124}]: took [204ms] done applying updated cluster state (version: 124, uuid: ZRHnzUsDRmyMRI3RbiOfSg)
2022.11.10 17:10:01 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=124}
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 17:10:01 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=hYweKZ-rRzuAmXC2I7lT8Q, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=vn_3dqi7ScCkAWZ94yMXRg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=hYweKZ-rRzuAmXC2I7lT8Q, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=vn_3dqi7ScCkAWZ94yMXRg}]}]
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 124, uuid: ZRHnzUsDRmyMRI3RbiOfSg) for [cluster_reroute(reroute after starting shards)]
2022.11.10 17:10:01 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=Zb6npGWJRLmZx5WdmvT2Ag, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Q6IFvbUKQxqdcluDGGemcg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=Zb6npGWJRLmZx5WdmvT2Ag, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Q6IFvbUKQxqdcluDGGemcg}]}]
2022.11.10 17:10:01 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [346ms]
2022.11.10 17:10:01 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:10:01 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] received shard started for [StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [462ms]
2022.11.10 17:10:01 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:10:01 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] received shard started for [StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:10:01 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=5, trimmedAboveSeqNo=-2}
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:10:01 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] starting shard [components][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=N1eAWJPOSXWHssmbXnJsZg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.MasterService] took [78ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:10:01 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AnRshew2R76U-976VFG9Pg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=85vWB2uCS8u_lINjv7DAjA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AnRshew2R76U-976VFG9Pg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=85vWB2uCS8u_lINjv7DAjA}]}]
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [125], source [shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:10:01 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [125]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 17:10:01 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [483ms]
2022.11.10 17:10:01 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:10:01 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] received shard started for [StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 17:10:02 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [125] with uuid [mo9dFuvXR0SR64ze1OF5Vg], diff size [1181]
2022.11.10 17:10:02 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [200ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 17:10:02 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=125}]: execute
2022.11.10 17:10:02 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [125], source [Publication{term=8, version=125}]
2022.11.10 17:10:02 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 125
2022.11.10 17:10:02 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 125
2022.11.10 17:10:02 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 17:10:02 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:02 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] received shard started for [StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:02 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:02 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] received shard started for [StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:02 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:02 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] received shard started for [StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 17:10:02 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 125
2022.11.10 17:10:02 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=125}]: took [226ms] done applying updated cluster state (version: 125, uuid: mo9dFuvXR0SR64ze1OF5Vg)
2022.11.10 17:10:02 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=125}
2022.11.10 17:10:02 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 125, uuid: mo9dFuvXR0SR64ze1OF5Vg) for [shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:10:02 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:10:02 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] starting shard [components][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=m26EZ-e2ShaXq3vCmyTSxA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:10:02 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] starting shard [components][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=HP0H87wHSdm_mC_bEh6NLw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:10:02 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] starting shard [components][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=iDzORoXTRHWBllF7l5Vi-A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 17:10:02 DEBUG es[][o.e.c.s.MasterService] took [19ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 17:10:02 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [126], source [shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [after existing store
recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:10:02 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [126] 2022.11.10 17:10:02 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [126] with uuid [Gwqe0ep9RDGN5jJnJmvEhw], diff size [1150] 2022.11.10 17:10:03 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [665ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=126}]: execute 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [126], source [Publication{term=8, version=126}] 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 126 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 126 2022.11.10 17:10:03 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 17:10:03 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 17:10:03 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 126 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=126}]: took [0s] done applying updated cluster state (version: 126, uuid: Gwqe0ep9RDGN5jJnJmvEhw) 2022.11.10 17:10:03 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=126} 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 126, uuid: Gwqe0ep9RDGN5jJnJmvEhw) for [shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [master 
{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 17:10:03 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]] 2022.11.10 
17:10:03 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: allocating [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{LOOqDTTpTXON9F5vEYAj-A}{127.0.0.1}{127.0.0.1:37145}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.MasterService] took [63ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [127], source [cluster_reroute(reroute after starting shards)] 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [127] 2022.11.10 17:10:03 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [127] with uuid [cJQWNaSNS_SZALfPtDSp0A], diff size [1076] 2022.11.10 17:10:03 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [219ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=127}]: execute 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [127], source [Publication{term=8, version=127}] 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 127 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 127 2022.11.10 17:10:03 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]] creating index 2022.11.10 17:10:03 DEBUG es[][o.e.i.IndicesService] creating Index [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]], shards [1]/[0] - reason [CREATE_INDEX] 2022.11.10 17:10:03 DEBUG es[][o.e.i.m.MapperService] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]] added mapping [metadata], source [{"metadata":{"dynamic":"false","properties":{"value":{"type":"keyword","index":false,"store":true,"norms":true}}}}] 2022.11.10 17:10:03 DEBUG es[][o.e.i.c.IndicesClusterStateService] [metadatas][0] creating shard with primary term [5] 2022.11.10 17:10:03 DEBUG es[][o.e.i.IndexService] [metadatas][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/N-wJ8qPTTTyTYoXbXg3F8g/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/N-wJ8qPTTTyTYoXbXg3F8g/0] 2022.11.10 17:10:03 DEBUG es[][o.e.i.IndexService] [metadatas][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/N-wJ8qPTTTyTYoXbXg3F8g/0, shard=[metadatas][0]}] 2022.11.10 17:10:03 DEBUG es[][o.e.i.IndexService] creating shard_id [metadatas][0] 2022.11.10 17:10:03 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 17:10:03 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 17:10:03 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 17:10:03 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 127 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=127}]: took [248ms] done applying updated cluster state (version: 127, uuid: cJQWNaSNS_SZALfPtDSp0A) 2022.11.10 17:10:03 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=127} 2022.11.10 17:10:03 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 127, uuid: cJQWNaSNS_SZALfPtDSp0A) for [cluster_reroute(reroute after starting shards)] 2022.11.10 17:10:04 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=16, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 17:10:04 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=16, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 17:10:04 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{es_version=7.17.5, history_uuid=_1n2s2ucRj6ZqrUkmzvO8Q, local_checkpoint=16, max_seq_no=16, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=0, translog_uuid=KUxcsAtZSbesfWRcAgdGWA}]}], last commit [CommitPoint{segment[segments_3], userData[{es_version=7.17.5, history_uuid=_1n2s2ucRj6ZqrUkmzvO8Q, local_checkpoint=16, max_seq_no=16, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=0, translog_uuid=KUxcsAtZSbesfWRcAgdGWA}]}] 2022.11.10 17:10:05 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 17:10:05 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.1s] 2022.11.10 17:10:05 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:10:05 DEBUG es[][o.e.c.a.s.ShardStateAction] [metadatas][0] received shard started for [StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:10:05 DEBUG es[][o.e.c.a.s.ShardStateAction] [metadatas][0] starting shard [metadatas][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=bwBI1A3QSsKXe0EE5b0kDw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T16:09:32.963Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 17:10:05 INFO 
es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]]]). 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [128], source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [128] 2022.11.10 17:10:05 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [128] with uuid [13btzWfZQAWV6IIk-YuQdw], diff size [1056] 2022.11.10 17:10:05 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [433ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=128}]: execute 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [128], source [Publication{term=8, version=128}] 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 128 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 128 2022.11.10 17:10:05 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 128 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=8, version=128}]: took [0s] done applying updated cluster state (version: 128, uuid: 13btzWfZQAWV6IIk-YuQdw) 2022.11.10 17:10:05 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=8, version=128} 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 128, uuid: 13btzWfZQAWV6IIk-YuQdw) for [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.MasterService] took [2ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 17:10:05 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on unchanged cluster state for [cluster_reroute(reroute after starting shards)] 2022.11.10 17:10:13 
DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:10:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:10:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:10:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:10:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:10:17 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:10:17 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:10:17 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:10:17 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:10:17 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:10:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention 
leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:10:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:10:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:10:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:10:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:10:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:10:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:10:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:10:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:10:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:10:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:10:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:10:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:10:33 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:10:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:10:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:10:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:10:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:10:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:10:47 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:10:47 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:10:47 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:10:47 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:10:47 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:10:48 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:10:51 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:10:51 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:10:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:10:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:10:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:10:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:10:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:10:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:10:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:10:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:10:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:10:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:11:03 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:11:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:11:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:11:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:11:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:11:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:11:17 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:11:17 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:11:17 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:11:17 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:11:17 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:11:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:11:18 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded 2022.11.10 17:11:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:11:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:11:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:11:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:11:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:11:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:11:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:11:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:11:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:11:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:11:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:11:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:11:34 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:11:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:11:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:11:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:11:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:11:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:11:47 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:11:47 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:11:47 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:11:47 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:11:47 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:11:48 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:11:51 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:11:51 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:11:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:11:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:11:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:11:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:11:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:11:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:11:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:11:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:11:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:11:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:12:04 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:12:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:12:14 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:12:14 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:12:14 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:12:14 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:12:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:12:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:12:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:12:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:12:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:12:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:12:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:12:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:12:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:12:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:12:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:12:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:12:25 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:12:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:12:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:12:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:12:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:12:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:12:34 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:12:52 WARN es[][o.e.t.ThreadPool] timer thread slept for [11.3s/11316ms] on absolute clock which is above the warn threshold of [5000ms] 2022.11.10 17:12:54 WARN es[][o.e.t.ThreadPool] timer thread slept for [11.3s/11316451545ns] on relative 
clock which is above the warn threshold of [5000ms] 2022.11.10 17:12:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:12:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:12:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:12:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:12:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:12:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:12:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:12:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:12:56 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:12:56 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:12:56 DEBUG 
es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:12:56 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:12:56 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:12:56 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:12:56 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:12:56 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:12:56 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:12:56 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:12:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:12:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:12:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention 
leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:12:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:12:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:13:04 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:13:18 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases 
[RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:13:26 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:13:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:13:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:13:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:13:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:13:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:13:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:13:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:13:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:13:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:13:34 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:13:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:13:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:13:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:13:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:13:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:13:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:14:04 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:14:15 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][young][259][17] duration [2.8s], collections [1]/[1s], total [2.8s]/[5.9s], memory [92.9mb]->[92.9mb]/[512mb], all_pools {[young] [33mb]->[0b]/[0b]}{[old] [53.9mb]->[57mb]/[512mb]}{[survivor] [6mb]->[4mb]/[0b]} 2022.11.10 17:14:15 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][259] overhead, spent [2.8s] collecting in the last [1s] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:14:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:14:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:14:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:14:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:14:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:14:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:14:34 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:14:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [views][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=bVQrg9ZkS5yhnq7qEQFkGg] on inactive 2022.11.10 17:14:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [views][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=Hjj-ITd1RhefzuwGKvoIVg] on inactive 2022.11.10 17:14:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [views][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=GMXXUqyWTgyZHRuUQEBQcg] on inactive 2022.11.10 17:14:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [views][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=yFj13EkETzO7NoXsHJA0qQ] on inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [issues][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=iYG4colRQrOYKH5Yc8PSZA] on inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.s.IndexShard] shard 
is now inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [issues][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=55kP9r7eSjaPkAx2Hput1A] on inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [issues][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=2HS0XG1gQR6iBC5chKTVgg] on inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [issues][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=kSUj9IavSMSvCn04gk5QwQ] on inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [views][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=EIDR4Z8sRbetiOgv_Gl6eg] on inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:51 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [users][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=7bV3f8r_QgiH2TJFI0Tp9Q] on inactive 2022.11.10 17:14:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:56 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [rules][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=40lmXE2jRQmia9KK1O9esg] on inactive 2022.11.10 17:14:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:56 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [rules][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=mX6GCP9-Qw6hq_eiNMarnQ] on inactive 2022.11.10 17:14:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:56 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [issues][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=RC24NNOeRQCfrMHX0C34QA] on inactive 2022.11.10 17:14:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:56 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=1PnVxJHzRAyB7duGMdX5iQ] on inactive 2022.11.10 17:14:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:14:56 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=dMKfOEhTTS-1oGo7YS-QXQ] on inactive 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases 
are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:14:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:14:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:15:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:15:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:15:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:15:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:15:01 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=iDzORoXTRHWBllF7l5Vi-A] on inactive 
2022.11.10 17:15:01 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=N1eAWJPOSXWHssmbXnJsZg] on inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=HP0H87wHSdm_mC_bEh6NLw] on inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=yviWUVOQQHWeSrk4yU72dg] on inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=m26EZ-e2ShaXq3vCmyTSxA] on inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=gpYEHqhWQRebcYya4bLGXw] on inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=cfnefW7VREqduM2gqZw7DA] on inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:15:01 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=cemOqkVMTri0ZpaAE400sQ] on inactive 2022.11.10 17:15:04 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:15:06 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 17:15:06 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [metadatas][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=bwBI1A3QSsKXe0EE5b0kDw] on inactive 2022.11.10 17:15:19 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] 
no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, 
version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:15:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:15:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:15:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:15:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:15:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:15:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:15:34 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:15:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:16:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:16:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:16:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:16:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:16:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:16:04 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:16:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:16:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:16:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:16:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:16:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:16:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:16:34 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:16:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:16:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:16:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:16:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:17:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:17:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:17:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:17:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:17:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:17:04 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:17:19 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:17:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:17:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:17:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:17:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:17:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:17:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:17:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:17:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:17:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:17:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:17:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:17:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:17:34 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:17:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:17:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:17:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:17:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:17:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:17:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:17:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:17:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:17:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:17:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:17:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:17:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:17:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:17:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:17:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:17:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:17:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:17:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:18:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:18:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:18:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:18:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:18:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:18:04 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:18:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:18:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:18:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:18:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:18:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:18:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:18:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:18:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:18:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:18:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:18:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:18:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:18:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:18:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:18:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:18:29 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][young][501][18] duration [780ms], collections [1]/[1.4s], total [780ms]/[6.7s], memory [81mb]->[61.2mb]/[512mb], all_pools {[young] [20mb]->[0b]/[0b]}{[old] [57mb]->[59.2mb]/[512mb]}{[survivor] [4mb]->[2mb]/[0b]} 2022.11.10 17:18:29 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][501] overhead, spent [780ms] collecting in the last [1.4s] 2022.11.10 17:18:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:18:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:18:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:18:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:18:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:18:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:18:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:18:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:18:34 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:18:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:18:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:18:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:18:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:18:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:18:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:18:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=257, timestamp=1668087711431, source='peer recovery'}}}] 2022.11.10 17:18:57 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=4, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=264, timestamp=1668087711655, source='peer recovery'}}}] 2022.11.10 17:18:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:18:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 17:18:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:18:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:18:58 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:18:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:18:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:18:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:18:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:18:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:19:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:19:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:19:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:19:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:19:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:19:05 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:19:06 DEBUG es[][o.e.i.f.p.AbstractIndexOrdinalsFieldData] global-ordinals [join_rules#rule][222] took [4.3ms] 2022.11.10 17:19:07 DEBUG es[][o.e.i.f.p.AbstractIndexOrdinalsFieldData] global-ordinals [join_rules#rule][214] took [1.6ms] 
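Note on the retention-lease entries above: at DEBUG level each shard logs its peer-recovery retention leases on every background expiry check, so the same lease sets recur at roughly 30-second intervals (17:18:31, 17:19:01, 17:19:31, ...). The timestamp values are epoch milliseconds; comparing the renewed lease timestamp 1668097167652 with the log line that prints it at 17:19:27 puts the server clock at UTC+1. Decoded that way, the version=1 leases date from index creation (around 13:27 local time) and were never renewed, while the leases with higher primaryTerm and non-zero retainingSequenceNumber belong to the indices that kept receiving writes. A minimal decoding sketch in Python (timestamp values copied from the entries above):

    from datetime import datetime, timezone

    # `timestamp` fields in RetentionLease entries are epoch milliseconds.
    lease_timestamps_ms = [1668083261424, 1668083331242, 1668097167652]

    for ms in lease_timestamps_ms:
        utc = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
        print(ms, "->", utc.isoformat(timespec="seconds"))

    # 1668083261424 -> 2022-11-10T12:27:41+00:00  (13:27 local: index creation)
    # 1668083331242 -> 2022-11-10T12:28:51+00:00
    # 1668097167652 -> 2022-11-10T16:19:27+00:00  (17:19 local: lease renewed after writes)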
2022.11.10 17:19:19 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded 2022.11.10 17:19:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 17:19:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:19:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=263, timestamp=1668097167652, source='peer recovery'}}}] 2022.11.10 17:19:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:19:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=270, timestamp=1668097167652, source='peer recovery'}}}] 2022.11.10 17:19:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 17:19:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 17:19:27 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 17:19:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:19:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 
17:19:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:19:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 17:19:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 17:19:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:19:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 17:19:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:19:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 17:19:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 17:19:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:19:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 17:19:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current 
retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:19:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 17:19:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 17:19:35 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 17:19:35 INFO es[][o.e.n.Node] stopping ... 2022.11.10 17:19:35 DEBUG es[][o.e.i.IndicesService] [components] closing ... (reason [SHUTDOWN]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [issues] closing ... (reason [SHUTDOWN]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [issues/VZ6DTALkToeQgIcEr8PrtQ] closing index service (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:35 DEBUG es[][o.e.i.IndicesService] [components/z8jTFy28Rq2m0AWGxQGyuw] closing index service (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [views] closing ... (reason [SHUTDOWN]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [metadatas] closing ... (reason [SHUTDOWN]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [metadatas/N-wJ8qPTTTyTYoXbXg3F8g] closing index service (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [views/Dv2T0qmGRX2UjXF3FDCAMw] closing index service (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [projectmeasures] closing ... (reason [SHUTDOWN]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closing... 
(reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [projectmeasures/SWp38y_dTeW_i3HApsNVsQ] closing index service (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [1] closing... 
(reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [2] closing... 
(reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [metadatas/N-wJ8qPTTTyTYoXbXg3F8g] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [users] closing ... (reason [SHUTDOWN]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [users/qbsuDVZZRrqTwp7HSmmTgw] closing index service (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closing... 
(reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [3] closing... 
(reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [users/qbsuDVZZRrqTwp7HSmmTgw] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [rules] closing ... (reason [SHUTDOWN]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [rules/ZPzru4r4QR6_c8p1MyrcNg] closing index service (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [4] closing... 
(reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndicesService] [views/Dv2T0qmGRX2UjXF3FDCAMw] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:36 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:36 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:36 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown]) 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:36 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown]) 2022.11.10 17:19:37 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:37 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:37 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:37 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown]) 2022.11.10 17:19:37 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:37 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 17:19:37 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:37 DEBUG es[][o.e.i.IndicesService] [projectmeasures/SWp38y_dTeW_i3HApsNVsQ] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:37 DEBUG es[][o.e.i.IndicesService] [issues/VZ6DTALkToeQgIcEr8PrtQ] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:37 DEBUG es[][o.e.i.IndicesService] [components/z8jTFy28Rq2m0AWGxQGyuw] closed... 
(reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_6], userData[{local_checkpoint=262, max_unsafe_auto_id_timestamp=-1, translog_uuid=RXIm6O9fSW2z7KP4FL2biQ, min_retained_seq_no=257, history_uuid=rQ8wy4RFQKOWF5FBzTyM0A, es_version=7.17.5, max_seq_no=262}]}], last commit [CommitPoint{segment[segments_6], userData[{local_checkpoint=262, max_unsafe_auto_id_timestamp=-1, translog_uuid=RXIm6O9fSW2z7KP4FL2biQ, min_retained_seq_no=257, history_uuid=rQ8wy4RFQKOWF5FBzTyM0A, es_version=7.17.5, max_seq_no=262}]}] 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] Delete index commit [CommitPoint{segment[segments_5], userData[{es_version=7.17.5, history_uuid=rQ8wy4RFQKOWF5FBzTyM0A, local_checkpoint=256, max_seq_no=256, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=249, translog_uuid=RXIm6O9fSW2z7KP4FL2biQ}]}] 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] new commit on flush, hasUncommittedChanges:true, force:false, shouldPeriodicallyFlush:false 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:37 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:37 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:37 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown]) 2022.11.10 17:19:37 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown]) 2022.11.10 17:19:37 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_6], userData[{local_checkpoint=269, max_unsafe_auto_id_timestamp=-1, translog_uuid=A-DhRTu6R06UjeegIwa7MA, min_retained_seq_no=264, history_uuid=ay94HNIbT4Cqi04CJWiRAA, es_version=7.17.5, max_seq_no=269}]}], last commit [CommitPoint{segment[segments_6], userData[{local_checkpoint=269, max_unsafe_auto_id_timestamp=-1, translog_uuid=A-DhRTu6R06UjeegIwa7MA, min_retained_seq_no=264, history_uuid=ay94HNIbT4Cqi04CJWiRAA, es_version=7.17.5, max_seq_no=269}]}] 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] Delete index commit [CommitPoint{segment[segments_5], userData[{es_version=7.17.5, history_uuid=ay94HNIbT4Cqi04CJWiRAA, local_checkpoint=263, max_seq_no=263, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=256, translog_uuid=A-DhRTu6R06UjeegIwa7MA}]}] 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] new commit on flush, hasUncommittedChanges:true, force:false, shouldPeriodicallyFlush:false 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 17:19:37 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 17:19:37 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 17:19:37 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 17:19:37 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown]) 2022.11.10 17:19:37 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:37 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 17:19:37 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 17:19:37 DEBUG es[][o.e.i.IndicesService] 
[rules/ZPzru4r4QR6_c8p1MyrcNg] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 17:19:37 INFO es[][o.e.n.Node] stopped 2022.11.10 17:19:37 INFO es[][o.e.n.Node] closing ... 2022.11.10 17:19:37 INFO es[][o.e.n.Node] closed
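The shutdown above completes cleanly: each index service flushes its shards on close, acquires the engine writeLock, closes the translog and engine, and releases the store (reference count 0) before the node reports stopped and closed. When the node starts again below (18:09:58), the first DEBUG output is the JarHell check, which walks every jar on the classpath and refuses to start if the same class is provided by two different jars. A rough illustrative sketch of such a duplicate-class scan, in Python rather than Elasticsearch's actual Java implementation (the lib directory argument is just an example):

    import sys
    import zipfile
    from pathlib import Path

    def find_duplicate_classes(lib_dir):
        """Map every .class entry to the jars containing it and keep collisions."""
        seen = {}
        for jar in sorted(Path(lib_dir).glob("*.jar")):
            with zipfile.ZipFile(jar) as zf:
                for name in zf.namelist():
                    if name.endswith(".class"):
                        seen.setdefault(name, []).append(jar.name)
        return {cls: jars for cls, jars in seen.items() if len(jars) > 1}

    if __name__ == "__main__":
        # example: python jarhell_sketch.py /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib
        duplicates = find_duplicate_classes(sys.argv[1])
        for cls, jars in duplicates.items():
            print("duplicate class", cls, "found in:", ", ".join(jars))
        print("no jar hell" if not duplicates else str(len(duplicates)) + " duplicate classes")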
2022.11.10 18:09:58 DEBUG es[][o.e.b.SystemCallFilter] Linux seccomp filter installation successful, threads: [all] 2022.11.10 18:10:02 DEBUG es[][o.e.j.JarHell] java.class.path: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 18:10:02 DEBUG es[][o.e.j.JarHell] sun.boot.class.path: null 2022.11.10 18:10:05 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 18:10:05 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar 2022.11.10 18:10:05 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar 2022.11.10 18:10:05 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar 2022.11.10 18:10:05 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar 2022.11.10 18:10:05 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar 2022.11.10 18:10:05 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar 2022.11.10 18:10:05 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar 2022.11.10 18:10:05 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar:
/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar 2022.11.10 18:10:06 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 18:10:08 DEBUG es[][o.e.c.n.IfConfig] configuration: lo inet 127.0.0.1 netmask:255.0.0.0 scope:host inet6 ::1 prefixlen:128 scope:host UP LOOPBACK mtu:65536 index:1 eth0 inet 192.168.161.171 netmask:255.255.255.0 broadcast:192.168.161.255 scope:site hardware 00:50:56:A3:2C:79 UP MULTICAST mtu:1500 index:2 2022.11.10 18:10:14 INFO es[][o.e.n.Node] version[7.17.5], pid[1694], build[default/tar/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/5.14.21-150400.24.28-default/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.16/11.0.16+8-suse-150000.3.83.1-x8664] 2022.11.10 18:10:14 INFO es[][o.e.n.Node] JVM home [/usr/lib64/jvm/java-11-openjdk-11] 2022.11.10 18:10:14 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/chili/sonarqube-9.7.0.61563/elasticsearch, -Des.path.conf=/home/chili/sonarqube-9.7.0.61563/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false] 2022.11.10 18:10:14 DEBUG es[][o.e.n.Node] using config [/home/chili/sonarqube-9.7.0.61563/temp/conf/es], data [[/home/chili/sonarqube-9.7.0.61563/data/es7]], logs 
[/home/chili/sonarqube-9.7.0.61563/logs], plugins [/home/chili/sonarqube-9.7.0.61563/elasticsearch/plugins] 2022.11.10 18:10:15 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 18:10:15 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar 2022.11.10 18:10:15 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 18:10:15 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-7.2.jar 2022.11.10 18:10:15 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-util-7.2.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-commons-7.2.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar 2022.11.10 18:10:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar 2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar 2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/lang-painless-7.17.5.jar 2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar 2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar 2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar 2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar 2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: 
/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/spi/elasticsearch-scripting-painless-spi-7.17.5.jar
2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-tree-7.2.jar
2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 18:10:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-analysis-7.2.jar
2022.11.10 18:10:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 18:10:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 18:10:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 18:10:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 18:10:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 18:10:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 18:10:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 18:10:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 18:10:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 18:10:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 18:10:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 18:10:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar
2022.11.10 18:10:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar
2022.11.10 18:10:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar
2022.11.10 18:10:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar
2022.11.10 18:10:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar
2022.11.10 18:10:28 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/parent-join/parent-join-client-7.17.5.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 18:10:29 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/elasticsearch-ssl-config-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/commons-logging-1.1.3.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpcore-4.4.12.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/elasticsearch-rest-client-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpasyncclient-4.1.4.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpclient-4.5.10.jar
2022.11.10 18:10:30 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/reindex-client-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpcore-nio-4.4.12.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/commons-codec-1.11.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/spi/elasticsearch-scripting-painless-spi-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/analysis-common/analysis-common-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/spi/elasticsearch-scripting-painless-spi-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/analysis-common/analysis-common-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar
2022.11.10 18:10:31 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-buffer-4.1.66.Final.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-codec-http-4.1.66.Final.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-resolver-4.1.66.Final.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-codec-4.1.66.Final.jar
2022.11.10 18:10:33 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-transport-4.1.66.Final.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-handler-4.1.66.Final.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/transport-netty4-client-7.17.5.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-common-4.1.66.Final.jar
2022.11.10 18:10:34 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar
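The same lib/*.jar entries recur above with different timestamps because the jar-hell check is run once per module, each time over that module's effective classpath. A minimal sketch of the idea behind such a check (an illustrative helper, not the actual org.elasticsearch.bootstrap.JarHell implementation): remember which jar first provided each class file and fail as soon as a second jar provides the same one.

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;
    import java.util.jar.JarFile;

    // Sketch of a duplicate-class ("jar hell") detector over a directory of jars.
    public final class JarHellSketch {
        public static void main(String[] args) throws IOException {
            Map<String, Path> owners = new HashMap<>(); // class entry -> first jar that claimed it
            try (DirectoryStream<Path> jars = Files.newDirectoryStream(Paths.get(args[0]), "*.jar")) {
                for (Path jar : jars) {
                    System.out.println("examining jar: " + jar);
                    try (JarFile jf = new JarFile(jar.toFile())) {
                        jf.stream()
                          .filter(e -> e.getName().endsWith(".class"))
                          .forEach(e -> {
                              Path prev = owners.putIfAbsent(e.getName(), jar);
                              if (prev != null && !prev.equals(jar)) {
                                  // Same fully-qualified class reachable from two jars: ambiguous loading.
                                  throw new IllegalStateException("jar hell! class " + e.getName()
                                          + " in " + prev + " and " + jar);
                              }
                          });
                    }
                }
            }
        }
    }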
2022.11.10 18:10:35 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2022.11.10 18:10:35 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2022.11.10 18:10:35 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2022.11.10 18:10:35 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2022.11.10 18:10:35 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2022.11.10 18:10:35 INFO es[][o.e.p.PluginsService] no plugins loaded
2022.11.10 18:10:37 DEBUG es[][o.e.e.NodeEnvironment] using node location [[DataPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0, indicesPath=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices, fileStore=/ (/dev/sda2), majorDeviceNumber=8, minorDeviceNumber=2}]], local_lock_id [0]
2022.11.10 18:10:37 DEBUG es[][o.e.e.NodeEnvironment] node data locations details: -> /home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0, free_space [22.4gb], usable_space [19.7gb], total_space [51gb], mount [/ (/dev/sda2)], type [ext4]
2022.11.10 18:10:37 INFO es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2022.11.10 18:10:41 INFO es[][o.e.n.Node] node name [sonarqube], node ID [NoJ4WfHARK-DEgu5GKXOeg], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_coordination], size [1], queue size [1k]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [snapshot_meta], core [1], max [6], keep alive [30s]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [4], keep alive [5m]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_critical_write], size [1], queue size [1.5k]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [listener], size [1], queue size [unbounded]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [1], keep alive [5m]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_write], size [1], queue size [1k]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [1], keep alive [5m]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [auto_complete], size [1], queue size [100]
2022.11.10 18:10:41 DEBUG es[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [sonarqube/search] will adjust queue by [50] when determining automatic queue size
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search], size [4], queue size [1k]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [1], keep alive [5m]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [4], keep alive [5m]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [2], keep alive [5m]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [get], size [2], queue size [1k]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [analyze], size [1], queue size [16]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_read], size [1], queue size [2k]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_critical_read], size [1], queue size [2k]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [write], size [2], queue size [10k]
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [1], keep alive [5m]
2022.11.10 18:10:41 DEBUG es[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [sonarqube/search_throttled] will adjust queue by [50] when determining automatic queue size
2022.11.10 18:10:41 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_throttled], size [1], queue size [100]
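Each pool above is a fixed-size executor with a bounded queue; a "queue size [10k]" pool such as [write] rejects work once 10,000 tasks back up instead of growing without bound. A rough JDK-only equivalent of that shape (sizes taken from the log; Elasticsearch's own executors add instrumentation on top):

    import java.util.concurrent.*;

    public final class WritePoolSketch {
        public static void main(String[] args) {
            // Approximation of the "write" pool: size [2], queue size [10k].
            ThreadPoolExecutor write = new ThreadPoolExecutor(
                    2, 2,                       // core == max: fixed size
                    0L, TimeUnit.MILLISECONDS,  // fixed pools need no keep-alive
                    new ArrayBlockingQueue<>(10_000),        // bounded queue gives back-pressure
                    new ThreadPoolExecutor.AbortPolicy());   // reject rather than grow unbounded
            write.execute(() -> System.out.println("indexing task"));
            write.shutdown();
        }
    }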
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.l.InternalLoggerFactory] Using Log4J2 as the default logging framework
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: true
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe: unavailable (io.netty.noUnsafe)
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.PlatformDependent0] Java version: 11
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): unavailable
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.PlatformDependent] maxDirectMemory: 536870912 bytes (maybe)
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /home/chili/sonarqube-9.7.0.61563/temp (java.io.tmpdir)
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2022.11.10 18:10:44 DEBUG es[][i.n.u.i.CleanerJava9] java.nio.ByteBuffer.cleaner(): unavailable
java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable
    at io.netty.util.internal.CleanerJava9.<clinit>(CleanerJava9.java:68) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:193) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.ConstantPool.<init>(ConstantPool.java:34) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at org.elasticsearch.http.netty4.Netty4HttpServerTransport.<clinit>(Netty4HttpServerTransport.java:294) [transport-netty4-client-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:45) [transport-netty4-client-7.17.5.jar:7.17.5]
    at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:84) [elasticsearch-7.17.5.jar:7.17.5]
    at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271) [?:?]
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) [?:?]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) [?:?]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) [?:?]
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) [?:?]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [?:?]
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) [?:?]
    at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:84) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.node.Node.<init>(Node.java:483) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.node.Node.<init>(Node.java:309) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) [elasticsearch-cli-7.17.5.jar:7.17.5]
    at org.elasticsearch.cli.Command.main(Command.java:77) [elasticsearch-cli-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) [elasticsearch-7.17.5.jar:7.17.5]
2022.11.10 18:10:45 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: true
2022.11.10 18:11:28 DEBUG es[][o.e.s.ScriptService] using script cache with max_size [3000], expire [0s]
2022.11.10 18:11:38 DEBUG es[][o.e.d.z.ElectMasterService] using minimum_master_nodes [-1]
2022.11.10 18:11:49 DEBUG es[][o.e.m.j.JvmGcMonitorService] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
2022.11.10 18:11:50 DEBUG es[][o.e.m.o.OsService] using refresh_interval [1s]
2022.11.10 18:11:50 DEBUG es[][o.e.m.p.ProcessService] using refresh_interval [1s]
2022.11.10 18:11:50 DEBUG es[][o.e.m.j.JvmService] using refresh_interval [1s]
2022.11.10 18:11:50 DEBUG es[][o.e.m.f.FsService] using refresh_interval [1s]
2022.11.10 18:11:50 DEBUG es[][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2022.11.10 18:11:50 DEBUG es[][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] using [cluster_concurrent_rebalance] with [2]
2022.11.10 18:11:51 DEBUG es[][o.e.c.r.a.d.ThrottlingAllocationDecider] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
2022.11.10 18:11:51 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [51.1mb] max filter count [10000]
2022.11.10 18:11:51 DEBUG es[][o.e.i.IndexingMemoryController] using indexing buffer size [51.1mb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
2022.11.10 18:11:56 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2022.11.10 18:11:56 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.targetRecords: 4
2022.11.10 18:11:57 INFO es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2022.11.10 18:11:57 DEBUG es[][o.e.h.n.Netty4HttpServerTransport] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[64kb], max_composite_buffer_components[69905], pipelining_max_events[10000]
2022.11.10 18:11:57 INFO es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2022.11.10 18:11:57 DEBUG es[][o.e.d.SettingsBasedSeedHostsProvider] using initial hosts [127.0.0.1]
2022.11.10 18:11:58 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2022.11.10 18:12:03 INFO es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2022.11.10 18:12:07 DEBUG es[][o.e.n.Node] initializing HTTP handlers ...
2022.11.10 18:12:08 INFO es[][o.e.n.Node] initialized
2022.11.10 18:12:08 INFO es[][o.e.n.Node] starting ...
2022.11.10 18:12:08 DEBUG es[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 4
2022.11.10 18:12:08 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2022.11.10 18:12:08 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2022.11.10 18:12:08 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: true
2022.11.10 18:12:08 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2022.11.10 18:12:08 DEBUG es[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: unavailable
2022.11.10 18:12:09 DEBUG es[][o.e.t.n.Netty4Transport] using profile[default], worker_count[2], port[34357], bind_host[[127.0.0.1]], publish_host[[127.0.0.1]], receive_predictor[64kb->64kb]
2022.11.10 18:12:09 DEBUG es[][o.e.t.TcpTransport] binding server bootstrap to: [127.0.0.1]
2022.11.10 18:12:09 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.processId: 1694 (auto-detected)
2022.11.10 18:12:09 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv4Stack: false
2022.11.10 18:12:09 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv6Addresses: false
2022.11.10 18:12:09 DEBUG es[][i.n.u.NetUtilInitializations] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
2022.11.10 18:12:09 DEBUG es[][i.n.u.NetUtil] /proc/sys/net/core/somaxconn: 4096
2022.11.10 18:12:09 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 00:50:56:ff:fe:a3:2c:79 (auto-detected)
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 4
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 0
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2022.11.10 18:12:09 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
2022.11.10 18:12:09 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2022.11.10 18:12:09 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
2022.11.10 18:12:09 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
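The allocator numbers above are consistent with each other: a Netty arena chunk is pageSize shifted left by maxOrder bits, so 8192 << 11 = 16777216 bytes (16 MiB), which is exactly the reported -Dio.netty.allocator.chunkSize. A one-line check:

    // Netty arena chunk size = pageSize * 2^maxOrder, using the values logged above.
    public final class ChunkSize {
        public static void main(String[] args) {
            int pageSize = 8192, maxOrder = 11;
            System.out.println(pageSize << maxOrder); // prints 16777216
        }
    }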
2022.11.10 18:12:11 DEBUG es[][o.e.t.TcpTransport] Bound profile [default] to address {127.0.0.1:34357}
2022.11.10 18:12:11 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:34357}, bound_addresses {127.0.0.1:34357}
2022.11.10 18:12:18 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [1798ms]; wrote full state with [7] indices
2022.11.10 18:12:18 INFO es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2022.11.10 18:12:18 DEBUG es[][o.e.d.SeedHostsResolver] using max_concurrent_resolvers [10], resolver timeout [5s]
2022.11.10 18:12:18 INFO es[][o.e.c.c.Coordinator] cluster UUID [pThM6foASlO2PdfSbEivXw]
2022.11.10 18:12:19 DEBUG es[][o.e.t.TransportService] now accepting incoming requests
2022.11.10 18:12:19 DEBUG es[][o.e.c.c.Coordinator] startInitialJoin: coordinator becoming CANDIDATE in term 8 (was null, lastKnownLeader was [Optional.empty])
2022.11.10 18:12:19 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=86, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2022.11.10 18:12:19 DEBUG es[][o.e.n.Node] waiting to join the cluster. timeout [30s]
2022.11.10 18:12:19 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=86, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} starting election
2022.11.10 18:12:19 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=573, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2022.11.10 18:12:19 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={}, electionStarted=false, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=8}, isClosed=false} requesting pre-votes from [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}]
2022.11.10 18:12:19 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}=PreVoteResponse{currentTerm=8, lastAcceptedTerm=8, lastAcceptedVersion=128}}, electionStarted=true, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=8}, isClosed=false} added PreVoteResponse{currentTerm=8, lastAcceptedTerm=8, lastAcceptedVersion=128} from {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, starting election
2022.11.10 18:12:19 DEBUG es[][o.e.c.c.Coordinator] starting election with StartJoinRequest{term=9,node={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}}
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.Coordinator] joinLeaderInTerm: for [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] with term 9
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.CoordinationState] handleStartJoin: leaving term [8] due to StartJoinRequest{term=9,node={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}}
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=573, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} starting election
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=500ms, thisAttempt=2, maxDelayMillis=300, delayMillis=516, ElectionScheduler{attempt=3, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={}, electionStarted=false, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=9}, isClosed=false} requesting pre-votes from [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}]
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}=PreVoteResponse{currentTerm=9, lastAcceptedTerm=8, lastAcceptedVersion=128}}, electionStarted=true, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=9}, isClosed=false} added PreVoteResponse{currentTerm=9, lastAcceptedTerm=8, lastAcceptedVersion=128} from {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, starting election
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.Coordinator] starting election with StartJoinRequest{term=10,node={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}}
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.JoinHelper] attempting to join {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=8, optionalJoin=Optional[Join{term=9, lastAcceptedTerm=8, lastAcceptedVersion=128, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}}]}
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.Coordinator] joinLeaderInTerm: for [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] with term 10
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.CoordinationState] handleStartJoin: leaving term [9] due to StartJoinRequest{term=10,node={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}}
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.JoinHelper] successful response to StartJoinRequest{term=9,node={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}} from {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.JoinHelper] attempting to join {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=9, optionalJoin=Optional[Join{term=10, lastAcceptedTerm=8, lastAcceptedVersion=128, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}}]}
2022.11.10 18:12:20 DEBUG es[][o.e.c.c.JoinHelper] successful response to StartJoinRequest{term=10,node={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}} from {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}
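The election delays above fit a simple randomized backoff: each attempt waits gracePeriod plus a random amount of up to initialTimeout + (attempt - 1) * backoffTime milliseconds, capped at maxTimeout. That explains delayMillis values of 86, 573, 516 and 642 against caps of 100, 200, 300 and 400 ms (plus the 500 ms grace period after the first round). A sketch of that schedule, reconstructed from the log rather than from the ElectionSchedulerFactory source:

    import java.util.concurrent.ThreadLocalRandom;

    // Randomized election backoff as suggested by the scheduleNextElection entries.
    public final class ElectionBackoff {
        static long delayMillis(int attempt, long graceMs) {
            long initialTimeout = 100, backoffTime = 100, maxTimeout = 10_000; // from the log
            long maxDelay = Math.min(initialTimeout + (attempt - 1) * backoffTime, maxTimeout);
            return graceMs + ThreadLocalRandom.current().nextLong(maxDelay) + 1; // 1..maxDelay
        }
        public static void main(String[] args) {
            System.out.println(delayMillis(1, 0));   // e.g. 86  (cap 100)
            System.out.println(delayMillis(2, 500)); // e.g. 573 (500 + up to 200)
        }
    }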
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=500ms, thisAttempt=2, maxDelayMillis=300, delayMillis=516, ElectionScheduler{attempt=3, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} starting election
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=500ms, thisAttempt=3, maxDelayMillis=400, delayMillis=642, ElectionScheduler{attempt=4, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={}, electionStarted=false, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=10}, isClosed=false} requesting pre-votes from [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}]
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}=PreVoteResponse{currentTerm=10, lastAcceptedTerm=8, lastAcceptedVersion=128}}, electionStarted=true, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=10}, isClosed=false} added PreVoteResponse{currentTerm=10, lastAcceptedTerm=8, lastAcceptedVersion=128} from {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, starting election
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: added join Join{term=10, lastAcceptedTerm=8, lastAcceptedVersion=128, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}} from [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] for election, electionWon=true lastAcceptedTerm=8 lastAcceptedVersion=128
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: election won in term [10] with VoteCollection{votes=[NoJ4WfHARK-DEgu5GKXOeg], joins=[Join{term=10, lastAcceptedTerm=8, lastAcceptedVersion=128, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}}]}
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.Coordinator] handleJoinRequest: coordinator becoming LEADER in term 10 (was CANDIDATE, lastKnownLeader was [Optional.empty])
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: ignored join due to term mismatch (expected: [10], actual: [9])
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.Coordinator] failed to add Join{term=9, lastAcceptedTerm=8, lastAcceptedVersion=128, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}} - ignoring
org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: incoming term 9 does not match current term 10
    at org.elasticsearch.cluster.coordination.CoordinationState.handleJoin(CoordinationState.java:230) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinIgnoringExceptions(Coordinator.java:1254) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:1234) [elasticsearch-7.17.5.jar:7.17.5]
    at java.util.Optional.ifPresent(Optional.java:183) [?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:707) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$8(Coordinator.java:594) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.ActionListener$DelegatingFailureActionListener.onResponse(ActionListener.java:219) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:101) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.ListenableActionFuture.executeListener(ListenableActionFuture.java:89) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.ListenableActionFuture.addListener(ListenableActionFuture.java:54) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.coordination.Coordinator$1.onResponse(Coordinator.java:633) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.coordination.Coordinator$1.onResponse(Coordinator.java:630) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.ActionListener$DelegatingActionListener.onResponse(ActionListener.java:186) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:43) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1471) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService$DirectResponseChannel.processResponse(TransportService.java:1549) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService$DirectResponseChannel$1.run(TransportService.java:1534) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:718) [elasticsearch-7.17.5.jar:7.17.5]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
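The rejected join above is the intended safety rule, not a failure: a StartJoinRequest may only move the node to a strictly greater term, and a Join vote counts only if its term equals the current term, so the stale term-9 vote is discarded once term 10 has begun. A condensed illustration of those two checks (mirroring the handleStartJoin/handleJoin messages, not the actual CoordinationState class):

    // Term-monotonicity checks behind the handleStartJoin/handleJoin messages above.
    public final class TermStateSketch {
        private long currentTerm = 8; // the node wakes up in term 8, as in the log

        void handleStartJoin(long term) {
            if (term <= currentTerm)
                throw new IllegalArgumentException(
                        "term " + term + " not greater than current term " + currentTerm);
            System.out.println("leaving term [" + currentTerm + "] for term [" + term + "]");
            currentTerm = term;
        }

        void handleJoin(long joinTerm) {
            if (joinTerm != currentTerm) // "ignored join due to term mismatch"
                throw new IllegalStateException("incoming term " + joinTerm
                        + " does not match current term " + currentTerm);
        }

        public static void main(String[] args) {
            TermStateSketch state = new TermStateSketch();
            state.handleStartJoin(9);
            state.handleStartJoin(10);
            state.handleJoin(9); // throws, like the stale term-9 join above
        }
    }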
2022.11.10 18:12:21 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=500ms, thisAttempt=3, maxDelayMillis=400, delayMillis=642, ElectionScheduler{attempt=4, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} not starting election
2022.11.10 18:12:21 DEBUG es[][o.e.c.c.JoinHelper] received a join request for an existing node [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}]
2022.11.10 18:12:22 DEBUG es[][o.e.c.s.MasterService] took [629ms] to compute cluster state update for [elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 18:12:22 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [129], source [elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 18:12:22 INFO es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 10, version: 129, delta: master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}]}
2022.11.10 18:12:22 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [129]
2022.11.10 18:12:23 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [129] with size [5109]
2022.11.10 18:12:24 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [716ms]; wrote full state with [7] indices
2022.11.10 18:12:24 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=129}]: execute
2022.11.10 18:12:24 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [129], source [Publication{term=10, version=129}]
2022.11.10 18:12:24 INFO es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}]}, term: 10, version: 129, reason: Publication{term=10, version=129}
2022.11.10 18:12:24 DEBUG es[][o.e.c.NodeConnectionsService] connecting to {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}
2022.11.10 18:12:24 DEBUG es[][o.e.c.NodeConnectionsService] connected to {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}
2022.11.10 18:12:24 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 129
2022.11.10 18:12:24 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 129
2022.11.10 18:12:25 DEBUG es[][o.e.i.SystemIndexManager] Waiting until state has been recovered
2022.11.10 18:12:25 DEBUG es[][o.e.g.GatewayService] performing state recovery...
2022.11.10 18:12:25 DEBUG es[][o.e.c.l.NodeAndClusterIdStateListener] Received cluster state update. Setting nodeId=[NoJ4WfHARK-DEgu5GKXOeg] and clusterUuid=[pThM6foASlO2PdfSbEivXw]
2022.11.10 18:12:25 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=129}]: took [1.6s] done applying updated cluster state (version: 129, uuid: FEivQ3VjRuWJ7N4q-Zxvww)
2022.11.10 18:12:25 DEBUG es[][o.e.c.c.JoinHelper] releasing [1] connections on successful cluster state application
2022.11.10 18:12:25 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=129}
2022.11.10 18:12:26 DEBUG es[][o.e.c.c.JoinHelper] successfully joined {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=9, optionalJoin=Optional[Join{term=10, lastAcceptedTerm=8, lastAcceptedVersion=128, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}}]}
2022.11.10 18:12:26 DEBUG es[][o.e.c.s.MasterService] took [29ms] to notify listeners on successful publication of cluster state (version: 129, uuid: FEivQ3VjRuWJ7N4q-Zxvww) for [elected-as-master ([1] nodes joined)[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 18:12:26 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [node-join[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw} join existing leader]]
2022.11.10 18:12:26 DEBUG es[][o.e.c.c.JoinHelper] received a join request for an existing node [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}]
2022.11.10 18:12:26 DEBUG es[][o.e.c.s.MasterService] took [2ms] to compute cluster state update for [node-join[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw} join existing leader]]
2022.11.10 18:12:26 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [130], source [node-join[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw} join existing leader]]
2022.11.10 18:12:26 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [130]
2022.11.10 18:12:26 DEBUG es[][o.e.h.AbstractHttpServerTransport] Bound http to address {127.0.0.1:9001}
2022.11.10 18:12:26 INFO es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2022.11.10 18:12:26 INFO es[][o.e.n.Node] started
2022.11.10 18:12:26 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [130] with size [5110]
2022.11.10 18:12:26 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [237ms]; wrote global metadata [false] and metadata for [0] indices and skipped [7] unchanged indices
2022.11.10 18:12:26 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=130}]: execute
2022.11.10 18:12:26 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [130], source [Publication{term=10, version=130}]
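Versions 129 and 130 follow the same loop: the elected master computes a new state (version + 1), publishes it, each node (here, only itself) persists and applies it, and only then are listeners notified. A compressed sketch of that ordering (illustrative only; the real PublicationTransportHandler can also ship diffs instead of full states, and publication involves acknowledgement from a quorum):

    // Order of operations visible in the MasterService/ClusterApplierService entries above.
    public final class PublicationSketch {
        private long version = 128; // lastAcceptedVersion before the election above

        void publish(Runnable persist, Runnable apply, Runnable notifyListeners) {
            version += 1;           // "cluster state updated, version [129]"
            persist.run();          // "writing cluster state took [...]"
            apply.run();            // "set locally applied cluster state to version 129"
            notifyListeners.run();  // "took [29ms] to notify listeners ..."
        }

        public static void main(String[] args) {
            PublicationSketch master = new PublicationSketch();
            master.publish(
                    () -> System.out.println("persist"),
                    () -> System.out.println("apply"),
                    () -> System.out.println("notify"));
            System.out.println("published version " + master.version); // 129
        }
    }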
version=130}] 2022.11.10 18:12:26 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 130 2022.11.10 18:12:26 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 130 2022.11.10 18:12:26 DEBUG es[][o.e.i.SystemIndexManager] Waiting until state has been recovered 2022.11.10 18:12:26 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=130}]: took [0s] done applying updated cluster state (version: 130, uuid: 26drxZ6tRCO-uie2s3YMpw) 2022.11.10 18:12:26 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=130} 2022.11.10 18:12:26 DEBUG es[][o.e.c.c.JoinHelper] successfully joined {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=8, optionalJoin=Optional[Join{term=9, lastAcceptedTerm=8, lastAcceptedVersion=128, sourceNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}}]} 2022.11.10 18:12:26 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 130, uuid: 26drxZ6tRCO-uie2s3YMpw) for [node-join[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw} join existing leader]] 2022.11.10 18:12:26 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(post-join reroute)] 2022.11.10 18:12:26 DEBUG es[][o.e.c.s.MasterService] took [21ms] to compute cluster state update for [cluster_reroute(post-join reroute)] 2022.11.10 18:12:27 DEBUG es[][o.e.c.s.MasterService] took [147ms] to notify listeners on unchanged cluster state for [cluster_reroute(post-join reroute)] 2022.11.10 18:12:27 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [update snapshot after shards started [false] or node configuration changed [true]] 2022.11.10 18:12:27 DEBUG es[][o.e.c.s.MasterService] took [1ms] to compute cluster state update for [update snapshot after shards started [false] or node configuration changed [true]] 2022.11.10 18:12:27 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on unchanged cluster state for [update snapshot after shards started [false] or node configuration changed [true]] 2022.11.10 18:12:27 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [local-gateway-elected-state] 2022.11.10 18:12:28 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][19] overhead, spent [271ms] collecting in the last [1s] 2022.11.10 18:12:28 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkAccessible: true 2022.11.10 18:12:28 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkBounds: true 2022.11.10 18:12:28 DEBUG es[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@da046de 2022.11.10 18:12:28 DEBUG es[][o.e.c.r.a.a.BalancedShardsAllocator] skipping rebalance due to in-flight shard/store fetches 2022.11.10 18:12:28 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/4], state path 
[/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/4] 2022.11.10 18:12:28 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/0] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][4] shard state info found: [primary [true], allocation [[id=yFj13EkETzO7NoXsHJA0qQ]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/3] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/2] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][0] shard state info found: [primary [true], allocation [[id=EIDR4Z8sRbetiOgv_Gl6eg]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][2] shard state info found: [primary [true], allocation [[id=Hjj-ITd1RhefzuwGKvoIVg]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [users][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/qbsuDVZZRrqTwp7HSmmTgw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/qbsuDVZZRrqTwp7HSmmTgw/0] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][3] shard state info found: [primary [true], allocation [[id=GMXXUqyWTgyZHRuUQEBQcg]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/1] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [users][0] shard state info found: [primary [true], allocation [[id=7bV3f8r_QgiH2TJFI0Tp9Q]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/3] 2022.11.10 18:12:29 DEBUG es[][o.e.c.s.MasterService] took [2s] to compute cluster state update for [local-gateway-elected-state] 2022.11.10 18:12:29 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [131], source [local-gateway-elected-state] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/0] 2022.11.10 18:12:29 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [131] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/1], state path 
[/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/1] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][0] shard state info found: [primary [true], allocation [[id=RC24NNOeRQCfrMHX0C34QA]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][1] shard state info found: [primary [true], allocation [[id=iYG4colRQrOYKH5Yc8PSZA]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][3] shard state info found: [primary [true], allocation [[id=2HS0XG1gQR6iBC5chKTVgg]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/1] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][1] shard state info found: [primary [true], allocation [[id=bVQrg9ZkS5yhnq7qEQFkGg]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/4] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/2] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][1] shard state info found: [primary [true], allocation [[id=mX6GCP9-Qw6hq_eiNMarnQ]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/0] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][2] shard state info found: [primary [true], allocation [[id=55kP9r7eSjaPkAx2Hput1A]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][4] shard state info found: [primary [true], allocation [[id=kSUj9IavSMSvCn04gk5QwQ]]] 2022.11.10 18:12:29 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/1] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/2] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][1] shard state info found: [primary [true], allocation [[id=gpYEHqhWQRebcYya4bLGXw]]] 2022.11.10 18:12:30 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [131] with size [5191] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/0], 
state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/0] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][0] shard state info found: [primary [true], allocation [[id=1PnVxJHzRAyB7duGMdX5iQ]]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][0] shard state info found: [primary [true], allocation [[id=40lmXE2jRQmia9KK1O9esg]]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/3] 2022.11.10 18:12:30 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [200ms]; wrote global metadata [false] and metadata for [0] indices and skipped [7] unchanged indices 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][3] shard state info found: [primary [true], allocation [[id=dMKfOEhTTS-1oGo7YS-QXQ]]] 2022.11.10 18:12:30 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=131}]: execute 2022.11.10 18:12:30 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [131], source [Publication{term=10, version=131}] 2022.11.10 18:12:30 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 131 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/1] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][2] shard state info found: [primary [true], allocation [[id=cfnefW7VREqduM2gqZw7DA]]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/4] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/4] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/3] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][4] shard state info found: [primary [true], allocation [[id=m26EZ-e2ShaXq3vCmyTSxA]]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][1] shard state info found: [primary [true], allocation [[id=N1eAWJPOSXWHssmbXnJsZg]]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][4] shard state info found: [primary [true], allocation [[id=cemOqkVMTri0ZpaAE400sQ]]] 2022.11.10 18:12:30 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 131 2022.11.10 18:12:30 DEBUG 
es[][o.e.g.TransportNodesListGatewayStartedShards] [components][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/0] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [metadatas][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/N-wJ8qPTTTyTYoXbXg3F8g/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/N-wJ8qPTTTyTYoXbXg3F8g/0] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][0] shard state info found: [primary [true], allocation [[id=iDzORoXTRHWBllF7l5Vi-A]]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][3] shard state info found: [primary [true], allocation [[id=yviWUVOQQHWeSrk4yU72dg]]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/2] 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][2] shard state info found: [primary [true], allocation [[id=HP0H87wHSdm_mC_bEh6NLw]]] 2022.11.10 18:12:30 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 131 2022.11.10 18:12:30 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=131}]: took [469ms] done applying updated cluster state (version: 131, uuid: YQa-DvaFQAeLGtMJfMf02A) 2022.11.10 18:12:30 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=131} 2022.11.10 18:12:30 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [metadatas][0] shard state info found: [primary [true], allocation [[id=bwBI1A3QSsKXe0EE5b0kDw]]] 2022.11.10 18:12:30 INFO es[][o.e.g.GatewayService] recovered [7] indices into cluster_state 2022.11.10 18:12:30 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 131, uuid: YQa-DvaFQAeLGtMJfMf02A) for [local-gateway-elected-state] 2022.11.10 18:12:30 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(async_shard_fetch)] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][4]: found 1 allocation candidates of [views][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[yFj13EkETzO7NoXsHJA0qQ]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][4]: allocating [[views][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][3]: found 1 allocation candidates of [views][3], node[null], [P], 
recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[GMXXUqyWTgyZHRuUQEBQcg]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][3]: allocating [[views][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][2]: found 1 allocation candidates of [views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[Hjj-ITd1RhefzuwGKvoIVg]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][2]: allocating [[views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][0]: found 1 allocation candidates of [views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[EIDR4Z8sRbetiOgv_Gl6eg]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][0]: allocating [[views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][1]: found 1 allocation candidates of [views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[bVQrg9ZkS5yhnq7qEQFkGg]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][1]: throttling allocation [[views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to 
[[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4417a8a2]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: found 1 allocation candidates of [users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:27.685Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[7bV3f8r_QgiH2TJFI0Tp9Q]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: throttling allocation [[users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:27.685Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@384aa867]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[55kP9r7eSjaPkAx2Hput1A]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@53f2a017]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[2HS0XG1gQR6iBC5chKTVgg]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: throttling allocation [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5ce58f93]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[iYG4colRQrOYKH5Yc8PSZA]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], 
unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5cd48c03]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[kSUj9IavSMSvCn04gk5QwQ]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: throttling allocation [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@94dbc5e]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[RC24NNOeRQCfrMHX0C34QA]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: throttling allocation [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@62f5609d]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[40lmXE2jRQmia9KK1O9esg]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4a151d55]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[mX6GCP9-Qw6hq_eiNMarnQ]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: throttling allocation [[rules][1], 
node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6141cecd]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[1PnVxJHzRAyB7duGMdX5iQ]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@67b7d5ec]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@9832102]] on primary allocation 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]] 2022.11.10 18:12:30 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6a6e78f5]] on primary allocation 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, 
allocation_status[fetching_shard_data]] based on allocation ids: [[dMKfOEhTTS-1oGo7YS-QXQ]] 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7b1f9201]] on primary allocation 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]] 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4df27f4f]] on primary allocation 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]] 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@760965db]] on primary allocation 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]] 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3796c9c1]] on primary allocation 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of 
[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]] 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5115ca35]] on primary allocation 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]] 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@73868d59]] on primary allocation 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]] 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@126029d0]] on primary allocation 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]] 2022.11.10 18:12:31 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@747fe733]] 
on primary allocation 2022.11.10 18:12:31 DEBUG es[][o.e.c.s.MasterService] took [877ms] to compute cluster state update for [cluster_reroute(async_shard_fetch)] 2022.11.10 18:12:31 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [132], source [cluster_reroute(async_shard_fetch)] 2022.11.10 18:12:31 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [132] 2022.11.10 18:12:31 DEBUG es[][i.n.h.c.c.Brotli] brotli4j not in the classpath; Brotli support will be unavailable. 2022.11.10 18:12:32 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxCapacityPerThread: disabled 2022.11.10 18:12:32 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxSharedCapacityFactor: disabled 2022.11.10 18:12:32 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.linkCapacity: disabled 2022.11.10 18:12:32 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.ratio: disabled 2022.11.10 18:12:32 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.delayedQueue.ratio: disabled 2022.11.10 18:12:32 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [132] with uuid [i0gE5GBURqysMeCMRrkm8w], diff size [1410] 2022.11.10 18:12:32 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [602ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 18:12:32 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=132}]: execute 2022.11.10 18:12:32 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [132], source [Publication{term=10, version=132}] 2022.11.10 18:12:32 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 132 2022.11.10 18:12:32 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 132 2022.11.10 18:12:32 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[views/Dv2T0qmGRX2UjXF3FDCAMw]] creating index 2022.11.10 18:12:32 DEBUG es[][o.e.i.IndicesService] creating Index [[views/Dv2T0qmGRX2UjXF3FDCAMw]], shards [5]/[0] - reason [CREATE_INDEX] 2022.11.10 18:12:36 DEBUG es[][o.e.i.m.MapperService] [[views/Dv2T0qmGRX2UjXF3FDCAMw]] added mapping [view], source [{"view":{"dynamic":"false","properties":{"projects":{"type":"keyword"},"uuid":{"type":"keyword"}}}}] 2022.11.10 18:12:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][4] creating shard with primary term [6] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] [views][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/4] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] [views][4] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/4, shard=[views][4]}] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] creating shard_id [views][4] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][3] creating shard with primary term [6] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] [views][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/3], state path 
[/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/3] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] [views][3] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/3, shard=[views][3]}] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] creating shard_id [views][3] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][2] creating shard with primary term [6] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] [views][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/2] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] [views][2] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/2, shard=[views][2]}] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] creating shard_id [views][2] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][0] creating shard with primary term [6] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] [views][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/0] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] [views][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/0, shard=[views][0]}] 2022.11.10 18:12:36 DEBUG es[][o.e.i.IndexService] creating shard_id [views][0] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
2022.11.10 18:12:36 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 132
2022.11.10 18:12:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=132}]: took [4.2s] done applying updated cluster state (version: 132, uuid: i0gE5GBURqysMeCMRrkm8w)
2022.11.10 18:12:36 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=132}
2022.11.10 18:12:37 DEBUG es[][o.e.c.s.MasterService] took [239ms] to notify listeners on successful publication of cluster state (version: 132, uuid: i0gE5GBURqysMeCMRrkm8w) for [cluster_reroute(async_shard_fetch)]
2022.11.10 18:12:37 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(async_shard_fetch)]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][1]: found 1 allocation candidates of [views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bVQrg9ZkS5yhnq7qEQFkGg]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][1]: throttling allocation [[views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4aeb8d8b]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: found 1 allocation candidates of [users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:27.685Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[7bV3f8r_QgiH2TJFI0Tp9Q]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: throttling allocation [[users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:27.685Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5f919d8c]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[55kP9r7eSjaPkAx2Hput1A]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@30bb4557]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[2HS0XG1gQR6iBC5chKTVgg]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: throttling allocation [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@171951f5]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kSUj9IavSMSvCn04gk5QwQ]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: throttling allocation [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@b2074a8]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iYG4colRQrOYKH5Yc8PSZA]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2ceeb2bf]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[RC24NNOeRQCfrMHX0C34QA]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: throttling allocation [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1d20326e]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[mX6GCP9-Qw6hq_eiNMarnQ]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1be4da1b]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[40lmXE2jRQmia9KK1O9esg]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7e15bae2]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@36328a3b]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@55957df3]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@21d3929a]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[dMKfOEhTTS-1oGo7YS-QXQ]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@795eeb7b]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1PnVxJHzRAyB7duGMdX5iQ]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@52bcd94a]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@89dcfd5]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@65cb774f]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3223388e]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@11edb329]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5b53f59f]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]]
2022.11.10 18:12:37 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1e84bb4b]] on primary allocation
2022.11.10 18:12:37 DEBUG es[][o.e.c.s.MasterService] took [499ms] to compute cluster state update for [cluster_reroute(async_shard_fetch)]
2022.11.10 18:12:37 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(async_shard_fetch)]
2022.11.10 18:12:37 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:37 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:37 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:37 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:38 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:38 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:38 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:38 DEBUG es[][o.e.i.t.Translog] recovered local
translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:38 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=T63vCAXZRlOTlm_6Jc7WgQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=KPWA8KOzQ-qVwxBjJw6Jzg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=T63vCAXZRlOTlm_6Jc7WgQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=KPWA8KOzQ-qVwxBjJw6Jzg}]}] 2022.11.10 18:12:38 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=Kq7BjiMtSCCrJYG50SWb7w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=mA9bMTU4S1qbjwHqEfs8uA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=Kq7BjiMtSCCrJYG50SWb7w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=mA9bMTU4S1qbjwHqEfs8uA}]}] 2022.11.10 18:12:38 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=46dRMQDYRyKc0268f0X1yg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=pj7VEYXzSeq0DpVComlqWA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=46dRMQDYRyKc0268f0X1yg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=pj7VEYXzSeq0DpVComlqWA}]}] 2022.11.10 18:12:38 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=e8u_CbGkRxKcWgyRk8N7JA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=xBiOmM3qTj64lr1Ir3YLuQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=e8u_CbGkRxKcWgyRk8N7JA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=xBiOmM3qTj64lr1Ir3YLuQ}]}] 2022.11.10 18:12:39 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:39 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [2.4s] 2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:39 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:39 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [3s] 2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] received shard started for [StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:39 DEBUG es[][o.e.c.s.MasterService] executing 
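Note: the Checkpoint and Safe commit records above are the per-shard bookkeeping of local store recovery. numOps=0 and identical safe/last commit points suggest each translog was empty and nothing had to be replayed, so recovery reduces to reopening the existing Lucene commit. As a minimal illustrative sketch (not part of the log; it assumes the node's HTTP endpoint is reachable at 127.0.0.1:9001 and uses Python's requests library), the translog stats API can confirm this after startup:

    import requests

    # Sketch: confirm the recovered translogs are empty, as the numOps=0
    # checkpoints above suggest. Index name and port are taken from this log.
    resp = requests.get("http://127.0.0.1:9001/views/_stats/translog")
    translog = resp.json()["indices"]["views"]["total"]["translog"]
    print(translog["operations"], translog["uncommitted_operations"])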
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:39 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:39 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [2.5s]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] starting shard [views][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=yFj13EkETzO7NoXsHJA0qQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] received shard started for [StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] received shard started for [StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:39 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:39 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [2.4s]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] received shard started for [StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.MasterService] took [48ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [133], source [shard-started StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [133]
2022.11.10 18:12:39 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [133] with uuid [_Q5VMV5sSDeH6TVjwJlBXQ], diff size [1165]
2022.11.10 18:12:39 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [417ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=133}]: execute
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [133], source [Publication{term=10, version=133}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 133
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 133
2022.11.10 18:12:39 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] received shard started for [StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] received shard started for [StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] received shard started for [StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 133
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=133}]: took [208ms] done applying updated cluster state (version: 133, uuid: _Q5VMV5sSDeH6TVjwJlBXQ)
2022.11.10 18:12:39 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=133}
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.MasterService] took [2ms] to notify listeners on successful publication of cluster state (version: 133, uuid: _Q5VMV5sSDeH6TVjwJlBXQ) for [shard-started StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [yFj13EkETzO7NoXsHJA0qQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] starting shard [views][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=GMXXUqyWTgyZHRuUQEBQcg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] starting shard [views][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=Hjj-ITd1RhefzuwGKvoIVg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 18:12:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] starting shard [views][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=EIDR4Z8sRbetiOgv_Gl6eg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
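Note: the master folds all pending shard-started reports into single cluster state updates (versions 133 and 134 here), which is why each views shard appears twice in these batches: once reported directly after store recovery, and once more because the freshly applied state still showed the shard as initializing while it had already reached POST_RECOVERY, as the embedded messages say. A small sketch (illustrative only, same 127.0.0.1:9001 endpoint assumption as above) to watch the resulting shard states from outside:

    import requests

    # Sketch: _cat/shards reports the INITIALIZING -> STARTED transitions that
    # these shard-started tasks drive. The column list is chosen via "h".
    resp = requests.get(
        "http://127.0.0.1:9001/_cat/shards/views",
        params={"v": "true", "h": "index,shard,prirep,state,unassigned.reason"},
    )
    print(resp.text)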
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.MasterService] took [4ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [134], source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:12:39 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [134]
2022.11.10 18:12:39 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [134] with uuid [P0FDhKwPSb-YoQ-cCPfWwQ], diff size [1153]
2022.11.10 18:12:40 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [257ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=134}]: execute
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [134], source [Publication{term=10, version=134}]
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 134
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 134
2022.11.10 18:12:40 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:12:40 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:12:40 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 134
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=134}]: took [221ms] done applying updated cluster state (version: 134, uuid: P0FDhKwPSb-YoQ-cCPfWwQ)
2022.11.10 18:12:40 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=134}
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 134, uuid: P0FDhKwPSb-YoQ-cCPfWwQ) for [shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [GMXXUqyWTgyZHRuUQEBQcg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [Hjj-ITd1RhefzuwGKvoIVg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [EIDR4Z8sRbetiOgv_Gl6eg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][1]: found 1 allocation candidates of [views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bVQrg9ZkS5yhnq7qEQFkGg]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/Dv2T0qmGRX2UjXF3FDCAMw]][1]: allocating [[views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: found 1 allocation candidates of [users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:27.685Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[7bV3f8r_QgiH2TJFI0Tp9Q]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/qbsuDVZZRrqTwp7HSmmTgw]][0]: allocating [[users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:27.685Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[2HS0XG1gQR6iBC5chKTVgg]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][3]: allocating [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[55kP9r7eSjaPkAx2Hput1A]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][2]: allocating [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iYG4colRQrOYKH5Yc8PSZA]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@265ab6e6]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[RC24NNOeRQCfrMHX0C34QA]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: throttling allocation [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@67b11603]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kSUj9IavSMSvCn04gk5QwQ]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: throttling allocation [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1a9264de]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[mX6GCP9-Qw6hq_eiNMarnQ]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@33871eba]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[40lmXE2jRQmia9KK1O9esg]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@332b5d27]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1PnVxJHzRAyB7duGMdX5iQ]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2cfe1126]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4c1be270]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[dMKfOEhTTS-1oGo7YS-QXQ]]
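Note: the repeated "throttling allocation ... allocation_status[deciders_throttled]" records are expected during a full cluster restart. The throttling decider limits how many initial primary recoveries run concurrently on a node (cluster.routing.allocation.node_initial_primaries_recoveries, default 4), and in the reroute round above exactly four primaries ([views][1], [users][0], [issues][3], [issues][2]) move to "allocating" while the rest stay queued until a slot frees up. A hedged sketch of how one could inspect such a decision from outside, under the same endpoint assumption as the earlier snippets:

    import requests

    # Sketch: ask the cluster why a still-unassigned primary from this log is
    # waiting. For a throttled shard the deciders list contains a "throttling"
    # entry with decision THROTTLE.
    resp = requests.get(
        "http://127.0.0.1:9001/_cluster/allocation/explain",
        json={"index": "projectmeasures", "shard": 3, "primary": True},
    )
    explanation = resp.json()
    print(explanation.get("allocate_explanation"))
    for node in explanation.get("node_allocation_decisions", []):
        for decider in node.get("deciders", []):
            print(decider["decider"], decider["decision"], decider["explanation"])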
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@ae04426]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3ee06722]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@382b0790]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@70857893]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3593ecb2]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2dd0d04f]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@712e3ef2]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7fc7a285]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]]
2022.11.10 18:12:40 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@31d86bec]] on primary allocation
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.MasterService] took [345ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [135], source [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:40 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [135]
2022.11.10 18:12:40 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [135] with uuid [LvECjWC-Tvu0C0OaNpv3ng], diff size [1567]
2022.11.10 18:12:41 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [202ms]; wrote global metadata [false] and metadata for [3] indices and skipped [4] unchanged indices
2022.11.10 18:12:41 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=135}]: execute
2022.11.10 18:12:41 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [135], source [Publication{term=10, version=135}]
2022.11.10 18:12:41 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 135
2022.11.10 18:12:41 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 135
2022.11.10 18:12:41 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[issues/VZ6DTALkToeQgIcEr8PrtQ]] creating index
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndicesService] creating Index [[issues/VZ6DTALkToeQgIcEr8PrtQ]], shards [5]/[0] - reason [CREATE_INDEX]
2022.11.10 18:12:41 DEBUG es[][o.e.i.m.MapperService] [[issues/VZ6DTALkToeQgIcEr8PrtQ]] added mapping [auth] (source suppressed due to length, use TRACE level if needed)
2022.11.10 18:12:41 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[users/qbsuDVZZRrqTwp7HSmmTgw]] creating index
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndicesService] creating Index [[users/qbsuDVZZRrqTwp7HSmmTgw]], shards [1]/[0] - reason [CREATE_INDEX]
2022.11.10 18:12:41 DEBUG es[][o.e.i.m.MapperService] [[users/qbsuDVZZRrqTwp7HSmmTgw]] added mapping [user], source [{"user":{"dynamic":"false","properties":{"active":{"type":"boolean"},"email":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true},"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"login":{"type":"keyword","fields":{"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"name":{"type":"keyword","fields":{"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"scmAccounts":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"uuid":{"type":"keyword"}}}}]
2022.11.10 18:12:41 DEBUG es[][o.e.i.c.IndicesClusterStateService] [users][0] creating shard with primary term [6]
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndexService] [users][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/qbsuDVZZRrqTwp7HSmmTgw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/qbsuDVZZRrqTwp7HSmmTgw/0]
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndexService] [users][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/qbsuDVZZRrqTwp7HSmmTgw/0, shard=[users][0]}]
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndexService] creating shard_id [users][0]
2022.11.10 18:12:41 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:41 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:41 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:41 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:12:41 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][3] creating shard with primary term [6]
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndexService] [issues][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/3]
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndexService] [issues][3] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/3, shard=[issues][3]}]
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][3]
2022.11.10 18:12:41 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:41 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:41 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:41 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:41 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:41 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:12:41 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][2] creating shard with primary term [6]
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndexService] [issues][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/2]
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndexService] [issues][2] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/2, shard=[issues][2]}]
2022.11.10 18:12:41 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][2]
2022.11.10 18:12:41 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:41 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=qgQQIyaFRP6jzlvhAmpAWw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=veNPXwV0QUCKVfQz-nd6yw}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=qgQQIyaFRP6jzlvhAmpAWw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=veNPXwV0QUCKVfQz-nd6yw}]}]
2022.11.10 18:12:41 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:41 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:41 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:42 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:42 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [516ms]
2022.11.10 18:12:42 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:42 DEBUG es[][o.e.c.a.s.ShardStateAction] [users][0] received shard started for [StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:42 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:42 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:12:42 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][1] creating shard with primary term [6]
2022.11.10 18:12:42 DEBUG es[][o.e.i.IndexService] [views][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/1]
2022.11.10 18:12:42 DEBUG es[][o.e.i.IndexService] [views][1] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Dv2T0qmGRX2UjXF3FDCAMw/1, shard=[views][1]}]
2022.11.10 18:12:42 DEBUG es[][o.e.i.IndexService] creating shard_id [views][1]
2022.11.10 18:12:42 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:42 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:42 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=oOYhg9IcS1ymuvzl3esPww, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=ggglSbepSES53wK8dYCSGg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=oOYhg9IcS1ymuvzl3esPww, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=ggglSbepSES53wK8dYCSGg}]}]
2022.11.10 18:12:42 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:42 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:12:42 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:42 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:42 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 135
2022.11.10 18:12:42 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=135}]: took [1.8s] done applying updated cluster state (version: 135, uuid: LvECjWC-Tvu0C0OaNpv3ng)
2022.11.10 18:12:42 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=135}
2022.11.10 18:12:42 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:42 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 135, uuid: LvECjWC-Tvu0C0OaNpv3ng) for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:42 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:42 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:42 DEBUG es[][o.e.c.a.s.ShardStateAction] [users][0] starting shard [users][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=7bV3f8r_QgiH2TJFI0Tp9Q], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:27.685Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 18:12:42 DEBUG es[][o.e.c.s.MasterService] took [8ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:42 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [136], source [shard-started StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:42 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [136]
2022.11.10 18:12:43 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AwggODDaTY6LQoqXqv7MAQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=EW3PbiTdS_-RpRXJ5SbRmw}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AwggODDaTY6LQoqXqv7MAQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=EW3PbiTdS_-RpRXJ5SbRmw}]}]
2022.11.10 18:12:43 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:43 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [818ms]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] received shard started for [StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:43 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=2ZhINsZ9SQqQcb8bHMIucQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=PBVKxsRTS92ioqP89O3WSA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=2ZhINsZ9SQqQcb8bHMIucQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=PBVKxsRTS92ioqP89O3WSA}]}]
2022.11.10 18:12:43 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [136] with uuid [AJ0RE0eLSEeKXwWxXkCzuA], diff size [1053]
2022.11.10 18:12:43 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:43 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.8s]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] received shard started for [StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:43 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:43 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.5s]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] received shard started for [StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:43 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [454ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=136}]: execute
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [136], source [Publication{term=10, version=136}]
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 136
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 136
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] received shard started for [StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] received shard started for [StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:43 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] received shard started for [StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 136
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=136}]: took [0s] done applying updated cluster state (version: 136, uuid: AJ0RE0eLSEeKXwWxXkCzuA)
2022.11.10 18:12:43 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=136}
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 136, uuid: AJ0RE0eLSEeKXwWxXkCzuA) for [shard-started StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [7bV3f8r_QgiH2TJFI0Tp9Q], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] starting shard [views][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=bVQrg9ZkS5yhnq7qEQFkGg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] starting shard [issues][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=2HS0XG1gQR6iBC5chKTVgg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 18:12:43 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] starting shard [issues][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=55kP9r7eSjaPkAx2Hput1A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.MasterService] took [13ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [137], source [shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:12:43 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [137]
2022.11.10 18:12:43 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [137] with uuid [OqxHpK-_S22Ada_g6vmflA], diff size [1369]
2022.11.10 18:12:44 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [431ms]; wrote global metadata [false] and metadata for [2] indices and skipped [5] unchanged indices
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=137}]: execute
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [137], source [Publication{term=10, version=137}]
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 137
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 137
2022.11.10 18:12:44 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:12:44 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:12:44 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 137
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=137}]: took [201ms] done applying updated cluster state (version: 137, uuid: OqxHpK-_S22Ada_g6vmflA)
2022.11.10 18:12:44 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=137}
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 137, uuid: OqxHpK-_S22Ada_g6vmflA) for [shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [55kP9r7eSjaPkAx2Hput1A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [2HS0XG1gQR6iBC5chKTVgg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [bVQrg9ZkS5yhnq7qEQFkGg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iYG4colRQrOYKH5Yc8PSZA]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][1]: allocating [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[kSUj9IavSMSvCn04gk5QwQ]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][4]: allocating [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[RC24NNOeRQCfrMHX0C34QA]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/VZ6DTALkToeQgIcEr8PrtQ]][0]: allocating [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[mX6GCP9-Qw6hq_eiNMarnQ]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][1]: allocating [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[40lmXE2jRQmia9KK1O9esg]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@8f9716b]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[dMKfOEhTTS-1oGo7YS-QXQ]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@158ef90]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7a3691ba]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@362048ff]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1PnVxJHzRAyB7duGMdX5iQ]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7f3bd63c]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2ba2ea2f]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4ad09d8a]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@600ca525]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1cd08a1c]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5b095e24]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@27a9ff9]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]]
2022.11.10 18:12:44 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2c1ed75]] on primary allocation
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.MasterService] took [303ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [138], source [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:44 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [138]
2022.11.10 18:12:44 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [138] with uuid [1KKGyPNWQsGNxZDys5UnFg], diff size [1357]
2022.11.10 18:12:45 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [239ms]; wrote global metadata [false] and metadata for [2] indices and skipped [5] unchanged indices
2022.11.10 18:12:45 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=138}]: execute
2022.11.10 18:12:45 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [138], source [Publication{term=10, version=138}]
2022.11.10 18:12:45 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 138
2022.11.10 18:12:45 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 138
2022.11.10 18:12:45 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[rules/ZPzru4r4QR6_c8p1MyrcNg]] creating index
2022.11.10 18:12:45 DEBUG es[][o.e.i.IndicesService] creating Index [[rules/ZPzru4r4QR6_c8p1MyrcNg]], shards [2]/[0] - reason [CREATE_INDEX]
2022.11.10 18:12:45 DEBUG es[][o.e.i.m.MapperService] [[rules/ZPzru4r4QR6_c8p1MyrcNg]] added mapping [rule], source
[{"rule":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"activeRule_inheritance":{"type":"keyword"},"activeRule_ruleProfile":{"type":"keyword"},"activeRule_severity":{"type":"keyword"},"activeRule_uuid":{"type":"keyword"},"createdAt":{"type":"long"},"cwe":{"type":"keyword"},"htmlDesc":{"type":"keyword","index":false,"doc_values":false,"fields":{"english_html_analyzer":{"type":"text","norms":false,"analyzer":"english_html_analyzer"}}},"indexType":{"type":"keyword","doc_values":false},"internalKey":{"type":"keyword","index":false},"isExternal":{"type":"boolean"},"isTemplate":{"type":"boolean"},"join_rules":{"type":"join","eager_global_ordinals":true,"relations":{"rule":"activeRule"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"lang":{"type":"keyword"},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"owaspTop10":{"type":"keyword"},"owaspTop10-2021":{"type":"keyword"},"repo":{"type":"keyword","norms":true},"ruleKey":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"ruleUuid":{"type":"keyword"},"sansTop25":{"type":"keyword"},"severity":{"type":"keyword"},"sonarsourceSecurity":{"type":"keyword"},"status":{"type":"keyword"},"tags":{"type":"keyword","norms":true},"templateKey":{"type":"keyword"},"type":{"type":"keyword"},"updatedAt":{"type":"long"}}}}] 2022.11.10 18:12:45 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][1] creating shard with primary term [6] 2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] [issues][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/1] 2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] [issues][1] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/1, shard=[issues][1]}] 2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][1] 2022.11.10 18:12:45 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:45 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][4] creating shard with primary term [6] 2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] [issues][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/4]
2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] [issues][4] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/4, shard=[issues][4]}]
2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][4]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:45 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:45 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:45 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][0] creating shard with primary term [6]
2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] [issues][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/0]
2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] [issues][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/VZ6DTALkToeQgIcEr8PrtQ/0, shard=[issues][0]}]
2022.11.10 18:12:45 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=H8YcgTCURdyk5RQhL4Jp-w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=fs5pnmQPS7usuEnewsT6MQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=H8YcgTCURdyk5RQhL4Jp-w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=fs5pnmQPS7usuEnewsT6MQ}]}]
2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][0]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:45 DEBUG es[][o.e.i.c.IndicesClusterStateService] [rules][1] creating shard with primary term [6]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] [rules][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/1]
2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] [rules][1] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/1, shard=[rules][1]}]
2022.11.10 18:12:45 DEBUG es[][o.e.i.IndexService] creating shard_id [rules][1]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [454ms]
2022.11.10 18:12:45 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:45 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] received shard started for [StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:45 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:46 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:12:46 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:46 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:46 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:46 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 138
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=138}]: took [1s] done applying updated cluster state (version: 138, uuid: 1KKGyPNWQsGNxZDys5UnFg)
2022.11.10 18:12:46 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=138}
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 138, uuid: 1KKGyPNWQsGNxZDys5UnFg) for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] starting shard [issues][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=iYG4colRQrOYKH5Yc8PSZA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.MasterService] took [2ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [139], source [shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [139]
2022.11.10 18:12:46 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [139] with uuid [1V--G1KeQuqQxgYP7L5j4A], diff size [1178]
2022.11.10 18:12:46 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=GVPpEHuGRzWynf49awoAYQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=twC3-fCcRry3dRvDncvIQQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=GVPpEHuGRzWynf49awoAYQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=twC3-fCcRry3dRvDncvIQQ}]}]
2022.11.10 18:12:46 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:46 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [857ms]
2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] received shard started for [StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:46 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=nDy6d2bXT9KdNZkQv0yzPA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=vo-XBYstRhOO9vg6KQmHWQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=nDy6d2bXT9KdNZkQv0yzPA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=vo-XBYstRhOO9vg6KQmHWQ}]}]
2022.11.10 18:12:46 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [417ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=139}]: execute
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [139], source [Publication{term=10, version=139}]
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 139
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 139
2022.11.10 18:12:46 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:12:46 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=10, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=269, minTranslogGeneration=10, trimmedAboveSeqNo=-2}
2022.11.10 18:12:46 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:46 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.2s]
2022.11.10 18:12:46 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=10, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=269, minTranslogGeneration=10, trimmedAboveSeqNo=-2}
2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] received shard started for [StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] received shard started for [StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] received shard started for [StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 139
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=139}]: took [209ms] done applying updated cluster state (version: 139, uuid: 1V--G1KeQuqQxgYP7L5j4A)
2022.11.10 18:12:46 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=139}
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 139, uuid: 1V--G1KeQuqQxgYP7L5j4A) for [shard-started StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [iYG4colRQrOYKH5Yc8PSZA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:46 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [after existing store
recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] starting shard [issues][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=RC24NNOeRQCfrMHX0C34QA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:46 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] starting shard [issues][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=kSUj9IavSMSvCn04gk5QwQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]) 2022.11.10 18:12:46 DEBUG es[][o.e.c.s.MasterService] took [76ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], 
primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:46 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [140], source [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark 
shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:46 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [140] 2022.11.10 18:12:46 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [140] with uuid [Qo9JmSiFSw-51pjV1zQBIg], diff size [1149] 2022.11.10 18:12:47 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_6], userData[{es_version=7.17.5, history_uuid=ay94HNIbT4Cqi04CJWiRAA, local_checkpoint=269, max_seq_no=269, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=264, translog_uuid=A-DhRTu6R06UjeegIwa7MA}]}], last commit [CommitPoint{segment[segments_6], userData[{es_version=7.17.5, history_uuid=ay94HNIbT4Cqi04CJWiRAA, local_checkpoint=269, max_seq_no=269, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=264, translog_uuid=A-DhRTu6R06UjeegIwa7MA}]}] 2022.11.10 18:12:47 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [683ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=140}]: execute 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [140], source [Publication{term=10, version=140}] 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 140 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 140 2022.11.10 18:12:47 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:47 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 140 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=140}]: took [0s] done applying updated cluster state (version: 140, uuid: Qo9JmSiFSw-51pjV1zQBIg) 2022.11.10 18:12:47 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=140} 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 140, uuid: Qo9JmSiFSw-51pjV1zQBIg) for [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [master 
{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [RC24NNOeRQCfrMHX0C34QA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [kSUj9IavSMSvCn04gk5QwQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[40lmXE2jRQmia9KK1O9esg]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/ZPzru4r4QR6_c8p1MyrcNg]][0]: allocating [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[dMKfOEhTTS-1oGo7YS-QXQ]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][3]: allocating [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], 
unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1PnVxJHzRAyB7duGMdX5iQ]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][0]: allocating [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6d1359d4]] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2fe16d37]] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@d9433d3]] on primary allocation 
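The Checkpoint{...} summaries in this stretch describe an empty translog: numOps=0 and minSeqNo/maxSeqNo of -1 mean there are no operations to replay, so recovery from the local store needs no translog replay. A small, hypothetical parser for that rendering (field names are taken verbatim from the log; this is not an Elasticsearch API):

    # Parse the Checkpoint{...} summary printed by o.e.i.t.Translog lines.
    import re
    from dataclasses import dataclass

    @dataclass
    class Checkpoint:
        offset: int
        num_ops: int
        generation: int
        min_seq_no: int
        max_seq_no: int
        global_checkpoint: int
        min_translog_generation: int
        trimmed_above_seq_no: int

    def parse_checkpoint(text: str) -> Checkpoint:
        body = re.search(r"Checkpoint\{([^}]*)\}", text).group(1)
        fields = dict(kv.split("=") for kv in body.split(", "))
        return Checkpoint(
            offset=int(fields["offset"]),
            num_ops=int(fields["numOps"]),
            generation=int(fields["generation"]),
            min_seq_no=int(fields["minSeqNo"]),
            max_seq_no=int(fields["maxSeqNo"]),
            global_checkpoint=int(fields["globalCheckpoint"]),
            min_translog_generation=int(fields["minTranslogGeneration"]),
            trimmed_above_seq_no=int(fields["trimmedAboveSeqNo"]),
        )

    line = ("open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, "
            "generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, "
            "minTranslogGeneration=6, trimmedAboveSeqNo=-2}")
    cp = parse_checkpoint(line)
    assert cp.num_ops == 0 and cp.generation == 6  # an empty, clean translog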
2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1fdb4453]] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@398c5f9c]] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7f328dda]] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], 
at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6613761d]] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@429d8776]] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]] 2022.11.10 18:12:47 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@47f71f9c]] on primary allocation 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.MasterService] took [14ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [141], source [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:47 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [141] 2022.11.10 18:12:47 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [141] with uuid [5q0Xtx-YQ_irdxt0rnkurA], diff size [1317] 2022.11.10 18:12:48 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [201ms]; wrote global metadata [false] and metadata for [2] indices and skipped [5] unchanged indices 2022.11.10 18:12:48 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=141}]: execute 2022.11.10 18:12:48 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [141], source [Publication{term=10, version=141}] 2022.11.10 18:12:48 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 141 2022.11.10 18:12:48 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 141 2022.11.10 18:12:48 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]] creating index 2022.11.10 18:12:48 DEBUG es[][o.e.i.IndicesService] creating Index [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]], shards [5]/[0] - reason [CREATE_INDEX] 2022.11.10 
18:12:48 DEBUG es[][o.e.i.m.MapperService] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]] added mapping [auth], source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"analysedAt":{"type":"date","format":"date_time||epoch_second"},"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"indexType":{"type":"keyword","doc_values":false},"join_projectmeasures":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"projectmeasure"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"languages":{"type":"keyword","norms":true},"measures":{"type":"nested","properties":{"key":{"type":"keyword"},"value":{"type":"double"}}},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"nclocLanguageDistribution":{"type":"nested","properties":{"language":{"type":"keyword"},"ncloc":{"type":"integer"}}},"qualifier":{"type":"keyword"},"qualityGateStatus":{"type":"keyword","norms":true},"tags":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}] 2022.11.10 18:12:48 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][3] creating shard with primary term [6] 2022.11.10 18:12:48 DEBUG es[][o.e.i.IndexService] [projectmeasures][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/3] 2022.11.10 18:12:48 DEBUG es[][o.e.i.IndexService] [projectmeasures][3] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/3, shard=[projectmeasures][3]}] 2022.11.10 18:12:48 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][3] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:48 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][0] creating shard with primary term [6] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:12:48 DEBUG es[][o.e.i.IndexService] [projectmeasures][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/0] 2022.11.10 18:12:48 DEBUG es[][o.e.i.IndexService] [projectmeasures][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/0, shard=[projectmeasures][0]}] 2022.11.10 18:12:48 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][0] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
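Each physical line of this excerpt packs several entries together. To work with the log programmatically, a throwaway splitter along these lines (illustrative, not a SonarQube or Elasticsearch tool) recovers the timestamp, level, logger, and message of each entry:

    # Split concatenated "2022.11.10 HH:MM:SS LEVEL es[][logger] message" entries.
    import re

    ENTRY = re.compile(
        r"(?P<ts>\d{4}\.\d{2}\.\d{2} \d{2}:\d{2}:\d{2}) "   # 2022.11.10 18:12:48
        r"(?P<level>[A-Z]+) "                                # DEBUG / INFO / WARN
        r"es\[\]\[(?P<logger>[^\]]+)\] "                     # es[][o.e.i.IndexService]
        r"(?P<msg>.*?)(?=\d{4}\.\d{2}\.\d{2} \d{2}:\d{2}:\d{2} |\Z)",
        re.DOTALL,
    )

    def split_entries(raw: str):
        """Return one dict per log entry found in a run-together line."""
        return [m.groupdict() for m in ENTRY.finditer(raw)]

    raw = ("2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] "
           "2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] state: "
           "[CREATED]->[RECOVERING], reason [from store]")
    for e in split_entries(raw):
        print(e["ts"], e["level"], e["logger"], "-", e["msg"].strip())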
2022.11.10 18:12:48 DEBUG es[][o.e.i.c.IndicesClusterStateService] [rules][0] creating shard with primary term [6] 2022.11.10 18:12:48 DEBUG es[][o.e.i.IndexService] [rules][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/0] 2022.11.10 18:12:48 DEBUG es[][o.e.i.IndexService] [rules][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/ZPzru4r4QR6_c8p1MyrcNg/0, shard=[rules][0]}] 2022.11.10 18:12:48 DEBUG es[][o.e.i.IndexService] creating shard_id [rules][0] 2022.11.10 18:12:48 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:48 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:12:48 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:48 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 141 2022.11.10 18:12:48 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=141}]: took [427ms] done applying updated cluster state (version: 141, uuid: 5q0Xtx-YQ_irdxt0rnkurA) 2022.11.10 18:12:48 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=141} 2022.11.10 18:12:48 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 141, uuid: 5q0Xtx-YQ_irdxt0rnkurA) for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:48 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:48 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:48 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=UD1dSifsQ7ixBh1zBW2_HQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=dIfuYmiuQY6LNNdVHtxR-g}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=UD1dSifsQ7ixBh1zBW2_HQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=dIfuYmiuQY6LNNdVHtxR-g}]}] 2022.11.10 18:12:48 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=fkH2jczrQG-tN4LnAUwQrw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=o4RdVT1aQZWOdjVmvaC52Q}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=fkH2jczrQG-tN4LnAUwQrw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, 
translog_uuid=o4RdVT1aQZWOdjVmvaC52Q}]}] 2022.11.10 18:12:48 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=10, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=262, minTranslogGeneration=10, trimmedAboveSeqNo=-2} 2022.11.10 18:12:48 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=10, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=262, minTranslogGeneration=10, trimmedAboveSeqNo=-2} 2022.11.10 18:12:48 DEBUG es[][o.e.i.f.p.AbstractIndexOrdinalsFieldData] global-ordinals [join_rules#rule][222] took [110.7ms] 2022.11.10 18:12:49 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:49 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [3.3s] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] received shard started for [StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] starting shard [rules][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=mX6GCP9-Qw6hq_eiNMarnQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.MasterService] took [2ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [142], source [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [142] 2022.11.10 18:12:49 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state 
version [142] with uuid [_FiNwEJVTjqw7haxWoOrhQ], diff size [1092] 2022.11.10 18:12:48 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_6], userData[{es_version=7.17.5, history_uuid=rQ8wy4RFQKOWF5FBzTyM0A, local_checkpoint=262, max_seq_no=262, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=257, translog_uuid=RXIm6O9fSW2z7KP4FL2biQ}]}], last commit [CommitPoint{segment[segments_6], userData[{es_version=7.17.5, history_uuid=rQ8wy4RFQKOWF5FBzTyM0A, local_checkpoint=262, max_seq_no=262, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=257, translog_uuid=RXIm6O9fSW2z7KP4FL2biQ}]}] 2022.11.10 18:12:49 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:49 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [789ms] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] received shard started for [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:49 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:49 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [869ms] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] received shard started for [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:49 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [205ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=142}]: execute 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [142], source [Publication{term=10, version=142}] 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 142 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 142 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] received shard started for 
[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] received shard started for [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:49 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 142 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=142}]: took [200ms] done applying updated cluster state (version: 142, uuid: _FiNwEJVTjqw7haxWoOrhQ) 2022.11.10 18:12:49 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=142} 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 142, uuid: _FiNwEJVTjqw7haxWoOrhQ) for [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [mX6GCP9-Qw6hq_eiNMarnQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], 
allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] starting shard [projectmeasures][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=1PnVxJHzRAyB7duGMdX5iQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] starting shard [projectmeasures][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=dMKfOEhTTS-1oGo7YS-QXQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:49 DEBUG es[][o.e.i.f.p.AbstractIndexOrdinalsFieldData] global-ordinals [join_rules#rule][214] took [2.2ms] 2022.11.10 18:12:49 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:49 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.4s] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:49 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] received shard started for [StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.MasterService] took [217ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing 
store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [143], source [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} 
marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:49 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [143] 2022.11.10 18:12:49 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [143] with uuid [hj4W4lRrQ2CCL-e-p0l4rw], diff size [1115] 2022.11.10 18:12:50 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [639ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=143}]: execute 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [143], source [Publication{term=10, version=143}] 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 143 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 143 2022.11.10 18:12:50 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:50 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] received shard started for [StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:50 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:50 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 143 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=143}]: took [0s] done applying updated cluster state (version: 143, uuid: hj4W4lRrQ2CCL-e-p0l4rw) 2022.11.10 18:12:50 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=143} 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 143, uuid: hj4W4lRrQ2CCL-e-p0l4rw) for [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId 
[dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [dMKfOEhTTS-1oGo7YS-QXQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [1PnVxJHzRAyB7duGMdX5iQ], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:50 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] starting shard [rules][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=40lmXE2jRQmia9KK1O9esg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.MasterService] took [72ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [master 
{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [144], source [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [144] 2022.11.10 18:12:50 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [144] with uuid [qp9Wo8MLR4GvIforIOaVeQ], diff size [1073] 2022.11.10 18:12:50 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [200ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=144}]: execute 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [144], source [Publication{term=10, version=144}] 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 144 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 144 2022.11.10 18:12:50 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 144 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=144}]: took [0s] done applying updated cluster state (version: 144, uuid: qp9Wo8MLR4GvIforIOaVeQ) 2022.11.10 18:12:50 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=144} 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.MasterService] took [1ms] to 
notify listeners on successful publication of cluster state (version: 144, uuid: qp9Wo8MLR4GvIforIOaVeQ) for [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [40lmXE2jRQmia9KK1O9esg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cemOqkVMTri0ZpaAE400sQ]] 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][4]: allocating [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[gpYEHqhWQRebcYya4bLGXw]] 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][1]: allocating [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], 
at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[cfnefW7VREqduM2gqZw7DA]] 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/SWp38y_dTeW_i3HApsNVsQ]][2]: allocating [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[iDzORoXTRHWBllF7l5Vi-A]] 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][0]: allocating [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]] 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@26e7e29b]] on primary allocation 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]] 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6cd4c364]] on primary allocation 2022.11.10 18:12:50 DEBUG 
es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]] 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3427b20]] on primary allocation 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]] 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@239a5abf]] on primary allocation 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]] 2022.11.10 18:12:50 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6c68864e]] on primary allocation 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.MasterService] took [152ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [145], source [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:50 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [145] 2022.11.10 18:12:50 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [145] with uuid [p4U3VopMTq-IdPzXSAV1qQ], diff size [1380] 2022.11.10 18:12:51 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [413ms]; wrote global metadata [false] and metadata for [2] indices and skipped [5] unchanged indices 2022.11.10 18:12:51 DEBUG 
es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=145}]: execute 2022.11.10 18:12:51 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [145], source [Publication{term=10, version=145}] 2022.11.10 18:12:51 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 145 2022.11.10 18:12:51 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 145 2022.11.10 18:12:51 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[components/z8jTFy28Rq2m0AWGxQGyuw]] creating index 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndicesService] creating Index [[components/z8jTFy28Rq2m0AWGxQGyuw]], shards [5]/[0] - reason [CREATE_INDEX] 2022.11.10 18:12:51 DEBUG es[][o.e.i.m.MapperService] [[components/z8jTFy28Rq2m0AWGxQGyuw]] added mapping [auth], source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"indexType":{"type":"keyword","doc_values":false},"join_components":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"component"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"name":{"type":"text","store":true,"fields":{"search_grams_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"search_prefix_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_analyzer","search_analyzer":"search_prefix_analyzer"},"search_prefix_case_insensitive_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_case_insensitive_analyzer","search_analyzer":"search_prefix_case_insensitive_analyzer"},"sortable_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"sortable_analyzer","fielddata":true}},"term_vector":"with_positions_offsets","norms":false,"fielddata":true},"project_uuid":{"type":"keyword"},"qualifier":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}] 2022.11.10 18:12:51 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][4] creating shard with primary term [6] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] [projectmeasures][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/4] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] [projectmeasures][4] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/4, shard=[projectmeasures][4]}] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][4] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:51 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][1] creating shard with primary term [6] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] [projectmeasures][1] loaded data path 
[/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/1] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] [projectmeasures][1] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/1, shard=[projectmeasures][1]}] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][1] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:12:51 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:51 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:51 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AyFJKMzIRDe5VKc_Ay_L4g, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Kl9n0o9aRIW-KMGoK21NQw}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AyFJKMzIRDe5VKc_Ay_L4g, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Kl9n0o9aRIW-KMGoK21NQw}]}] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
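Note: the [projectmeasures] shards above are doing existing-store recovery: each opens the last safe Lucene commit, then replays the local translog from its checkpoint (here numOps=0, so there is nothing to replay). The same progress can be watched without DEBUG logging through the indices recovery API. A minimal sketch in Python, assuming the embedded Elasticsearch HTTP endpoint is 127.0.0.1:9001 (SonarQube's default search port):

    import json
    import urllib.request

    ES = "http://127.0.0.1:9001"  # assumption: SonarQube's embedded ES HTTP port

    def print_recovery(index: str) -> None:
        # GET /<index>/_recovery lists each shard's recovery source and stage;
        # store recoveries like the ones logged above report type EXISTING_STORE
        # and advance INIT -> INDEX -> VERIFY_INDEX -> TRANSLOG -> FINALIZE -> DONE.
        with urllib.request.urlopen(f"{ES}/{index}/_recovery") as resp:
            data = json.load(resp)
        for shard in data.get(index, {}).get("shards", []):
            print(shard["id"], shard["type"], shard["stage"],
                  shard.get("total_time_in_millis", "?"), "ms")

    print_recovery("projectmeasures")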
2022.11.10 18:12:51 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][2] creating shard with primary term [6] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] [projectmeasures][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/2] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] [projectmeasures][2] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/SWp38y_dTeW_i3HApsNVsQ/2, shard=[projectmeasures][2]}] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][2] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [251ms] 2022.11.10 18:12:51 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:51 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] received shard started for [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:12:51 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][0] creating shard with primary term [6] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] [components][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/0] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] [components][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/0, shard=[components][0]}] 2022.11.10 18:12:51 DEBUG es[][o.e.i.IndexService] creating shard_id [components][0] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:51 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:51 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:51 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
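Each recovery that completes below flips the shard [RECOVERING]->[POST_RECOVERY], sends internal:cluster/shard/started to the master, and the next published cluster state (versions 146 and 147) marks it STARTED. Rather than following that handshake line by line, a readiness check can simply block on cluster health; a sketch under the same endpoint assumption:

    import urllib.request

    ES = "http://127.0.0.1:9001"  # assumption: SonarQube's embedded ES HTTP port

    # Elasticsearch holds this request until the cluster is green or 60s pass
    # (on timeout it still returns 200, with "timed_out": true in the body).
    with urllib.request.urlopen(
            f"{ES}/_cluster/health?wait_for_status=green&timeout=60s") as resp:
        print(resp.read().decode())

    # One row per shard: index, shard, prirep, state (STARTED once recovered), node.
    with urllib.request.urlopen(f"{ES}/_cat/shards?v") as resp:
        print(resp.read().decode())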
2022.11.10 18:12:51 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 145 2022.11.10 18:12:51 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=145}]: took [659ms] done applying updated cluster state (version: 145, uuid: p4U3VopMTq-IdPzXSAV1qQ) 2022.11.10 18:12:51 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=145} 2022.11.10 18:12:51 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 145, uuid: p4U3VopMTq-IdPzXSAV1qQ) for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:51 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:51 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] starting shard [projectmeasures][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=cemOqkVMTri0ZpaAE400sQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:51 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:51 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [146], source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:51 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [146] 2022.11.10 18:12:51 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [146] with uuid [Y5KQ-nPLQkej7vcW0iCfew], diff size [1186] 2022.11.10 18:12:52 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=65wTtjk_QG63BHUANNPapQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Zy5kvxUHQ7WAzcO6KJeKwA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=65wTtjk_QG63BHUANNPapQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Zy5kvxUHQ7WAzcO6KJeKwA}]}] 2022.11.10 18:12:52 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, 
generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:52 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:52 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:52 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2} 2022.11.10 18:12:52 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=JUloPkDpSYS5ZP_rreqL4w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=oyC_YB9-RU-LgnK3o6Upow}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=JUloPkDpSYS5ZP_rreqL4w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=oyC_YB9-RU-LgnK3o6Upow}]}] 2022.11.10 18:12:52 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:52 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [660ms] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] received shard started for [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:52 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:52 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [586ms] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:52 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AnRshew2R76U-976VFG9Pg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=85vWB2uCS8u_lINjv7DAjA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=AnRshew2R76U-976VFG9Pg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=85vWB2uCS8u_lINjv7DAjA}]}] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] received shard started for [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:52 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], 
reason [post recovery from shard_store] 2022.11.10 18:12:52 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [520ms] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] received shard started for [StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:52 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [414ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=146}]: execute 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [146], source [Publication{term=10, version=146}] 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 146 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 146 2022.11.10 18:12:52 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] received shard started for [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] received shard started for [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId 
[[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] received shard started for [StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 146 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=146}]: took [200ms] done applying updated cluster state (version: 146, uuid: Y5KQ-nPLQkej7vcW0iCfew) 2022.11.10 18:12:52 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=146} 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 146, uuid: Y5KQ-nPLQkej7vcW0iCfew) for [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [cemOqkVMTri0ZpaAE400sQ], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as 
started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] starting shard [projectmeasures][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=gpYEHqhWQRebcYya4bLGXw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] starting shard [projectmeasures][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=cfnefW7VREqduM2gqZw7DA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:52 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] starting shard [components][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=iDzORoXTRHWBllF7l5Vi-A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.MasterService] took [123ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId 
[iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [147], source [shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [master 
{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:52 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [147] 2022.11.10 18:12:52 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [147] with uuid [QjH4HlNJRfC6ymHvxPvU7g], diff size [1359] 2022.11.10 18:12:53 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [443ms]; wrote global metadata [false] and metadata for [2] indices and skipped [5] unchanged indices 2022.11.10 18:12:53 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=147}]: execute 2022.11.10 18:12:53 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [147], source [Publication{term=10, version=147}] 2022.11.10 18:12:53 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 147 2022.11.10 18:12:53 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with 
version 147 2022.11.10 18:12:53 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:53 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:53 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:53 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 147 2022.11.10 18:12:53 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=147}]: took [0s] done applying updated cluster state (version: 147, uuid: QjH4HlNJRfC6ymHvxPvU7g) 2022.11.10 18:12:53 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=147} 2022.11.10 18:12:53 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 147, uuid: QjH4HlNJRfC6ymHvxPvU7g) for [shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [iDzORoXTRHWBllF7l5Vi-A], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [gpYEHqhWQRebcYya4bLGXw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], 
primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [cfnefW7VREqduM2gqZw7DA], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:53 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HP0H87wHSdm_mC_bEh6NLw]] 2022.11.10 18:12:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][2]: allocating [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[N1eAWJPOSXWHssmbXnJsZg]] 2022.11.10 18:12:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][1]: allocating [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[m26EZ-e2ShaXq3vCmyTSxA]] 2022.11.10 18:12:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][4]: allocating [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to 
[{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 18:12:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[yviWUVOQQHWeSrk4yU72dg]]
2022.11.10 18:12:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/z8jTFy28Rq2m0AWGxQGyuw]][3]: allocating [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation
2022.11.10 18:12:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]]
2022.11.10 18:12:53 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7bd56b9f]] on primary allocation
2022.11.10 18:12:53 DEBUG es[][o.e.c.s.MasterService] took [432ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:53 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [148], source [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:53 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [148]
2022.11.10 18:12:53 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [148] with uuid [rKq7ujm8SuSody2yt22uNw], diff size [1182]
2022.11.10 18:12:53 DEBUG es[][o.e.m.j.JvmGcMonitorService] [gc][43] overhead, spent [212ms] collecting in the last [1s]
2022.11.10 18:12:54 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [452ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:12:54 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=148}]: execute
2022.11.10 18:12:54 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [148], source [Publication{term=10, version=148}]
2022.11.10 18:12:54 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 148
2022.11.10 18:12:54 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 148
2022.11.10 18:12:54 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][1] creating shard with primary term [6]
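The allocation_status[deciders_throttled] entries above are the gateway allocator rate-limiting concurrent primary recoveries, not rejecting them: [metadatas][0] has an in-sync on-disk copy but has to wait for a recovery slot. A minimal sketch of inspecting the same decision over the REST API, assuming the node's HTTP endpoint is reachable at 127.0.0.1:9001 (adjust ES_URL to the actual binding):

    # Ask the master to explain the allocation of [metadatas][0].
    import json
    from urllib.request import Request, urlopen

    ES_URL = "http://127.0.0.1:9001"  # assumption: this node's HTTP address

    body = json.dumps({"index": "metadatas", "shard": 0, "primary": True}).encode("utf-8")
    req = Request(ES_URL + "/_cluster/allocation/explain", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        explain = json.load(resp)

    # While the deciders are throttling, can_allocate is reported as "throttled";
    # once the shard is assigned, the response describes its current node instead.
    print(explain.get("can_allocate"), explain.get("allocate_explanation"))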
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] [components][1] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/1], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/1]
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] [components][1] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/1, shard=[components][1]}]
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] creating shard_id [components][1]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:54 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][4] creating shard with primary term [6]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] [components][4] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/4], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/4]
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] [components][4] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/4, shard=[components][4]}]
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] creating shard_id [components][4]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:54 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][3] creating shard with primary term [6]
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] [components][3] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/3], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/3]
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] [components][3] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/3, shard=[components][3]}]
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] creating shard_id [components][3]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
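Each [CREATED]->[RECOVERING] transition above is an existing-store recovery of a [components] primary. The same progress is visible from outside through the cat recovery API; a sketch under the same endpoint assumption as above:

    # List recovery progress per shard; type "existing_store" matches the
    # "recovery from store" entries in this log.
    import json
    from urllib.request import urlopen

    url = "http://127.0.0.1:9001/_cat/recovery?format=json&h=index,shard,type,stage,time"
    with urlopen(url) as resp:
        for row in json.load(resp):
            # stage runs init -> index -> translog -> finalize -> done
            print(row["index"], row["shard"], row["type"], row["stage"], row["time"])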
2022.11.10 18:12:54 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][2] creating shard with primary term [6]
2022.11.10 18:12:54 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:54 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] [components][2] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/2], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/2]
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] [components][2] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/z8jTFy28Rq2m0AWGxQGyuw/2, shard=[components][2]}]
2022.11.10 18:12:54 DEBUG es[][o.e.i.IndexService] creating shard_id [components][2]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:12:54 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:12:54 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 148
2022.11.10 18:12:54 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=148}]: took [407ms] done applying updated cluster state (version: 148, uuid: rKq7ujm8SuSody2yt22uNw)
2022.11.10 18:12:54 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=148}
2022.11.10 18:12:54 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 148, uuid: rKq7ujm8SuSody2yt22uNw) for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:54 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:54 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:54 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:54 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
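The Checkpoint{...} dumps above come from each shard's translog checkpoint file: numOps=0 with maxSeqNo=-1 means there are no operations to replay, so recovery only needs to open the last Lucene commit. The same counters are exposed per index through the stats API; a sketch, same assumed endpoint as above:

    # Translog stats for the index being recovered; uncommitted operations
    # are what a future recovery would have to replay.
    import json
    from urllib.request import urlopen

    with urlopen("http://127.0.0.1:9001/components/_stats/translog") as resp:
        stats = json.load(resp)

    tl = stats["indices"]["components"]["total"]["translog"]
    print(tl["operations"], tl["uncommitted_operations"], tl["size_in_bytes"])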
2022.11.10 18:12:54 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=Zb6npGWJRLmZx5WdmvT2Ag, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Q6IFvbUKQxqdcluDGGemcg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=Zb6npGWJRLmZx5WdmvT2Ag, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Q6IFvbUKQxqdcluDGGemcg}]}]
2022.11.10 18:12:54 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:54 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=6, trimmedAboveSeqNo=-2}
2022.11.10 18:12:54 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=hYweKZ-rRzuAmXC2I7lT8Q, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=vn_3dqi7ScCkAWZ94yMXRg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=hYweKZ-rRzuAmXC2I7lT8Q, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=vn_3dqi7ScCkAWZ94yMXRg}]}]
2022.11.10 18:12:55 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:55 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [656ms]
2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] received shard started for [StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:55 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] starting shard [components][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=m26EZ-e2ShaXq3vCmyTSxA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 18:12:55 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:12:55 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [542ms]
2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]
2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] received shard started for [StartedShardEntry{shardId [[components][2]],
allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:55 DEBUG es[][o.e.c.s.MasterService] took [78ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:55 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [149], source [shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:55 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [149] 2022.11.10 18:12:55 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [149] with uuid [lMyz8FBKRYeYX9YZ_lUEew], diff size [1185] 2022.11.10 18:12:55 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=aZ5QMMNLR3OjjtT9WV1V9w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=khYidmzQSlKnXvnRph5YPQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=aZ5QMMNLR3OjjtT9WV1V9w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=khYidmzQSlKnXvnRph5YPQ}]}] 2022.11.10 18:12:55 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:55 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [714ms] 2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] received shard started for [StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:55 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=dlOCmUqfSq2uMWhUvrPJFQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=4Pe15kvVTBKSWFZrw1JOvg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=dlOCmUqfSq2uMWhUvrPJFQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=4Pe15kvVTBKSWFZrw1JOvg}]}] 2022.11.10 18:12:55 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:55 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1s] 2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][1]], 
allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] received shard started for [StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:55 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [671ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 18:12:55 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=149}]: execute 2022.11.10 18:12:55 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [149], source [Publication{term=10, version=149}] 2022.11.10 18:12:55 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 149 2022.11.10 18:12:55 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 149 2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] received shard started for [StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:55 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] received shard started for [StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:55 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] 
received shard started for [StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:12:55 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 149 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=149}]: took [421ms] done applying updated cluster state (version: 149, uuid: lMyz8FBKRYeYX9YZ_lUEew) 2022.11.10 18:12:56 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=149} 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.MasterService] took [2ms] to notify listeners on successful publication of cluster state (version: 149, uuid: lMyz8FBKRYeYX9YZ_lUEew) for [shard-started StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [m26EZ-e2ShaXq3vCmyTSxA], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][1]], 
allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:56 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] starting shard [components][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=HP0H87wHSdm_mC_bEh6NLw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:56 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] starting shard [components][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=yviWUVOQQHWeSrk4yU72dg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:56 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] starting shard [components][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=N1eAWJPOSXWHssmbXnJsZg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.041Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]) 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], 
shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [150], source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as 
started]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [150] 2022.11.10 18:12:56 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [150] with uuid [dFGXT6hTRbiqNIBQNIu5Yw], diff size [1155] 2022.11.10 18:12:56 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [431ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=150}]: execute 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [150], source [Publication{term=10, version=150}] 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 150 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 150 2022.11.10 18:12:56 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:56 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:56 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 150 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=150}]: took 
[0s] done applying updated cluster state (version: 150, uuid: dFGXT6hTRbiqNIBQNIu5Yw) 2022.11.10 18:12:56 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=150} 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 150, uuid: dFGXT6hTRbiqNIBQNIu5Yw) for [shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [yviWUVOQQHWeSrk4yU72dg], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [N1eAWJPOSXWHssmbXnJsZg], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][2]], allocationId [HP0H87wHSdm_mC_bEh6NLw], primary term [6], message [master {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]] 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.MasterService] executing 
cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:56 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[bwBI1A3QSsKXe0EE5b0kDw]] 2022.11.10 18:12:56 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]][0]: allocating [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}] on primary allocation 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.MasterService] took [35ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [151], source [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:56 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [151] 2022.11.10 18:12:56 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [151] with uuid [LvLXxfb4TNWSn92MDyz80Q], diff size [1078] 2022.11.10 18:12:58 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [1081ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices 2022.11.10 18:12:58 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=151}]: execute 2022.11.10 18:12:58 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [151], source [Publication{term=10, version=151}] 2022.11.10 18:12:58 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 151 2022.11.10 18:12:58 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 151 2022.11.10 18:12:58 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]] creating index 2022.11.10 18:12:58 DEBUG es[][o.e.i.IndicesService] creating Index [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]], shards [1]/[0] - reason [CREATE_INDEX] 2022.11.10 18:12:58 DEBUG es[][o.e.i.m.MapperService] [[metadatas/N-wJ8qPTTTyTYoXbXg3F8g]] added mapping [metadata], source [{"metadata":{"dynamic":"false","properties":{"value":{"type":"keyword","index":false,"store":true,"norms":true}}}}] 2022.11.10 18:12:58 DEBUG es[][o.e.i.c.IndicesClusterStateService] [metadatas][0] creating shard with primary term [6] 2022.11.10 18:12:58 DEBUG es[][o.e.i.IndexService] [metadatas][0] loaded data path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/N-wJ8qPTTTyTYoXbXg3F8g/0], state path [/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/N-wJ8qPTTTyTYoXbXg3F8g/0] 2022.11.10 18:12:58 DEBUG es[][o.e.i.IndexService] [metadatas][0] creating using an existing path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/N-wJ8qPTTTyTYoXbXg3F8g/0, shard=[metadatas][0]}] 2022.11.10 18:12:58 DEBUG es[][o.e.i.IndexService] creating shard_id [metadatas][0] 2022.11.10 18:12:58 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval 
[10s] 2022.11.10 18:12:58 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:12:58 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:12:58 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:12:58 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 151 2022.11.10 18:12:58 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=151}]: took [451ms] done applying updated cluster state (version: 151, uuid: LvLXxfb4TNWSn92MDyz80Q) 2022.11.10 18:12:58 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=151} 2022.11.10 18:12:58 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 151, uuid: LvLXxfb4TNWSn92MDyz80Q) for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:12:58 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=7, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=16, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2022.11.10 18:12:58 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=7, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=16, minTranslogGeneration=7, trimmedAboveSeqNo=-2} 2022.11.10 18:12:58 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{es_version=7.17.5, history_uuid=_1n2s2ucRj6ZqrUkmzvO8Q, local_checkpoint=16, max_seq_no=16, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=0, translog_uuid=KUxcsAtZSbesfWRcAgdGWA}]}], last commit [CommitPoint{segment[segments_3], userData[{es_version=7.17.5, history_uuid=_1n2s2ucRj6ZqrUkmzvO8Q, local_checkpoint=16, max_seq_no=16, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=0, translog_uuid=KUxcsAtZSbesfWRcAgdGWA}]}] 2022.11.10 18:12:59 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:12:59 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [820ms] 2022.11.10 18:12:59 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [NoJ4WfHARK-DEgu5GKXOeg] for shard entry [StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:59 DEBUG es[][o.e.c.a.s.ShardStateAction] [metadatas][0] received shard started for [StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}] 2022.11.10 18:12:59 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]] 2022.11.10 18:12:59 DEBUG es[][o.e.c.a.s.ShardStateAction] [metadatas][0] starting shard [metadatas][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=bwBI1A3QSsKXe0EE5b0kDw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-11-10T17:12:28.042Z], delayed=false, 
allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}])
2022.11.10 18:12:59 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]]]).
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.MasterService] took [89ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [152], source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [152]
2022.11.10 18:12:59 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [152] with uuid [6vGEt8ksQNekg2q8ab3hZw], diff size [1055]
2022.11.10 18:12:59 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [418ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=152}]: execute
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [152], source [Publication{term=10, version=152}]
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 152
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 152
2022.11.10 18:12:59 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 152
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=10, version=152}]: took [426ms] done applying updated cluster state (version: 152, uuid: 6vGEt8ksQNekg2q8ab3hZw)
2022.11.10 18:12:59 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=10, version=152}
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 152, uuid: 6vGEt8ksQNekg2q8ab3hZw) for [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [bwBI1A3QSsKXe0EE5b0kDw], primary term [6], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
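The RED-to-GREEN transition above is the point where the last primary, [metadatas][0], started. Scripts that must wait for this moment can block on the health API instead of tailing the log; a sketch, same assumed endpoint as above:

    # Block (up to 30s) until the cluster reports green.
    import json
    from urllib.request import urlopen

    url = "http://127.0.0.1:9001/_cluster/health?wait_for_status=green&timeout=30s"
    with urlopen(url) as resp:
        health = json.load(resp)
    print(health["status"], health["active_shards"], health["unassigned_shards"])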
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.MasterService] took [2ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:12:59 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:13:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}]
2022.11.10 18:13:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}]
2022.11.10 18:13:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}]
2022.11.10 18:13:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}]
2022.11.10 18:13:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}]
2022.11.10 18:13:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}]
2022.11.10 18:13:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}]
2022.11.10 18:13:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}]
2022.11.10 18:13:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}]
2022.11.10 18:13:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1,
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}]
2022.11.10 18:13:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}]
2022.11.10 18:13:15 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=263, timestamp=1668097167652, source='peer recovery'}}}]
2022.11.10 18:13:15 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=270, timestamp=1668097167652, source='peer recovery'}}}]
2022.11.10 18:13:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}]
2022.11.10 18:13:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}]
2022.11.10 18:13:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}]
2022.11.10 18:13:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}]
2022.11.10 18:13:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}]
2022.11.10 18:13:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}]
2022.11.10 18:13:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}]
2022.11.10 18:13:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}]
2022.11.10 18:13:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}]
2022.11.10 18:13:21 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}]
2022.11.10 18:13:27 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][75] overhead, spent [392ms] collecting in the last [1s]
2022.11.10 18:13:28 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}]
2022.11.10 18:13:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}]
2022.11.10 18:13:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}]
2022.11.10 18:13:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}]
2022.11.10 18:13:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}]
2022.11.10 18:13:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}]
2022.11.10 18:13:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}]
2022.11.10 18:13:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}]
2022.11.10 18:13:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}]
2022.11.10 18:13:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}]
2022.11.10 18:13:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}]
2022.11.10 18:13:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}]
[... the same per-shard ReplicationTracker messages repeat on a ~30-second cycle from 18:13:45 through 18:17:29, with only second-level drift in the wall-clock timestamps; the non-repeating entries in that window are kept below ...]
2022.11.10 18:14:08 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded
2022.11.10 18:16:09 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 18:17:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:17:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 18:17:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:17:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:17:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 18:17:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 18:17:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:17:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 18:17:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:17:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:17:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:41 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [views][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=EIDR4Z8sRbetiOgv_Gl6eg] on inactive 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:41 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [views][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=Hjj-ITd1RhefzuwGKvoIVg] on inactive 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:41 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [views][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=GMXXUqyWTgyZHRuUQEBQcg] on inactive 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:41 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [views][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=yFj13EkETzO7NoXsHJA0qQ] on inactive 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:41 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [users][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=7bV3f8r_QgiH2TJFI0Tp9Q] on inactive 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 18:17:41 DEBUG es[][o.e.i.s.ReplicationTracker] no 
retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 18:17:45 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=263, timestamp=1668097167652, source='peer recovery'}}}] 2022.11.10 18:17:45 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=270, timestamp=1668097167652, source='peer recovery'}}}] 2022.11.10 18:17:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [rules][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=mX6GCP9-Qw6hq_eiNMarnQ] on inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [issues][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=RC24NNOeRQCfrMHX0C34QA] on inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [issues][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=iYG4colRQrOYKH5Yc8PSZA] on inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [issues][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=55kP9r7eSjaPkAx2Hput1A] on inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [issues][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=2HS0XG1gQR6iBC5chKTVgg] on inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [issues][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=kSUj9IavSMSvCn04gk5QwQ] on inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:46 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [views][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=bVQrg9ZkS5yhnq7qEQFkGg] on inactive 2022.11.10 18:17:48 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 18:17:48 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 18:17:48 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases 
[RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 18:17:48 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 18:17:48 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 18:17:51 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:51 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [rules][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=40lmXE2jRQmia9KK1O9esg] on inactive 2022.11.10 18:17:51 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:51 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=1PnVxJHzRAyB7duGMdX5iQ] on inactive 2022.11.10 18:17:51 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:51 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=dMKfOEhTTS-1oGo7YS-QXQ] on inactive 2022.11.10 18:17:51 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:51 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=cemOqkVMTri0ZpaAE400sQ] on inactive 2022.11.10 18:17:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:17:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 18:17:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:17:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:17:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 18:17:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:56 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=iDzORoXTRHWBllF7l5Vi-A] on inactive 2022.11.10 18:17:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:56 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=N1eAWJPOSXWHssmbXnJsZg] on inactive 2022.11.10 18:17:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:56 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=HP0H87wHSdm_mC_bEh6NLw] on inactive 2022.11.10 18:17:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:56 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][3], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=yviWUVOQQHWeSrk4yU72dg] on inactive 2022.11.10 18:17:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:56 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][4], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=m26EZ-e2ShaXq3vCmyTSxA] on inactive 2022.11.10 18:17:56 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:57 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][1], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=gpYEHqhWQRebcYya4bLGXw] on inactive 2022.11.10 18:17:57 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:17:57 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][2], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=cfnefW7VREqduM2gqZw7DA] on inactive 2022.11.10 18:17:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 18:18:02 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:18:02 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [metadatas][0], node[NoJ4WfHARK-DEgu5GKXOeg], [P], s[STARTED], a[id=bwBI1A3QSsKXe0EE5b0kDw] on inactive 2022.11.10 18:18:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:18:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 18:18:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, 
timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:18:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:18:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 18:18:09 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded 2022.11.10 18:18:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 18:18:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:18:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 18:18:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:18:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:18:11 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 18:18:15 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=263, timestamp=1668097167652, source='peer recovery'}}}] 2022.11.10 18:18:15 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=270, 
timestamp=1668097167652, source='peer recovery'}}}] 2022.11.10 18:18:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 18:18:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 18:18:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 18:18:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 18:18:18 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 18:18:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:18:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 18:18:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:18:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:18:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 18:18:29 DEBUG 
es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 18:18:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:18:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 18:18:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:18:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:18:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 18:18:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:18:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 18:18:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 18:18:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:18:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention 
leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:18:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 18:18:45 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=263, timestamp=1668097167652, source='peer recovery'}}}] 2022.11.10 18:18:45 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=270, timestamp=1668097167652, source='peer recovery'}}}] 2022.11.10 18:18:49 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 18:18:49 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 18:18:49 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 18:18:49 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 18:18:49 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 18:18:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:18:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 18:18:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:18:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:18:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 18:18:59 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 18:19:07 DEBUG es[][o.e.m.j.JvmGcMonitorService] [gc][young][395][30] duration [601ms], collections [1]/[1.4s], total [601ms]/[4.8s], memory [83.6mb]->[63mb]/[512mb], all_pools {[young] [21mb]->[0b]/[0b]}{[old] [59.6mb]->[60mb]/[512mb]}{[survivor] [3mb]->[3mb]/[0b]} 2022.11.10 18:19:07 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][395] overhead, spent [601ms] collecting in the last [1.4s] 2022.11.10 18:19:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:19:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}] 2022.11.10 18:19:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:19:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}] 2022.11.10 18:19:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}] 2022.11.10 18:19:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}] 2022.11.10 18:19:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:19:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}] 2022.11.10 18:19:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:19:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}] 2022.11.10 18:19:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}] 2022.11.10 18:19:16 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=263, timestamp=1668097167652, source='peer recovery'}}}] 2022.11.10 18:19:16 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=5, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=270, timestamp=1668097167652, source='peer recovery'}}}] 2022.11.10 18:19:19 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 18:19:19 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 18:19:19 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}] 2022.11.10 18:19:19 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}] 2022.11.10 18:19:19 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}] 2022.11.10 18:19:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:19:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}] 2022.11.10 18:19:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:19:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}] 2022.11.10 18:19:24 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}] 2022.11.10 18:19:29 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=17, timestamp=1668083331242, source='peer recovery'}}}] 2022.11.10 18:19:30 DEBUG es[][o.e.i.f.p.AbstractIndexOrdinalsFieldData] global-ordinals [join_rules#rule][222] took [3.9ms] 2022.11.10 18:19:32 DEBUG es[][o.e.i.f.p.AbstractIndexOrdinalsFieldData] global-ordinals [join_rules#rule][214] took [14ms] 
2022.11.10 18:19:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}]
2022.11.10 18:19:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083308598, source='peer recovery'}}}]
2022.11.10 18:19:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}]
2022.11.10 18:19:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083309023, source='peer recovery'}}}]
2022.11.10 18:19:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083310336, source='peer recovery'}}}]
2022.11.10 18:19:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083303471, source='peer recovery'}}}]
2022.11.10 18:19:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}]
2022.11.10 18:19:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083294815, source='peer recovery'}}}]
2022.11.10 18:19:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}]
2022.11.10 18:19:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083295757, source='peer recovery'}}}]
2022.11.10 18:19:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083297866, source='peer recovery'}}}]
2022.11.10 18:19:46 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=6, version=6, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=270, timestamp=1668100786053, source='peer recovery'}}}]
2022.11.10 18:19:46 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=6, version=6, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=275, timestamp=1668100786053, source='peer recovery'}}}]
2022.11.10 18:19:49 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}]
2022.11.10 18:19:49 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}]
2022.11.10 18:19:49 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271879, source='peer recovery'}}}]
2022.11.10 18:19:49 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083271409, source='peer recovery'}}}]
2022.11.10 18:19:49 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083273781, source='peer recovery'}}}]
2022.11.10 18:19:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}]
2022.11.10 18:19:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083260441, source='peer recovery'}}}]
2022.11.10 18:19:54 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}]
2022.11.10 18:19:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083261424, source='peer recovery'}}}]
2022.11.10 18:19:55 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/NoJ4WfHARK-DEgu5GKXOeg=RetentionLease{id='peer_recovery/NoJ4WfHARK-DEgu5GKXOeg', retainingSequenceNumber=0, timestamp=1668083263270, source='peer recovery'}}}]
2022.11.10 18:19:59 INFO es[][o.e.n.Node] stopping ...
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [projectmeasures] closing ... (reason [SHUTDOWN])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [issues] closing ... (reason [SHUTDOWN])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [issues/VZ6DTALkToeQgIcEr8PrtQ] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [components] closing ... (reason [SHUTDOWN])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [components/z8jTFy28Rq2m0AWGxQGyuw] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [projectmeasures/SWp38y_dTeW_i3HApsNVsQ] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [metadatas] closing ... (reason [SHUTDOWN])
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [metadatas/N-wJ8qPTTTyTYoXbXg3F8g] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [views] closing ... (reason [SHUTDOWN])
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [views/Dv2T0qmGRX2UjXF3FDCAMw] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:19:59 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:19:59 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown])
2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [3] closing...
(reason: [shutdown]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [metadatas/N-wJ8qPTTTyTYoXbXg3F8g] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [users] closing ... (reason [SHUTDOWN]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.IndicesService] [users/qbsuDVZZRrqTwp7HSmmTgw] closing index service (reason [SHUTDOWN][shutdown]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 18:19:59 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.IndexService] [4] closing... 
(reason: [shutdown]) 2022.11.10 18:19:59 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 18:19:59 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 18:19:59 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 18:20:00 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 18:20:00 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 18:20:00 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndicesService] [users/qbsuDVZZRrqTwp7HSmmTgw] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndicesService] [rules] closing ... 
(reason [SHUTDOWN]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndicesService] [rules/ZPzru4r4QR6_c8p1MyrcNg] closing index service (reason [SHUTDOWN][shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown] 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 18:20:00 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] close acquired writeLock 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.t.Translog] translog closed 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 18:20:00 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:20:00 DEBUG es[][o.e.i.IndicesService] [views/Dv2T0qmGRX2UjXF3FDCAMw] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 18:20:00 DEBUG es[][o.e.i.e.Engine] engine closed [api] 2022.11.10 18:20:01 DEBUG es[][o.e.i.s.Store] store reference count on close: 0 2022.11.10 18:20:01 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown]) 2022.11.10 18:20:01 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:20:01 DEBUG es[][o.e.i.IndicesService] [components/z8jTFy28Rq2m0AWGxQGyuw] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 18:20:01 DEBUG es[][o.e.i.IndicesService] [issues/VZ6DTALkToeQgIcEr8PrtQ] closed... (reason [SHUTDOWN][shutdown]) 2022.11.10 18:20:01 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 18:20:01 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:20:01 DEBUG es[][o.e.i.IndicesService] [projectmeasures/SWp38y_dTeW_i3HApsNVsQ] closed... 
2022.11.10 18:20:01 DEBUG es[][o.e.a.a.c.n.s.TransportNodesStatsAction] failed to execute on node [NoJ4WfHARK-DEgu5GKXOeg]
org.elasticsearch.transport.SendRequestTransportException: [sonarqube][127.0.0.1:34357][cluster:monitor/nodes/stats[n]]
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:988) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:874) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:797) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.start(TransportNodesAction.java:246) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:123) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:40) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:73) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:708) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.nodesStats(AbstractClient.java:806) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService$AsyncRefresh.execute(InternalClusterInfoService.java:174) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService.refreshAsync(InternalClusterInfoService.java:422) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService$RefreshScheduler.lambda$getListener$0(InternalClusterInfoService.java:383) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:718) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.threadpool.ThreadPool$1.run(ThreadPool.java:444) [elasticsearch-7.17.5.jar:7.17.5]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.elasticsearch.node.NodeClosedException: node closed {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:969) ~[elasticsearch-7.17.5.jar:7.17.5]
    ... 24 more
2022.11.10 18:20:01 DEBUG es[][o.e.a.a.i.s.TransportIndicesStatsAction] failed to execute [indices:monitor/stats] on node [NoJ4WfHARK-DEgu5GKXOeg]
org.elasticsearch.transport.SendRequestTransportException: [sonarqube][127.0.0.1:34357][indices:monitor/stats[n]]
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:988) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:874) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:797) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.sendNodeRequest(TransportBroadcastByNodeAction.java:349) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.start(TransportBroadcastByNodeAction.java:335) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:258) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:69) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:73) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1303) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.stats(AbstractClient.java:1629) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService$AsyncRefresh.execute(InternalClusterInfoService.java:216) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService.refreshAsync(InternalClusterInfoService.java:422) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService$RefreshScheduler.lambda$getListener$0(InternalClusterInfoService.java:383) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:718) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.threadpool.ThreadPool$1.run(ThreadPool.java:444) [elasticsearch-7.17.5.jar:7.17.5]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.elasticsearch.node.NodeClosedException: node closed {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:969) ~[elasticsearch-7.17.5.jar:7.17.5]
    ... 25 more
2022.11.10 18:20:01 WARN es[][o.e.c.InternalClusterInfoService] failed to retrieve stats for node [NoJ4WfHARK-DEgu5GKXOeg]
org.elasticsearch.transport.SendRequestTransportException: [sonarqube][127.0.0.1:34357][cluster:monitor/nodes/stats[n]]
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:988) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:874) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:797) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.start(TransportNodesAction.java:246) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:123) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:40) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:73) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:708) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.nodesStats(AbstractClient.java:806) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService$AsyncRefresh.execute(InternalClusterInfoService.java:174) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService.refreshAsync(InternalClusterInfoService.java:422) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService$RefreshScheduler.lambda$getListener$0(InternalClusterInfoService.java:383) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:718) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.threadpool.ThreadPool$1.run(ThreadPool.java:444) ~[elasticsearch-7.17.5.jar:7.17.5]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.elasticsearch.node.NodeClosedException: node closed {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:969) ~[elasticsearch-7.17.5.jar:7.17.5]
    ... 24 more
2022.11.10 18:20:02 WARN es[][o.e.c.InternalClusterInfoService] failed to retrieve shard stats from node [NoJ4WfHARK-DEgu5GKXOeg]
org.elasticsearch.action.FailedNodeException: Failed node [NoJ4WfHARK-DEgu5GKXOeg]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.onNodeFailure(TransportBroadcastByNodeAction.java:398) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction$1.handleException(TransportBroadcastByNodeAction.java:367) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService$4.handleException(TransportService.java:853) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1481) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService$5.doRun(TransportService.java:1016) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:288) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:990) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:874) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:797) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.sendNodeRequest(TransportBroadcastByNodeAction.java:349) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.start(TransportBroadcastByNodeAction.java:335) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:258) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:69) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:73) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1303) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.stats(AbstractClient.java:1629) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService$AsyncRefresh.execute(InternalClusterInfoService.java:216) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService.refreshAsync(InternalClusterInfoService.java:422) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.InternalClusterInfoService$RefreshScheduler.lambda$getListener$0(InternalClusterInfoService.java:383) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:718) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.threadpool.ThreadPool$1.run(ThreadPool.java:444) [elasticsearch-7.17.5.jar:7.17.5]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.elasticsearch.transport.SendRequestTransportException: [sonarqube][127.0.0.1:34357][indices:monitor/stats[n]]
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:988) ~[elasticsearch-7.17.5.jar:7.17.5]
    ... 25 more
Caused by: org.elasticsearch.node.NodeClosedException: node closed {sonarqube}{NoJ4WfHARK-DEgu5GKXOeg}{FZuQHGK4QEmWI50qT6Z-LQ}{127.0.0.1}{127.0.0.1:34357}{cdfhimrsw}{rack_id=sonarqube}
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:969) ~[elasticsearch-7.17.5.jar:7.17.5]
    ... 25 more
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_7], userData[{local_checkpoint=269, max_unsafe_auto_id_timestamp=-1, translog_uuid=RXIm6O9fSW2z7KP4FL2biQ, min_retained_seq_no=263, history_uuid=rQ8wy4RFQKOWF5FBzTyM0A, es_version=7.17.5, max_seq_no=269}]}], last commit [CommitPoint{segment[segments_7], userData[{local_checkpoint=269, max_unsafe_auto_id_timestamp=-1, translog_uuid=RXIm6O9fSW2z7KP4FL2biQ, min_retained_seq_no=263, history_uuid=rQ8wy4RFQKOWF5FBzTyM0A, es_version=7.17.5, max_seq_no=269}]}]
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] Delete index commit [CommitPoint{segment[segments_6], userData[{es_version=7.17.5, history_uuid=rQ8wy4RFQKOWF5FBzTyM0A, local_checkpoint=262, max_seq_no=262, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=257, translog_uuid=RXIm6O9fSW2z7KP4FL2biQ}]}]
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] new commit on flush, hasUncommittedChanges:true, force:false, shouldPeriodicallyFlush:false
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:20:02 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:20:02 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:20:02 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:20:02 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2022.11.10 18:20:02 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_7], userData[{local_checkpoint=274, max_unsafe_auto_id_timestamp=-1, translog_uuid=A-DhRTu6R06UjeegIwa7MA, min_retained_seq_no=270, history_uuid=ay94HNIbT4Cqi04CJWiRAA, es_version=7.17.5, max_seq_no=274}]}], last commit [CommitPoint{segment[segments_7], userData[{local_checkpoint=274, max_unsafe_auto_id_timestamp=-1, translog_uuid=A-DhRTu6R06UjeegIwa7MA, min_retained_seq_no=270, history_uuid=ay94HNIbT4Cqi04CJWiRAA, es_version=7.17.5, max_seq_no=274}]}]
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] Delete index commit [CommitPoint{segment[segments_6], userData[{es_version=7.17.5, history_uuid=ay94HNIbT4Cqi04CJWiRAA, local_checkpoint=269, max_seq_no=269, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=264, translog_uuid=A-DhRTu6R06UjeegIwa7MA}]}]
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] new commit on flush, hasUncommittedChanges:true, force:false, shouldPeriodicallyFlush:false
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:20:02 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:20:02 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:20:02 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:20:02 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2022.11.10 18:20:02 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:20:02 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:20:02 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:20:02 DEBUG es[][o.e.i.IndicesService] [rules/ZPzru4r4QR6_c8p1MyrcNg] closed... (reason [SHUTDOWN][shutdown])
2022.11.10 18:20:02 INFO es[][o.e.n.Node] stopped
2022.11.10 18:20:02 INFO es[][o.e.n.Node] closing ...
2022.11.10 18:20:02 INFO es[][o.e.n.Node] closed
2022.11.10 18:29:01 DEBUG es[][o.e.b.SystemCallFilter] Linux seccomp filter installation successful, threads: [all]
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] java.class.path: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar:/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] sun.boot.class.path: null
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar
2022.11.10 18:29:02 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 18:29:03 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar
2022.11.10 18:29:03 DEBUG es[][o.e.c.n.IfConfig] configuration: lo inet 127.0.0.1 netmask:255.0.0.0 scope:host inet6 ::1 prefixlen:128 scope:host UP LOOPBACK mtu:65536 index:1 eth0 inet 192.168.161.171 netmask:255.255.255.0 broadcast:192.168.161.255 scope:site hardware 00:50:56:A3:2C:79 UP MULTICAST mtu:1500 index:2
2022.11.10 18:29:08 INFO es[][o.e.n.Node] version[7.17.5], pid[1798], build[default/tar/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/5.14.21-150400.24.28-default/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.16/11.0.16+8-suse-150000.3.83.1-x8664]
2022.11.10 18:29:08 INFO es[][o.e.n.Node] JVM home [/usr/lib64/jvm/java-11-openjdk-11]
2022.11.10 18:29:09 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/home/chili/sonarqube-9.7.0.61563/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/chili/sonarqube-9.7.0.61563/elasticsearch, -Des.path.conf=/home/chili/sonarqube-9.7.0.61563/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2022.11.10 18:29:09 DEBUG es[][o.e.n.Node] using config [/home/chili/sonarqube-9.7.0.61563/temp/conf/es], data [[/home/chili/sonarqube-9.7.0.61563/data/es7]], logs [/home/chili/sonarqube-9.7.0.61563/logs], plugins [/home/chili/sonarqube-9.7.0.61563/elasticsearch/plugins]
2022.11.10 18:29:10 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 18:29:10 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar
2022.11.10 18:29:10 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar
2022.11.10 18:29:10 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-7.2.jar
2022.11.10 18:29:10 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar
2022.11.10 18:29:10 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar
2022.11.10 18:29:10 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar
2022.11.10 18:29:10 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-util-7.2.jar
2022.11.10 18:29:10 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar
2022.11.10 18:29:10 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-commons-7.2.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/lang-painless-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/spi/elasticsearch-scripting-painless-spi-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-tree-7.2.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/asm-analysis-7.2.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 18:29:11 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 18:29:12 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar
2022.11.10 18:29:12 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar
2022.11.10 18:29:12 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar
2022.11.10 18:29:12 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar
2022.11.10 18:29:12 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar
2022.11.10 18:29:15 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 18:29:15 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar
2022.11.10 18:29:15 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/parent-join/parent-join-client-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar
2022.11.10 18:29:16 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/elasticsearch-ssl-config-7.17.5.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/commons-logging-1.1.3.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar
2022.11.10 18:29:17 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpcore-4.4.12.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/elasticsearch-rest-client-7.17.5.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpasyncclient-4.1.4.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar
2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar:
/home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpclient-4.5.10.jar 2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/reindex-client-7.17.5.jar 2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/httpcore-nio-4.4.12.jar 2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar 2022.11.10 18:29:18 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/reindex/commons-codec-1.11.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/spi/elasticsearch-scripting-painless-spi-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/analysis-common/analysis-common-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/lang-painless/spi/elasticsearch-scripting-painless-spi-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: 
/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/analysis-common/analysis-common-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar 2022.11.10 18:29:19 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: 
/home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar 2022.11.10 18:29:20 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar 2022.11.10 18:29:21 DEBUG es[][o.e.j.JarHell] java.home: /usr/lib64/jvm/java-11-openjdk-11 2022.11.10 18:29:21 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-core-8.11.1.jar 2022.11.10 18:29:21 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/joda-time-2.10.10.jar 2022.11.10 18:29:21 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-buffer-4.1.66.Final.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/snakeyaml-1.26.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-codec-http-4.1.66.Final.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-yaml-2.10.4.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-7.17.5.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-backward-codecs-8.11.1.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-spatial3d-8.11.1.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jna-5.10.0.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-smile-2.10.4.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-cli-7.17.5.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-memory-8.11.1.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-join-8.11.1.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-lz4-7.17.5.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lz4-java-1.8.0.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-log4j-7.17.5.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-analyzers-common-8.11.1.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/t-digest-3.2.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/hppc-0.8.1.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-resolver-4.1.66.Final.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-codec-4.1.66.Final.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: 
/home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-transport-4.1.66.Final.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queries-8.11.1.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-handler-4.1.66.Final.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-core-2.10.4.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-sandbox-8.11.1.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/log4j-api-2.17.1.jar 2022.11.10 18:29:22 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-plugin-classloader-7.17.5.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/java-version-checker-7.17.5.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-x-content-7.17.5.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-geo-7.17.5.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-highlighter-8.11.1.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-launchers-7.17.5.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jackson-dataformat-cbor-2.10.4.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-core-7.17.5.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/jopt-simple-5.0.2.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/HdrHistogram-2.1.9.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-misc-8.11.1.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-grouping-8.11.1.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-suggest-8.11.1.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/transport-netty4-client-7.17.5.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/elasticsearch-secure-sm-7.17.5.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/modules/transport-netty4/netty-common-4.1.66.Final.jar 2022.11.10 18:29:23 DEBUG es[][o.e.j.JarHell] examining jar: /home/chili/sonarqube-9.7.0.61563/elasticsearch/lib/lucene-queryparser-8.11.1.jar 2022.11.10 18:29:23 INFO es[][o.e.p.PluginsService] loaded module [analysis-common] 2022.11.10 18:29:23 INFO es[][o.e.p.PluginsService] loaded module [lang-painless] 2022.11.10 18:29:23 INFO es[][o.e.p.PluginsService] loaded module [parent-join] 
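Note: the JarHell pass above walks every jar on the classpath once per module (hence the repeated scans of lib/ interleaved with each module's own jars) and aborts startup if the same class resolves from two different jars. A minimal sketch of the idea, assuming a directory of jars as input; this is illustrative only, not Elasticsearch's actual org.elasticsearch.bootstrap.JarHell implementation:

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;
    import java.util.jar.JarFile;

    // Flag .class entries that occur in more than one jar in a directory.
    public class JarHellSketch {
        public static void main(String[] args) throws IOException {
            Map<String, String> seen = new HashMap<>(); // class entry -> first jar
            try (DirectoryStream<Path> jars =
                     Files.newDirectoryStream(Paths.get(args[0]), "*.jar")) {
                for (Path jar : jars) {
                    System.out.println("examining jar: " + jar);
                    try (JarFile jf = new JarFile(jar.toFile())) {
                        jf.stream()
                          .map(e -> e.getName())
                          .filter(n -> n.endsWith(".class"))
                          .forEach(n -> {
                              String prev = seen.putIfAbsent(n, jar.toString());
                              if (prev != null && !prev.equals(jar.toString())) {
                                  System.out.printf("jar hell: %s in %s and %s%n",
                                                    n, prev, jar);
                              }
                          });
                    }
                }
            }
        }
    }

The real check also compares against classes shipped with the runtime itself, which is why each scan above is preceded by a java.home line.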
2022.11.10 18:29:23 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2022.11.10 18:29:23 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2022.11.10 18:29:23 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2022.11.10 18:29:23 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2022.11.10 18:29:23 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2022.11.10 18:29:23 INFO es[][o.e.p.PluginsService] no plugins loaded
2022.11.10 18:29:24 DEBUG es[][o.e.e.NodeEnvironment] using node location [[DataPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0, indicesPath=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices, fileStore=/ (/dev/sda2), majorDeviceNumber=8, minorDeviceNumber=2}]], local_lock_id [0]
2022.11.10 18:29:24 DEBUG es[][o.e.e.NodeEnvironment] node data locations details: -> /home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0, free_space [22.4gb], usable_space [19.7gb], total_space [51gb], mount [/ (/dev/sda2)], type [ext4]
2022.11.10 18:29:24 INFO es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2022.11.10 18:29:26 INFO es[][o.e.n.Node] node name [sonarqube], node ID [W5aIUzOyQZ2OIplyipcVyA], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2022.11.10 18:29:26 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_coordination], size [1], queue size [1k]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [snapshot_meta], core [1], max [6], keep alive [30s]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [4], keep alive [5m]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_critical_write], size [1], queue size [1.5k]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [listener], size [1], queue size [unbounded]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [1], keep alive [5m]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_write], size [1], queue size [1k]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [1], keep alive [5m]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [auto_complete], size [1], queue size [100]
2022.11.10 18:29:27 DEBUG es[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [sonarqube/search] will adjust queue by [50] when determining automatic queue size
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search], size [4], queue size [1k]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [1], keep alive [5m]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [4], keep alive [5m]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [2], keep alive [5m]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [get], size [2], queue size [1k]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [analyze], size [1], queue size [16]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_read], size [1], queue size [2k]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_critical_read], size [1], queue size [2k]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [write], size [2], queue size [10k]
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [1], keep alive [5m]
2022.11.10 18:29:27 DEBUG es[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [sonarqube/search_throttled] will adjust queue by [50] when determining automatic queue size
2022.11.10 18:29:27 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_throttled], size [1], queue size [100]
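Note: the pools above come in two shapes: fixed (size/queue size) and scaling (core/max/keep alive). In plain JDK terms, a fixed pool such as write, size [2], queue size [10k], behaves roughly like the sketch below: two threads and a bounded queue, so work beyond 10k queued tasks is rejected rather than buffered without limit. Illustrative only; Elasticsearch uses its own executor implementations:

    import java.util.concurrent.*;

    // Rough JDK analogue of the fixed "write" pool logged above.
    public class WritePoolSketch {
        public static void main(String[] args) {
            ExecutorService write = new ThreadPoolExecutor(
                2, 2,                       // core == max: a fixed pool of two threads
                0L, TimeUnit.MILLISECONDS,  // no keep-alive needed for core threads
                new ArrayBlockingQueue<>(10_000),          // bounded queue, as in the log
                new ThreadPoolExecutor.AbortPolicy());     // reject when the queue is full
            write.submit(() -> System.out.println("indexing task"));
            write.shutdown();
        }
    }

A scaling pool (for example generic, core [4], max [128], keep alive [30s]) maps to distinct core/max sizes with a keep-alive after which idle extra threads are retired.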
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.l.InternalLoggerFactory] Using Log4J2 as the default logging framework
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: true
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe: unavailable (io.netty.noUnsafe)
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.PlatformDependent0] Java version: 11
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): unavailable
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.PlatformDependent] maxDirectMemory: 536870912 bytes (maybe)
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /home/chili/sonarqube-9.7.0.61563/temp (java.io.tmpdir)
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2022.11.10 18:29:28 DEBUG es[][i.n.u.i.CleanerJava9] java.nio.ByteBuffer.cleaner(): unavailable
java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable
    at io.netty.util.internal.CleanerJava9.<clinit>(CleanerJava9.java:68) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:193) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.ConstantPool.<init>(ConstantPool.java:34) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at org.elasticsearch.http.netty4.Netty4HttpServerTransport.<clinit>(Netty4HttpServerTransport.java:294) [transport-netty4-client-7.17.5.jar:7.17.5]
    at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:45) [transport-netty4-client-7.17.5.jar:7.17.5]
    at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:84) [elasticsearch-7.17.5.jar:7.17.5]
    at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271) [?:?]
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) [?:?]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) [?:?]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) [?:?]
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) [?:?]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [?:?]
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) [?:?]
    at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:84) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.node.Node.<init>(Node.java:483) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.node.Node.<init>(Node.java:309) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) [elasticsearch-cli-7.17.5.jar:7.17.5]
    at org.elasticsearch.cli.Command.main(Command.java:77) [elasticsearch-cli-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) [elasticsearch-7.17.5.jar:7.17.5]
2022.11.10 18:29:29 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: true
2022.11.10 18:30:24 DEBUG es[][o.e.s.ScriptService] using script cache with max_size [3000], expire [0s]
2022.11.10 18:30:38 DEBUG es[][o.e.d.z.ElectMasterService] using minimum_master_nodes [-1]
2022.11.10 18:30:51 DEBUG es[][o.e.m.j.JvmGcMonitorService] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
2022.11.10 18:30:51 DEBUG es[][o.e.m.o.OsService] using refresh_interval [1s]
2022.11.10 18:30:51 DEBUG es[][o.e.m.p.ProcessService] using refresh_interval [1s]
2022.11.10 18:30:51 DEBUG es[][o.e.m.j.JvmService] using refresh_interval [1s]
2022.11.10 18:30:51 DEBUG es[][o.e.m.f.FsService] using refresh_interval [1s]
2022.11.10 18:30:51 DEBUG es[][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2022.11.10 18:30:51 DEBUG es[][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] using [cluster_concurrent_rebalance] with [2]
2022.11.10 18:30:53 DEBUG es[][o.e.c.r.a.d.ThrottlingAllocationDecider] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
2022.11.10 18:30:53 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [51.1mb] max filter count [10000]
2022.11.10 18:30:53 DEBUG es[][o.e.i.IndexingMemoryController] using indexing buffer size [51.1mb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
2022.11.10 18:31:01 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2022.11.10 18:31:01 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.targetRecords: 4
2022.11.10 18:31:01 INFO es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
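Note: the JvmGcMonitorService settings above drive the [gc] overhead lines seen elsewhere in this log: overhead [50, 25, 10] are the percentages of each 1s interval spent collecting at which WARN, INFO, and DEBUG fire. A rough sketch of that sampling using only standard JMX beans (an assumed simplification, not the actual monitor):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Sample cumulative GC time each interval and report the fraction of
    // wall time spent collecting, with thresholds mirroring overhead [50, 25, 10].
    public class GcOverheadSketch {
        public static void main(String[] args) throws InterruptedException {
            long last = totalGcMillis();
            while (true) {
                Thread.sleep(1_000); // interval [1s], as configured in the log
                long now = totalGcMillis();
                long pct = (now - last) * 100 / 1_000;
                last = now;
                if (pct >= 50)      System.out.println("WARN  gc overhead " + pct + "%");
                else if (pct >= 25) System.out.println("INFO  gc overhead " + pct + "%");
                else if (pct >= 10) System.out.println("DEBUG gc overhead " + pct + "%");
            }
        }
        static long totalGcMillis() {
            long sum = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                sum += Math.max(0, gc.getCollectionTime()); // -1 when unsupported
            }
            return sum;
        }
    }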
2022.11.10 18:31:01 DEBUG es[][o.e.h.n.Netty4HttpServerTransport] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[64kb], max_composite_buffer_components[69905], pipelining_max_events[10000]
2022.11.10 18:31:01 INFO es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2022.11.10 18:31:02 DEBUG es[][o.e.d.SettingsBasedSeedHostsProvider] using initial hosts [127.0.0.1]
2022.11.10 18:31:02 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2022.11.10 18:31:09 INFO es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2022.11.10 18:31:13 DEBUG es[][o.e.n.Node] initializing HTTP handlers ...
2022.11.10 18:31:14 INFO es[][o.e.n.Node] initialized
2022.11.10 18:31:14 INFO es[][o.e.n.Node] starting ...
2022.11.10 18:31:14 DEBUG es[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 4
2022.11.10 18:31:15 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2022.11.10 18:31:15 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2022.11.10 18:31:15 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: true
2022.11.10 18:31:15 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2022.11.10 18:31:15 DEBUG es[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: unavailable
2022.11.10 18:31:16 DEBUG es[][o.e.t.n.Netty4Transport] using profile[default], worker_count[2], port[37767], bind_host[[127.0.0.1]], publish_host[[127.0.0.1]], receive_predictor[64kb->64kb]
2022.11.10 18:31:16 DEBUG es[][o.e.t.TcpTransport] binding server bootstrap to: [127.0.0.1]
2022.11.10 18:31:16 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.processId: 1798 (auto-detected)
2022.11.10 18:31:16 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv4Stack: false
2022.11.10 18:31:16 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv6Addresses: false
2022.11.10 18:31:16 DEBUG es[][i.n.u.NetUtilInitializations] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
2022.11.10 18:31:16 DEBUG es[][i.n.u.NetUtil] /proc/sys/net/core/somaxconn: 4096
2022.11.10 18:31:16 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 00:50:56:ff:fe:a3:2c:79 (auto-detected)
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 4
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 0
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2022.11.10 18:31:17 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
2022.11.10 18:31:17 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2022.11.10 18:31:17 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
2022.11.10 18:31:17 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2022.11.10 18:31:17 DEBUG es[][o.e.t.TcpTransport] Bound profile [default] to address {127.0.0.1:37767}
2022.11.10 18:31:17 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:37767}, bound_addresses {127.0.0.1:37767}
2022.11.10 18:31:21 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [1982ms]; wrote full state with [0] indices
2022.11.10 18:31:21 INFO es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2022.11.10 18:31:21 DEBUG es[][o.e.d.SeedHostsResolver] using max_concurrent_resolvers [10], resolver timeout [5s]
2022.11.10 18:31:21 DEBUG es[][o.e.t.TransportService] now accepting incoming requests
2022.11.10 18:31:21 DEBUG es[][o.e.c.c.Coordinator] startInitialJoin: coordinator becoming CANDIDATE in term 0 (was null, lastKnownLeader was [Optional.empty])
2022.11.10 18:31:21 INFO es[][o.e.c.c.Coordinator] setting initial configuration to VotingConfiguration{W5aIUzOyQZ2OIplyipcVyA}
2022.11.10 18:31:22 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [652ms]; wrote global metadata [true] and metadata for [0] indices and skipped [0] unchanged indices
2022.11.10 18:31:22 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=34, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2022.11.10 18:31:22 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=34, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} starting election
2022.11.10 18:31:22 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=626, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2022.11.10 18:31:22 DEBUG es[][o.e.n.Node] waiting to join the cluster. timeout [30s]
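Note: what follows is a single-node master election: a pre-vote round, then a join in term 1, won as soon as the node's own vote forms a majority of VotingConfiguration{W5aIUzOyQZ2OIplyipcVyA}. A toy version of the majority test behind "handleJoin: election won" (illustrative only, not the actual CoordinationState logic):

    import java.util.Set;

    // A candidate wins once it holds votes from a majority of the voting
    // configuration. With a one-node configuration, its own vote suffices.
    public class QuorumSketch {
        static boolean isQuorum(Set<String> votes, Set<String> votingConfig) {
            long agreed = votes.stream().filter(votingConfig::contains).count();
            return agreed * 2 > votingConfig.size();
        }
        public static void main(String[] args) {
            Set<String> config = Set.of("W5aIUzOyQZ2OIplyipcVyA");
            System.out.println(isQuorum(Set.of("W5aIUzOyQZ2OIplyipcVyA"), config)); // true
        }
    }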
2022.11.10 18:31:22 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={}, electionStarted=false, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=0}, isClosed=false} requesting pre-votes from [{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}]
2022.11.10 18:31:22 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}=PreVoteResponse{currentTerm=0, lastAcceptedTerm=0, lastAcceptedVersion=0}}, electionStarted=true, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=0}, isClosed=false} added PreVoteResponse{currentTerm=0, lastAcceptedTerm=0, lastAcceptedVersion=0} from {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}, starting election
2022.11.10 18:31:22 DEBUG es[][o.e.c.c.Coordinator] starting election with StartJoinRequest{term=1,node={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}}
2022.11.10 18:31:22 DEBUG es[][o.e.c.c.Coordinator] joinLeaderInTerm: for [{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}] with term 1
2022.11.10 18:31:22 DEBUG es[][o.e.c.c.CoordinationState] handleStartJoin: leaving term [0] due to StartJoinRequest{term=1,node={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}}
2022.11.10 18:31:23 DEBUG es[][o.e.c.c.JoinHelper] attempting to join {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=0, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}}]}
2022.11.10 18:31:23 DEBUG es[][o.e.c.c.JoinHelper] successful response to StartJoinRequest{term=1,node={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}} from {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}
2022.11.10 18:31:23 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: added join Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}} from [{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}] for election, electionWon=true lastAcceptedTerm=0 lastAcceptedVersion=0
2022.11.10 18:31:23 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: election won in term [1] with VoteCollection{votes=[W5aIUzOyQZ2OIplyipcVyA], joins=[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}}]}
2022.11.10 18:31:23 DEBUG es[][o.e.c.c.Coordinator] handleJoinRequest: coordinator becoming LEADER in term 1 (was CANDIDATE, lastKnownLeader was [Optional.empty])
2022.11.10 18:31:23 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=626, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} starting election
2022.11.10 18:31:23 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=500ms, thisAttempt=2, maxDelayMillis=300, delayMillis=666, ElectionScheduler{attempt=3, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2022.11.10 18:31:23 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [elected-as-master ([1] nodes joined)[{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 18:31:23 DEBUG es[][o.e.c.c.JoinHelper] received a join request for an existing node [{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}]
2022.11.10 18:31:23 DEBUG es[][o.e.c.s.MasterService] took [185ms] to compute cluster state update for [elected-as-master ([1] nodes joined)[{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 18:31:23 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [1], source [elected-as-master ([1] nodes joined)[{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 18:31:23 INFO es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}]}
2022.11.10 18:31:23 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [1]
2022.11.10 18:31:23 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=500ms, thisAttempt=2, maxDelayMillis=300, delayMillis=666, ElectionScheduler{attempt=3, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} not starting election
2022.11.10 18:31:23 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [1] with size [344]
2022.11.10 18:31:24 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [651ms]; wrote full state with [0] indices
2022.11.10 18:31:24 INFO es[][o.e.c.c.CoordinationState] cluster UUID set to [znuYE-gDTMC-ZKXNjTuBpQ]
2022.11.10 18:31:24 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [446ms]; wrote global metadata [true] and metadata for [0] indices and skipped [0] unchanged indices
2022.11.10 18:31:24 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=1}]: execute
2022.11.10 18:31:24 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [1], source [Publication{term=1, version=1}]
2022.11.10 18:31:24 INFO es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
2022.11.10 18:31:24 DEBUG es[][o.e.c.NodeConnectionsService] connecting to {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}
2022.11.10 18:31:24 DEBUG es[][o.e.c.NodeConnectionsService] connected to {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}
2022.11.10 18:31:24 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 1
2022.11.10 18:31:24 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 1
2022.11.10 18:31:25 DEBUG es[][o.e.i.SystemIndexManager] Waiting until state has been recovered
2022.11.10 18:31:25 DEBUG es[][o.e.c.l.NodeAndClusterIdStateListener] Received cluster state update. Setting nodeId=[W5aIUzOyQZ2OIplyipcVyA] and clusterUuid=[znuYE-gDTMC-ZKXNjTuBpQ]
2022.11.10 18:31:25 DEBUG es[][o.e.g.GatewayService] performing state recovery...
2022.11.10 18:31:25 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=1}]: took [426ms] done applying updated cluster state (version: 1, uuid: 8Nmv-IQUTX-HehdHq62q9Q)
2022.11.10 18:31:25 DEBUG es[][o.e.c.c.JoinHelper] releasing [1] connections on successful cluster state application
2022.11.10 18:31:25 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=1}
2022.11.10 18:31:25 DEBUG es[][o.e.c.c.JoinHelper] successfully joined {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=0, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube}}]}
2022.11.10 18:31:25 DEBUG es[][o.e.c.s.MasterService] took [2ms] to notify listeners on successful publication of cluster state (version: 1, uuid: 8Nmv-IQUTX-HehdHq62q9Q) for [elected-as-master ([1] nodes joined)[{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2022.11.10 18:31:25 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(post-join reroute)]
2022.11.10 18:31:25 DEBUG es[][o.e.m.j.JvmGcMonitorService] [gc][10] overhead, spent [165ms] collecting in the last [1s]
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.MasterService] took [533ms] to compute cluster state update for [cluster_reroute(post-join reroute)]
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on unchanged cluster state for [cluster_reroute(post-join reroute)]
2022.11.10 18:31:26 DEBUG es[][o.e.h.AbstractHttpServerTransport] Bound http to address {127.0.0.1:9001}
2022.11.10 18:31:26 INFO es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [update snapshot after shards started [false] or node configuration changed [true]]
2022.11.10 18:31:26 INFO es[][o.e.n.Node] started
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.MasterService] took [1ms] to compute cluster state update for [update snapshot after shards started [false] or node configuration changed [true]]
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [update snapshot after shards started [false] or node configuration changed [true]]
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [local-gateway-elected-state]
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.MasterService] took [12ms] to compute cluster state update for [local-gateway-elected-state]
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [2], source [local-gateway-elected-state]
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [2]
2022.11.10 18:31:26 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [2] with size [296]
2022.11.10 18:31:26 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [0] indices and skipped [0] unchanged indices
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=2}]: execute
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [2], source [Publication{term=1, version=2}]
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 2
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 2
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 2
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=2}]: took [475ms] done applying updated cluster state (version: 2, uuid: wwhb0ItGSSivlY7zR09jgQ)
2022.11.10 18:31:26 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=2}
2022.11.10 18:31:26 INFO es[][o.e.g.GatewayService] recovered [0] indices into cluster_state
2022.11.10 18:31:26 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 2, uuid: wwhb0ItGSSivlY7zR09jgQ) for [local-gateway-elected-state]
2022.11.10 18:31:28 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkAccessible: true
2022.11.10 18:31:28 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkBounds: true
2022.11.10 18:31:28 DEBUG es[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@207c9a5e
2022.11.10 18:31:28 DEBUG es[][i.n.h.c.c.Brotli] brotli4j not in the classpath; Brotli support will be unavailable.
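Note: once "started" is logged and HTTP is bound to 127.0.0.1:9001, the node can be probed over its REST API. A minimal JDK-only client, with the port taken from this log (adjust if yours differs):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Query the standard _cluster/health endpoint of the embedded node.
    public class HealthCheckSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:9001/_cluster/health?pretty"))
                .GET()
                .build();
            HttpResponse<String> resp =
                client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.statusCode()); // 200 when the node is up
            System.out.println(resp.body());       // "green" here, with [0] indices recovered
        }
    }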
2022.11.10 18:31:29 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxCapacityPerThread: disabled
2022.11.10 18:31:29 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxSharedCapacityFactor: disabled
2022.11.10 18:31:29 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.linkCapacity: disabled
2022.11.10 18:31:29 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.ratio: disabled
2022.11.10 18:31:29 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.delayedQueue.ratio: disabled
2022.11.10 18:33:15 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded
2022.11.10 18:35:15 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded
2022.11.10 18:35:52 DEBUG es[][r.suppressed] path: /metadatas, params: {index=metadatas}
org.elasticsearch.index.IndexNotFoundException: no such index [metadatas]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:1250) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:1188) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:1144) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:292) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:270) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:92) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:53) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:24) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.checkBlockIfStateRecovered(TransportMasterNodeAction.java:138) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.access$000(TransportMasterNodeAction.java:52) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:185) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:158) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:52) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.action.RestCancellableNodeClient.doExecute(RestCancellableNodeClient.java:81) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1303) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getIndex(AbstractClient.java:1399) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction.lambda$prepareRequest$1(RestGetIndicesAction.java:86) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:109) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:327) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:393) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:245) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:382) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:461) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:357) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:35) [transport-netty4-client-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:19) [transport-netty4-client-7.17.5.jar:7.17.5]
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:48) [transport-netty4-client-7.17.5.jar:7.17.5]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at java.lang.Thread.run(Thread.java:829) [?:?]
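The suppressed trace above is the server side of a plain HTTP 404: a GET /metadatas (served by RestGetIndicesAction) arrived before the index existed, and at DEBUG level the r.suppressed logger prints the full IndexNotFoundException even though the caller only sees the status code. Below is a minimal sketch of that probe from a client's point of view, using the Elasticsearch low-level REST client; the host and port (127.0.0.1:9001) are the publish_address from this node's startup, while the class name and the create-on-404 reaction are illustrative assumptions, not SonarQube's actual code.

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;

// Hypothetical probe: checks for [metadatas] the way the trace above shows it being checked.
public class MetadatasProbe {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("127.0.0.1", 9001, "http")).build()) {
            try {
                Response ok = client.performRequest(new Request("GET", "/metadatas"));
                System.out.println("index exists: " + ok.getStatusLine());
            } catch (ResponseException e) {
                // The server logs the stack trace under r.suppressed; the client just gets a 404.
                if (e.getResponse().getStatusLine().getStatusCode() == 404) {
                    System.out.println("no such index [metadatas] -> create it, as the entries below do");
                }
            }
        }
    }
}

The create-index entries that follow are the expected follow-up to this 404.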
2022.11.10 18:35:54 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [create-index [metadatas], cause [api]]
2022.11.10 18:35:55 DEBUG es[][o.e.c.m.MetadataCreateIndexService] applying create index request using legacy templates []
2022.11.10 18:35:56 DEBUG es[][o.e.i.IndicesService] creating Index [[metadatas/Oth1bCT6T3iI8grVY9VGqw]], shards [1]/[0] - reason [CREATE_INDEX]
2022.11.10 18:36:07 INFO es[][o.e.c.m.MetadataCreateIndexService] [metadatas] creating index, cause [api], templates [], shards [1]/[0]
2022.11.10 18:36:09 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:36:09 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:36:09 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:36:09 WARN es[][o.e.c.s.MasterService] took [14.5s/14509ms] to compute cluster state update for [create-index [metadatas], cause [api]], which exceeds the warn threshold of [10s]
2022.11.10 18:36:09 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [3], source [create-index [metadatas], cause [api]]
2022.11.10 18:36:09 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [3]
2022.11.10 18:36:10 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [3] with uuid [TCyYmHiQSeSnJhELCXyHWg], diff size [1069]
2022.11.10 18:36:11 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [947ms]; wrote global metadata [false] and metadata for [1] indices and skipped [0] unchanged indices
2022.11.10 18:36:11 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=3}]: execute
2022.11.10 18:36:11 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [3], source [Publication{term=1, version=3}]
2022.11.10 18:36:11 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 3
2022.11.10 18:36:11 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 3
2022.11.10 18:36:11 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[metadatas/Oth1bCT6T3iI8grVY9VGqw]] creating index
2022.11.10 18:36:11 DEBUG es[][o.e.i.IndicesService] creating Index [[metadatas/Oth1bCT6T3iI8grVY9VGqw]], shards [1]/[0] - reason [CREATE_INDEX]
2022.11.10 18:36:11 DEBUG es[][o.e.i.c.IndicesClusterStateService] [metadatas][0] creating shard with primary term [1]
2022.11.10 18:36:12 DEBUG es[][o.e.i.IndexService] [metadatas][0] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/Oth1bCT6T3iI8grVY9VGqw/0, shard=[metadatas][0]}]
2022.11.10 18:36:12 DEBUG es[][o.e.i.IndexService] creating shard_id [metadatas][0]
2022.11.10 18:36:12 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:36:13 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:36:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:36:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 3
2022.11.10 18:36:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=3}]: took [4s] done applying updated cluster state (version: 3, uuid: TCyYmHiQSeSnJhELCXyHWg)
2022.11.10 18:36:15 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=3}
2022.11.10 18:36:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
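The entries above show the request only through its effect: a create-index task for [metadatas] with shards [1]/[0], i.e. one primary and zero replicas. A sketch of the kind of REST call that produces such a task follows; only the index name and shard/replica counts come from the log, the settings body shape is standard but illustrative here.

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

// Hypothetical re-creation of the create-index request logged above.
public class CreateMetadatasIndex {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("127.0.0.1", 9001, "http")).build()) {
            Request create = new Request("PUT", "/metadatas");
            // shards [1]/[0] in the log = number_of_shards 1, number_of_replicas 0.
            create.setJsonEntity("{\"settings\":{\"index\":{\"number_of_shards\":1,\"number_of_replicas\":0}}}");
            System.out.println(client.performRequest(create).getStatusLine());
        }
    }
}

Note the WARN above: computing this update took 14.5s on this host, well over the 10s threshold, so a client driving such calls needs a request timeout that tolerates a slow master.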
2022.11.10 18:36:15 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 3, uuid: TCyYmHiQSeSnJhELCXyHWg) for [create-index [metadatas], cause [api]]
2022.11.10 18:36:16 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:36:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:36:17 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=t7BGTBaSTu-5opvZ-fZOrA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Y5A402O6SESH0KTtofqQdA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=t7BGTBaSTu-5opvZ-fZOrA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Y5A402O6SESH0KTtofqQdA}]}]
2022.11.10 18:36:17 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:36:17 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [5.6s]
2022.11.10 18:36:17 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}]
2022.11.10 18:36:17 DEBUG es[][o.e.c.a.s.ShardStateAction] [metadatas][0] received shard started for [StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}]
2022.11.10 18:36:17 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:17 DEBUG es[][o.e.c.a.s.ShardStateAction] [metadatas][0] starting shard [metadatas][0], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=gG0gsDNPQh6z6gRo9DUdNA], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:07.926Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}])
2022.11.10 18:36:17 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[metadatas][0]]]).
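Right after the shard starts, the master begins executing [cluster_health (wait_for_events [LANGUID])] tasks (see the entries that follow): a client is polling _cluster/health and asking it to wait until even the lowest-priority (languid) pending cluster-state tasks have drained. A sketch of the equivalent request; wait_for_events=languid matches the logged task, while the status and timeout values are illustrative.

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Hypothetical health poll matching the [cluster_health (wait_for_events [LANGUID])] tasks below.
public class WaitForQuietCluster {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("127.0.0.1", 9001, "http")).build()) {
            Request health = new Request("GET", "/_cluster/health");
            health.addParameter("wait_for_events", "languid"); // drain all pending cluster-state tasks
            health.addParameter("wait_for_status", "green");   // illustrative; the log has just gone GREEN
            health.addParameter("timeout", "30s");
            Response response = client.performRequest(health);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}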
2022.11.10 18:36:17 DEBUG es[][o.e.c.s.MasterService] took [77ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:17 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [4], source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:17 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [4]
2022.11.10 18:36:18 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [4] with uuid [dCPGLwZXTUacTckCduj6bw], diff size [1050]
2022.11.10 18:36:18 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [628ms]; wrote global metadata [false] and metadata for [1] indices and skipped [0] unchanged indices
2022.11.10 18:36:18 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=4}]: execute
2022.11.10 18:36:18 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [4], source [Publication{term=1, version=4}]
2022.11.10 18:36:18 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 4
2022.11.10 18:36:18 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 4
2022.11.10 18:36:18 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:36:18 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101778756, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:36:18 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 4
2022.11.10 18:36:18 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=4}]: took [0s] done applying updated cluster state (version: 4, uuid: dCPGLwZXTUacTckCduj6bw)
2022.11.10 18:36:18 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=4}
2022.11.10 18:36:19 DEBUG es[][o.e.c.s.MasterService] took [4ms] to notify listeners on successful publication of cluster state (version: 4, uuid: dCPGLwZXTUacTckCduj6bw) for [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [gG0gsDNPQh6z6gRo9DUdNA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:19 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:36:19 DEBUG es[][o.e.c.s.MasterService] took [128ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:36:19 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:36:19 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:36:19 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:36:20 DEBUG es[][o.e.c.s.MasterService] took [290ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:36:21 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [put-mapping [metadatas/Oth1bCT6T3iI8grVY9VGqw][metadata]]
2022.11.10 18:36:22 DEBUG es[][o.e.c.m.MetadataMappingService] [metadatas/Oth1bCT6T3iI8grVY9VGqw] create_mapping [metadata] with source [{"metadata":{"dynamic":"false","properties":{"value":{"type":"keyword","index":false,"store":true,"norms":true}}}}]
2022.11.10 18:36:22 DEBUG es[][o.e.c.s.MasterService] took [753ms] to compute cluster state update for [put-mapping [metadatas/Oth1bCT6T3iI8grVY9VGqw][metadata]]
2022.11.10 18:36:22 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [5], source [put-mapping [metadatas/Oth1bCT6T3iI8grVY9VGqw][metadata]]
2022.11.10 18:36:22 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [5]
2022.11.10 18:36:22 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [5] with uuid [LizzNHVoSjmUsWQRQhfntQ], diff size [1178]
2022.11.10 18:36:23 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [260ms]; wrote global metadata [false] and metadata for [1] indices and skipped [0] unchanged indices
2022.11.10 18:36:23 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=5}]: execute
2022.11.10 18:36:23 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [5], source [Publication{term=1, version=5}]
2022.11.10 18:36:23 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 5
2022.11.10 18:36:23 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 5
2022.11.10 18:36:23 DEBUG es[][o.e.i.m.MapperService] [[metadatas/Oth1bCT6T3iI8grVY9VGqw]] added mapping [metadata], source [{"metadata":{"dynamic":"false","properties":{"value":{"type":"keyword","index":false,"store":true,"norms":true}}}}]
2022.11.10 18:36:23 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 5
2022.11.10 18:36:23 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=5}]: took [202ms] done applying updated cluster state (version: 5, uuid: LizzNHVoSjmUsWQRQhfntQ)
2022.11.10 18:36:23 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=5}
2022.11.10 18:36:23 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 5, uuid: LizzNHVoSjmUsWQRQhfntQ) for [put-mapping [metadatas/Oth1bCT6T3iI8grVY9VGqw][metadata]]
2022.11.10 18:36:23 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:36:23 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:36:23 DEBUG es[][o.e.c.s.MasterService] took [89ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:36:28 DEBUG es[][o.e.m.j.JvmGcMonitorService] [gc][young][289][11] duration [566ms], collections [1]/[1s], total [566ms]/[2.9s], memory [98.7mb]->[57mb]/[512mb], all_pools {[young] [47mb]->[0b]/[0b]}{[old] [47.7mb]->[51mb]/[512mb]}{[survivor] [4mb]->[6mb]/[0b]}
2022.11.10 18:36:28 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][289] overhead, spent [566ms] collecting in the last [1s]
2022.11.10 18:36:30 DEBUG es[][r.suppressed] path: /components, params: {index=components}
org.elasticsearch.index.IndexNotFoundException: no such index [components]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:1250) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:1188) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:1144) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:292) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:270) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:92) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:53) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:24) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.checkBlockIfStateRecovered(TransportMasterNodeAction.java:138) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.access$000(TransportMasterNodeAction.java:52) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:185) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:158) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:52) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.action.RestCancellableNodeClient.doExecute(RestCancellableNodeClient.java:81) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1303) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getIndex(AbstractClient.java:1399) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction.lambda$prepareRequest$1(RestGetIndicesAction.java:86) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:109) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:327) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:393) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:245) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:382) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:461) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:357) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:35) [transport-netty4-client-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:19) [transport-netty4-client-7.17.5.jar:7.17.5]
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:48) [transport-netty4-client-7.17.5.jar:7.17.5]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at java.lang.Thread.run(Thread.java:829) [?:?]
2022.11.10 18:36:35 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [create-index [components], cause [api]]
2022.11.10 18:36:35 DEBUG es[][o.e.c.m.MetadataCreateIndexService] applying create index request using legacy templates []
2022.11.10 18:36:35 DEBUG es[][o.e.i.IndicesService] creating Index [[components/9Xew6hvQQhe2UCil8G2lug]], shards [5]/[0] - reason [CREATE_INDEX]
2022.11.10 18:36:35 INFO es[][o.e.c.m.MetadataCreateIndexService] [components] creating index, cause [api], templates [], shards [5]/[0]
2022.11.10 18:36:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:36:35 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:36:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:36:35 DEBUG es[][o.e.c.s.MasterService] took [708ms] to compute cluster state update for [create-index [components], cause [api]]
2022.11.10 18:36:35 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [6], source [create-index [components], cause [api]]
2022.11.10 18:36:35 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [6]
2022.11.10 18:36:36 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [6] with uuid [LERbV0HsTQ-PaOGq0tC57g], diff size [1166]
2022.11.10 18:36:36 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [412ms]; wrote global metadata [false] and metadata for [1] indices and skipped [1] unchanged indices
2022.11.10 18:36:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=6}]: execute
2022.11.10 18:36:36 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [6], source [Publication{term=1, version=6}]
2022.11.10 18:36:36 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 6
2022.11.10 18:36:36 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 6
2022.11.10 18:36:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[components/9Xew6hvQQhe2UCil8G2lug]] creating index
2022.11.10 18:36:36 DEBUG es[][o.e.i.IndicesService] creating Index [[components/9Xew6hvQQhe2UCil8G2lug]], shards [5]/[0] - reason [CREATE_INDEX]
2022.11.10 18:36:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][2] creating shard with primary term [1]
2022.11.10 18:36:36 DEBUG es[][o.e.i.IndexService] [components][2] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/9Xew6hvQQhe2UCil8G2lug/2, shard=[components][2]}]
2022.11.10 18:36:36 DEBUG es[][o.e.i.IndexService] creating shard_id [components][2]
2022.11.10 18:36:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:36:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:36:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:36:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][3] creating shard with primary term [1]
2022.11.10 18:36:36 DEBUG es[][o.e.i.IndexService] [components][3] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/9Xew6hvQQhe2UCil8G2lug/3, shard=[components][3]}]
2022.11.10 18:36:36 DEBUG es[][o.e.i.IndexService] creating shard_id [components][3]
2022.11.10 18:36:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:36:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:36:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:36:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:36:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][1] creating shard with primary term [1]
2022.11.10 18:36:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:36:37 DEBUG es[][o.e.i.IndexService] [components][1] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/9Xew6hvQQhe2UCil8G2lug/1, shard=[components][1]}]
2022.11.10 18:36:37 DEBUG es[][o.e.i.IndexService] creating shard_id [components][1]
2022.11.10 18:36:37 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:36:37 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:36:37 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:36:37 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][0] creating shard with primary term [1]
2022.11.10 18:36:37 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
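The block above fans [components] shards 0-3 through CREATED -> RECOVERING on a single node; the shard-started round trips that follow promote them to STARTED. The same transitions can be watched from outside with the _cat/shards API; a sketch follows, with the column list chosen here for readability rather than taken from the log.

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

// Hypothetical observer for the shard lifecycle logged above.
public class WatchComponentsShards {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("127.0.0.1", 9001, "http")).build()) {
            Request cat = new Request("GET", "/_cat/shards/components");
            cat.addParameter("v", "true");                      // print a header row
            cat.addParameter("h", "index,shard,prirep,state");  // one line per shard copy
            // Shows INITIALIZING while the shards above are still [RECOVERING],
            // then STARTED once the master acknowledges the shard-started entries below.
            System.out.println(EntityUtils.toString(client.performRequest(cat).getEntity()));
        }
    }
}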
2022.11.10 18:36:37 DEBUG es[][o.e.i.IndexService] [components][0] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/9Xew6hvQQhe2UCil8G2lug/0, shard=[components][0]}]
2022.11.10 18:36:37 DEBUG es[][o.e.i.IndexService] creating shard_id [components][0]
2022.11.10 18:36:37 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:36:37 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:36:38 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:36:38 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 6
2022.11.10 18:36:38 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=6}]: took [1.6s] done applying updated cluster state (version: 6, uuid: LERbV0HsTQ-PaOGq0tC57g)
2022.11.10 18:36:38 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=6}
2022.11.10 18:36:38 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:36:38 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:36:38 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 6, uuid: LERbV0HsTQ-PaOGq0tC57g) for [create-index [components], cause [api]]
2022.11.10 18:36:38 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:36:38 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:36:38 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:36:38 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=fTBQFRL6Qn-NaCcqhNycvQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=6_7Labn-TKCWsQbGO_qzSA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=fTBQFRL6Qn-NaCcqhNycvQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=6_7Labn-TKCWsQbGO_qzSA}]}]
2022.11.10 18:36:38 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:36:38 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.7s]
2022.11.10 18:36:38 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}]
2022.11.10 18:36:38 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] received shard started for [StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}]
2022.11.10 18:36:38 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:38 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] starting shard [components][2], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=-VK0FnMDTjaEUljy3ggIEQ], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:35.786Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}])
2022.11.10 18:36:38 DEBUG es[][o.e.c.s.MasterService] took [4ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:38 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [7], source [shard-started StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:38 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [7]
2022.11.10 18:36:38 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [7] with uuid [9XAQRWP6RyiQUcpaZjPl8A], diff size [1169]
2022.11.10 18:36:38 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=XQevKgR1Tv27ECMeqQkSsQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=G9M-DEoVScuAqLrLFIZh3Q}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=XQevKgR1Tv27ECMeqQkSsQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=G9M-DEoVScuAqLrLFIZh3Q}]}]
2022.11.10 18:36:38 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:36:38 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.6s]
2022.11.10 18:36:38 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}]
2022.11.10 18:36:38 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] received shard started for [StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}]
2022.11.10 18:36:38 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:36:38 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:36:39 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:36:39 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:36:39 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=xonhoB_oRRmOxcnk8wNnxg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Q3e0gCHyR9eomc2xUOR9oA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=xonhoB_oRRmOxcnk8wNnxg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Q3e0gCHyR9eomc2xUOR9oA}]}]
2022.11.10 18:36:39 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:36:39 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [2.5s]
2022.11.10 18:36:39 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}]
2022.11.10 18:36:39 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] received shard started for [StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}]
2022.11.10 18:36:39 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=mVnrc-PLSXOtN_Mmni9nRw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=xqvvGK-GTRasKYq4ma8wEw}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=mVnrc-PLSXOtN_Mmni9nRw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=xqvvGK-GTRasKYq4ma8wEw}]}]
2022.11.10 18:36:40 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [1861ms]; wrote global metadata [false] and metadata for [1] indices and skipped [1] unchanged indices
2022.11.10 18:36:40 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=7}]: execute
2022.11.10 18:36:40 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [7], source [Publication{term=1, version=7}]
2022.11.10 18:36:40 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 7
2022.11.10 18:36:40 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 7
2022.11.10 18:36:40 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:36:40 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [3.2s]
2022.11.10 18:36:40 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}]
2022.11.10 18:36:40 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] received shard started for [StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}]
2022.11.10 18:36:40 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:36:40 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] received shard started for [StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:36:40 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:36:40 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] received shard started for [StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:36:40 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:36:40 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:36:40 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:36:40 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] received shard started for [StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:36:40 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 7
2022.11.10 18:36:40 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=7}]: took [250ms] done applying updated cluster state (version: 7, uuid: 9XAQRWP6RyiQUcpaZjPl8A)
2022.11.10 18:36:40 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=7}
2022.11.10 18:36:40 DEBUG es[][o.e.c.s.MasterService] took [2ms] to notify listeners on successful publication of cluster state (version: 7, uuid: 9XAQRWP6RyiQUcpaZjPl8A) for [shard-started StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][2]], allocationId [-VK0FnMDTjaEUljy3ggIEQ], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:41 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:41 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] starting shard [components][3], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=oLn8WP7oQ8mWkvafYlPtsg], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:35.786Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}])
2022.11.10 18:36:41 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] starting shard [components][0], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=5f_0Y8CwS4u--DZodFMhbQ], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:35.786Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}])
2022.11.10 18:36:41 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] starting shard [components][1], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=YFv3sAwTRmWCItoFhja8Eg], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:35.786Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}])
2022.11.10 18:36:41 DEBUG es[][o.e.c.s.MasterService] took [644ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:41 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [8], source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:36:41 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [8]
2022.11.10 18:36:41 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=2, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=4, timestamp=1668101801781, source='peer recovery'}}}]
2022.11.10 18:36:42 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [8] with uuid [vtHQx2y5QQyllqh7g99IVQ], diff size [1168]
2022.11.10 18:36:42 DEBUG es[][o.e.m.j.JvmGcMonitorService] [gc][young][302][12] duration [626ms], collections [1]/[1.5s], total [626ms]/[3.5s], memory [75mb]->[57.5mb]/[512mb], all_pools {[young] [18mb]->[0b]/[0b]}{[old] [51mb]->[56.5mb]/[512mb]}{[survivor] [6mb]->[1mb]/[0b]}
2022.11.10 18:36:42 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][302] overhead, spent [626ms] collecting in the last [1.5s]
2022.11.10 18:36:43 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [1788ms]; wrote global metadata [false] and metadata for [1] indices and skipped [1] unchanged indices
2022.11.10 18:36:43 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=8}]: execute
2022.11.10 18:36:43 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [8], source [Publication{term=1, version=8}]
2022.11.10 18:36:43 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 8
2022.11.10 18:36:43 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 8
2022.11.10 18:36:43 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:36:43 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:36:43 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:36:43 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:36:43 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:36:43 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:36:43 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 8
2022.11.10 18:36:44 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=8}]: took [210ms] done applying
updated cluster state (version: 8, uuid: vtHQx2y5QQyllqh7g99IVQ) 2022.11.10 18:36:44 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=8} 2022.11.10 18:36:44 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 8, uuid: vtHQx2y5QQyllqh7g99IVQ) for [shard-started StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][1]], allocationId [YFv3sAwTRmWCItoFhja8Eg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [5f_0Y8CwS4u--DZodFMhbQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][3]], allocationId [oLn8WP7oQ8mWkvafYlPtsg], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:36:44 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:36:44 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 
18:36:44 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [9], source [cluster_reroute(reroute after starting shards)] 2022.11.10 18:36:44 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [9] 2022.11.10 18:36:44 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [9] with uuid [KRGUpUlrQwmLXL_1jlVovQ], diff size [1171] 2022.11.10 18:36:44 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [424ms]; wrote global metadata [false] and metadata for [1] indices and skipped [1] unchanged indices 2022.11.10 18:36:44 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=9}]: execute 2022.11.10 18:36:44 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [9], source [Publication{term=1, version=9}] 2022.11.10 18:36:44 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 9 2022.11.10 18:36:44 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 9 2022.11.10 18:36:44 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][4] creating shard with primary term [1] 2022.11.10 18:36:44 DEBUG es[][o.e.i.IndexService] [components][4] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/9Xew6hvQQhe2UCil8G2lug/4, shard=[components][4]}] 2022.11.10 18:36:44 DEBUG es[][o.e.i.IndexService] creating shard_id [components][4] 2022.11.10 18:36:44 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:36:44 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:36:44 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:36:44 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 9 2022.11.10 18:36:44 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=9}]: took [218ms] done applying updated cluster state (version: 9, uuid: KRGUpUlrQwmLXL_1jlVovQ) 2022.11.10 18:36:44 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=9} 2022.11.10 18:36:44 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
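[annotation] The entries above walk shard [components][4] through the store-recovery state machine (CREATED -> RECOVERING -> POST_RECOVERY -> STARTED). The same progress is exposed through the indices recovery API; a minimal observer sketch, assuming the node's HTTP endpoint from this log (127.0.0.1:9001) and the official elasticsearch-py 7.x client:

    # Inspect per-shard store recovery, the API view of the IndexShard
    # state transitions logged above. Endpoint and index name come from
    # this log; client calls and response fields are standard ES 7.x.
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://127.0.0.1:9001"])

    for shard in es.indices.recovery(index="components")["components"]["shards"]:
        # "type" is EMPTY_STORE for a freshly created shard ("new shard
        # recovery" in the log); "stage" ends at DONE once recovery completes.
        print(shard["id"], shard["type"], shard["stage"],
              shard.get("total_time_in_millis"))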
2022.11.10 18:36:44 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 9, uuid: KRGUpUlrQwmLXL_1jlVovQ) for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:36:45 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:36:45 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:36:45 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=qIcHB1PQQ7GlRGeqjPLLjg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=y75mJQ_yQy255SAJ_WX-4Q}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=qIcHB1PQQ7GlRGeqjPLLjg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=y75mJQ_yQy255SAJ_WX-4Q}]}] 2022.11.10 18:36:45 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:36:45 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.3s] 2022.11.10 18:36:45 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}] 2022.11.10 18:36:45 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] received shard started for [StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}] 2022.11.10 18:36:45 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:36:46 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] starting shard [components][4], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=n-PxEEq7RA6Oy8ULFDn1YA], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:35.786Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}]) 2022.11.10 18:36:46 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[components][4]]]). 
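[annotation] The YELLOW -> GREEN transition above is what a client typically waits on after creating an index. A sketch of that wait, assuming the same endpoint and the elasticsearch-py 7.x client; passing wait_for_events="languid" is what produces the "cluster_health (wait_for_events [LANGUID])" pending-task entries that appear further down in this log:

    # Block until the cluster reports GREEN, the transition just logged.
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://127.0.0.1:9001"])

    health = es.cluster.health(wait_for_status="green",
                               wait_for_events="languid",
                               timeout="30s")
    print(health["status"], health["active_shards"])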
2022.11.10 18:36:46 DEBUG es[][o.e.c.s.MasterService] took [99ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:36:46 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [10], source [shard-started StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:36:46 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [10] 2022.11.10 18:36:50 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [10] with uuid [RVZjldgHTqyjk-BXSci68w], diff size [1157] 2022.11.10 18:36:52 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [1473ms]; wrote global metadata [false] and metadata for [1] indices and skipped [1] unchanged indices 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=10}]: execute 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [10], source [Publication{term=1, version=10}] 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 10 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 10 2022.11.10 18:36:52 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:36:52 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}] 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 10 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=10}]: took [200ms] done applying updated cluster state (version: 10, uuid: RVZjldgHTqyjk-BXSci68w) 2022.11.10 18:36:52 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=10} 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 10, uuid: RVZjldgHTqyjk-BXSci68w) for [shard-started StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[components][4]], allocationId [n-PxEEq7RA6Oy8ULFDn1YA], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.MasterService] took [17ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.MasterService] 
executing cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:36:52 DEBUG es[][o.e.c.s.MasterService] took [6ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:36:53 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [put-mapping [components/9Xew6hvQQhe2UCil8G2lug][auth]] 2022.11.10 18:36:54 DEBUG es[][o.e.c.m.MetadataMappingService] [components/9Xew6hvQQhe2UCil8G2lug] create_mapping [auth] with source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"indexType":{"type":"keyword","doc_values":false},"join_components":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"component"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"name":{"type":"text","store":true,"fields":{"search_grams_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"search_prefix_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_analyzer","search_analyzer":"search_prefix_analyzer"},"search_prefix_case_insensitive_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_case_insensitive_analyzer","search_analyzer":"search_prefix_case_insensitive_analyzer"},"sortable_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"sortable_analyzer","fielddata":true}},"term_vector":"with_positions_offsets","norms":false,"fielddata":true},"project_uuid":{"type":"keyword"},"qualifier":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}] 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.MasterService] took [961ms] to compute cluster state update for [put-mapping [components/9Xew6hvQQhe2UCil8G2lug][auth]] 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [11], source [put-mapping [components/9Xew6hvQQhe2UCil8G2lug][auth]] 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [11] 2022.11.10 18:36:54 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [11] with uuid [nAR_kz4fTTyX9dsi3H7WLA], diff size [1556] 2022.11.10 18:36:54 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [1] indices and skipped [1] unchanged indices 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=11}]: execute 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [11], source [Publication{term=1, version=11}] 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 11 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 11 2022.11.10 18:36:54 DEBUG es[][o.e.i.m.MapperService] [[components/9Xew6hvQQhe2UCil8G2lug]] added mapping [auth], source 
[{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"indexType":{"type":"keyword","doc_values":false},"join_components":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"component"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"name":{"type":"text","store":true,"fields":{"search_grams_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"search_prefix_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_analyzer","search_analyzer":"search_prefix_analyzer"},"search_prefix_case_insensitive_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_case_insensitive_analyzer","search_analyzer":"search_prefix_case_insensitive_analyzer"},"sortable_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"sortable_analyzer","fielddata":true}},"term_vector":"with_positions_offsets","norms":false,"fielddata":true},"project_uuid":{"type":"keyword"},"qualifier":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}] 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 11 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=11}]: took [0s] done applying updated cluster state (version: 11, uuid: nAR_kz4fTTyX9dsi3H7WLA) 2022.11.10 18:36:54 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=11} 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 11, uuid: nAR_kz4fTTyX9dsi3H7WLA) for [put-mapping [components/9Xew6hvQQhe2UCil8G2lug][auth]] 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:36:54 DEBUG es[][o.e.c.s.MasterService] took [2ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:36:54 DEBUG es[][r.suppressed] path: /projectmeasures, params: {index=projectmeasures} org.elasticsearch.index.IndexNotFoundException: no such index [projectmeasures] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:1250) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:1188) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:1144) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:292) ~[elasticsearch-7.17.5.jar:7.17.5] at 
org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:270) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:92) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:53) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:24) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.action.support.master.TransportMasterNodeAction.checkBlockIfStateRecovered(TransportMasterNodeAction.java:138) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.action.support.master.TransportMasterNodeAction.access$000(TransportMasterNodeAction.java:52) ~[elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:185) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:158) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:52) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.rest.action.RestCancellableNodeClient.doExecute(RestCancellableNodeClient.java:81) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1303) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getIndex(AbstractClient.java:1399) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction.lambda$prepareRequest$1(RestGetIndicesAction.java:86) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:109) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:327) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:393) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:245) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:382) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:461) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:357) [elasticsearch-7.17.5.jar:7.17.5]
at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:35) [transport-netty4-client-7.17.5.jar:7.17.5]
at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:19) [transport-netty4-client-7.17.5.jar:7.17.5]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:48) [transport-netty4-client-7.17.5.jar:7.17.5]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.66.Final.jar:4.1.66.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.66.Final.jar:4.1.66.Final] at java.lang.Thread.run(Thread.java:829) [?:?] 2022.11.10 18:36:59 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [create-index [projectmeasures], cause [api]] 2022.11.10 18:36:59 DEBUG es[][o.e.c.m.MetadataCreateIndexService] applying create index request using legacy templates [] 2022.11.10 18:36:59 DEBUG es[][o.e.i.IndicesService] creating Index [[projectmeasures/WuM0xXvfS_elTVNxiAG9XA]], shards [5]/[0] - reason [CREATE_INDEX] 2022.11.10 18:36:59 INFO es[][o.e.c.m.MetadataCreateIndexService] [projectmeasures] creating index, cause [api], templates [], shards [5]/[0] 2022.11.10 18:36:59 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:36:59 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 18:36:59 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:36:59 DEBUG es[][o.e.c.s.MasterService] took [276ms] to compute cluster state update for [create-index [projectmeasures], cause [api]] 2022.11.10 18:36:59 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [12], source [create-index [projectmeasures], cause [api]] 2022.11.10 18:36:59 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [12] 2022.11.10 18:36:59 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [12] with uuid [WFNiHrgiRv2YRz1s5E9uaA], diff size [1175] 2022.11.10 18:37:00 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [255ms]; wrote global metadata [false] and metadata for [1] indices and skipped [2] unchanged indices 2022.11.10 18:37:00 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=12}]: execute 2022.11.10 18:37:00 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [12], source [Publication{term=1, version=12}] 2022.11.10 18:37:00 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 12 2022.11.10 18:37:00 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 12 2022.11.10 18:37:00 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[projectmeasures/WuM0xXvfS_elTVNxiAG9XA]] creating index 2022.11.10 18:37:00 DEBUG es[][o.e.i.IndicesService] creating Index [[projectmeasures/WuM0xXvfS_elTVNxiAG9XA]], shards [5]/[0] - reason [CREATE_INDEX] 2022.11.10 18:37:00 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][2] creating shard with primary term [1] 2022.11.10 18:37:00 DEBUG es[][o.e.i.IndexService] [projectmeasures][2] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/WuM0xXvfS_elTVNxiAG9XA/2, shard=[projectmeasures][2]}] 2022.11.10 18:37:00 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][2] 2022.11.10 18:37:00 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:37:00 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:37:00 DEBUG es[][o.e.i.s.IndexShard] state: 
[CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:37:00 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:37:00 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][3] creating shard with primary term [1] 2022.11.10 18:37:00 DEBUG es[][o.e.i.IndexService] [projectmeasures][3] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/WuM0xXvfS_elTVNxiAG9XA/3, shard=[projectmeasures][3]}] 2022.11.10 18:37:00 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][3] 2022.11.10 18:37:00 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:37:00 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:37:00 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:00 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:00 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:37:00 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][1] creating shard with primary term [1] 2022.11.10 18:37:00 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:37:00 DEBUG es[][o.e.i.IndexService] [projectmeasures][1] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/WuM0xXvfS_elTVNxiAG9XA/1, shard=[projectmeasures][1]}] 2022.11.10 18:37:00 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][1] 2022.11.10 18:37:00 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:37:00 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=eeSzvJENR6aHFva991frsw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=f-8-pYjkR-egkEb73GMQUw}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=eeSzvJENR6aHFva991frsw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=f-8-pYjkR-egkEb73GMQUw}]}] 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [775ms] 2022.11.10 18:37:01 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:01 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] received shard started for [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:37:01 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:01 DEBUG es[][o.e.i.t.Translog] recovered local 
translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:37:01 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][0] creating shard with primary term [1] 2022.11.10 18:37:01 DEBUG es[][o.e.i.IndexService] [projectmeasures][0] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/WuM0xXvfS_elTVNxiAG9XA/0, shard=[projectmeasures][0]}] 2022.11.10 18:37:01 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][0] 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:37:01 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=-RE1nWDbTjeQZHsAkGdtcg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=yIfVX2b_RcmabARRvGzLeA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=-RE1nWDbTjeQZHsAkGdtcg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=yIfVX2b_RcmabARRvGzLeA}]}] 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [991ms] 2022.11.10 18:37:01 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:01 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] received shard started for [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:37:01 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
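[annotation] The suppressed IndexNotFoundException above came from a GET on /projectmeasures issued before the index existed; the client then created it with 5 primaries and 0 replicas ("shards [5]/[0]" in the create-index entry). A sketch of that probe-then-create pattern, assuming the same endpoint and the elasticsearch-py 7.x client (SonarQube itself does the equivalent from Java; this version is illustrative only, and the real mapping is the much larger one shown in the put-mapping entries):

    # Probe-then-create: the client-side pattern behind the suppressed
    # IndexNotFoundException followed by "creating index, cause [api]".
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://127.0.0.1:9001"])

    if not es.indices.exists(index="projectmeasures"):
        # 5 primaries / 0 replicas, matching "shards [5]/[0]" in the log.
        es.indices.create(index="projectmeasures",
                          body={"settings": {"number_of_shards": 5,
                                             "number_of_replicas": 0}})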
2022.11.10 18:37:01 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 12 2022.11.10 18:37:01 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=12}]: took [1.5s] done applying updated cluster state (version: 12, uuid: WFNiHrgiRv2YRz1s5E9uaA) 2022.11.10 18:37:01 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=12} 2022.11.10 18:37:01 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 12, uuid: WFNiHrgiRv2YRz1s5E9uaA) for [create-index [projectmeasures], cause [api]] 2022.11.10 18:37:01 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:01 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] starting shard [projectmeasures][2], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=CIwaLPyTTeSnxvfsyhdOgw], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:59.579Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}]) 2022.11.10 18:37:01 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] starting shard [projectmeasures][3], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=M3AOmI0HRUCJEbqGQOsdkQ], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:59.579Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}]) 2022.11.10 18:37:01 DEBUG es[][o.e.c.s.MasterService] took [5ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:01 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [13], source [shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId 
[[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:01 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [13] 2022.11.10 18:37:01 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [13] with uuid [laZrNcn_Qjy3C7EA7cyHkg], diff size [1179] 2022.11.10 18:37:01 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:01 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:02 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=s3yk1HmvT6ysfE3X6Nr9lg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=DE7ae9bcR_Gplq-fMLuCvA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=s3yk1HmvT6ysfE3X6Nr9lg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=DE7ae9bcR_Gplq-fMLuCvA}]}] 2022.11.10 18:37:02 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:37:02 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.5s] 2022.11.10 18:37:02 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:02 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] received shard started for [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:02 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [640ms]; wrote global metadata [false] and metadata for [1] indices and skipped [2] unchanged indices 2022.11.10 18:37:02 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=13}]: execute 2022.11.10 18:37:02 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [13], source [Publication{term=1, version=13}] 2022.11.10 18:37:02 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 13 2022.11.10 18:37:02 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 13 2022.11.10 18:37:02 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:37:02 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] received shard started for [StartedShardEntry{shardId [[projectmeasures][1]], allocationId 
[23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:02 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:02 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:02 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:02 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:02 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 13
2022.11.10 18:37:02 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=13}]: took [0s] done applying updated cluster state (version: 13, uuid: laZrNcn_Qjy3C7EA7cyHkg)
2022.11.10 18:37:02 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=13}
2022.11.10 18:37:02 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 13, uuid: laZrNcn_Qjy3C7EA7cyHkg) for [shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [M3AOmI0HRUCJEbqGQOsdkQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [CIwaLPyTTeSnxvfsyhdOgw], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:02 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:02 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] starting shard [projectmeasures][1], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=23tmdgYBTdCPZP6N_xmAqA], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:59.579Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:02 DEBUG es[][o.e.c.s.MasterService] took [11ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:02 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [14], source [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:02 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [14]
2022.11.10 18:37:02 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:02 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [14] with uuid [kxzvkWItS2mCRMaVw5XprA], diff size [1173]
2022.11.10 18:37:02 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:02 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=J0pPFX-bSgqZMGVb2eSf3w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=n1xxAcqwSyej0fYxCXrFtg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=J0pPFX-bSgqZMGVb2eSf3w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=n1xxAcqwSyej0fYxCXrFtg}]}]
2022.11.10 18:37:02 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:37:02 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.4s]
2022.11.10 18:37:02 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:02 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] received shard started for [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:02 DEBUG es[][o.e.m.j.JvmGcMonitorService] [gc][319] overhead, spent [123ms] collecting in the last [1s]
2022.11.10 18:37:03 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [1186ms]; wrote global metadata [false] and metadata for [1] indices and skipped [2] unchanged indices
2022.11.10 18:37:03 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=14}]: execute
2022.11.10 18:37:03 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [14], source [Publication{term=1, version=14}]
2022.11.10 18:37:03 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 14
2022.11.10 18:37:03 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 14
2022.11.10 18:37:05 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:05 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:05 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:05 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] received shard started for [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 14
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=14}]: took [1.5s] done applying updated cluster state (version: 14, uuid: kxzvkWItS2mCRMaVw5XprA)
2022.11.10 18:37:05 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=14}
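
(The handshake above, in which a shard reports [internal:cluster/shard/started] and the master publishes a new cluster state that flips it to [STARTED], is what the _cluster/health endpoint summarizes from outside the node. A minimal sketch for watching it, assuming the HTTP publish_address 127.0.0.1:9001 bound at startup is reachable; Python standard library only:)

    import json
    import urllib.request

    ES = "http://127.0.0.1:9001"  # HTTP publish_address from this node's startup log

    def cluster_health(wait_for="green", timeout="30s"):
        # The server holds the request until the requested status is reached
        # or the timeout elapses (then the response carries "timed_out": true).
        url = f"{ES}/_cluster/health?wait_for_status={wait_for}&timeout={timeout}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    h = cluster_health()
    print(h["status"], "active:", h["active_shards"], "initializing:", h["initializing_shards"])
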
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 14, uuid: kxzvkWItS2mCRMaVw5XprA) for [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [23tmdgYBTdCPZP6N_xmAqA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:37:05 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] starting shard [projectmeasures][0], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=GMRWIDwGRF65UXw29B0Peg], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:59.579Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.MasterService] took [4ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [15], source [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [15]
2022.11.10 18:37:05 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [15] with uuid [HIKu_y8TS8-98SkjAwWrxQ], diff size [1164]
2022.11.10 18:37:05 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [1] indices and skipped [2] unchanged indices
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=15}]: execute
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [15], source [Publication{term=1, version=15}]
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 15
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 15
2022.11.10 18:37:05 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:05 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 15
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=15}]: took [0s] done applying updated cluster state (version: 15, uuid: HIKu_y8TS8-98SkjAwWrxQ)
2022.11.10 18:37:05 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=15}
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 15, uuid: HIKu_y8TS8-98SkjAwWrxQ) for [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [GMRWIDwGRF65UXw29B0Peg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [16], source [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:05 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [16]
2022.11.10 18:37:05 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [16] with uuid [Q1HPxozLTmWNlyF8-TyXNw], diff size [1183]
2022.11.10 18:37:05 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [229ms]; wrote global metadata [false] and metadata for [1] indices and skipped [2] unchanged indices
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=16}]: execute
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [16], source [Publication{term=1, version=16}]
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 16
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 16
2022.11.10 18:37:06 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][4] creating shard with primary term [1]
2022.11.10 18:37:06 DEBUG es[][o.e.i.IndexService] [projectmeasures][4] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/WuM0xXvfS_elTVNxiAG9XA/4, shard=[projectmeasures][4]}]
2022.11.10 18:37:06 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][4]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
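
(The per-shard lifecycle logged here, [CREATED]->[RECOVERING]->[POST_RECOVERY]->[STARTED], surfaces in the allocation table as INITIALIZING and then STARTED routing states. A sketch of listing them mid-startup, under the same assumption about the node's HTTP port:)

    import json
    import urllib.request

    ES = "http://127.0.0.1:9001"  # HTTP publish_address of this node

    # _cat/shards reports one row per shard copy with its routing state
    url = ES + "/_cat/shards/projectmeasures?format=json&h=index,shard,prirep,state,node"
    with urllib.request.urlopen(url) as resp:
        for row in json.load(resp):
            # e.g. projectmeasures 4 p INITIALIZING sonarqube
            print(row["index"], row["shard"], row["prirep"], row["state"], row["node"])
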
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 16
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=16}]: took [0s] done applying updated cluster state (version: 16, uuid: Q1HPxozLTmWNlyF8-TyXNw)
2022.11.10 18:37:06 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=16}
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 16, uuid: Q1HPxozLTmWNlyF8-TyXNw) for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:06 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:06 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:06 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=JqJUXNLzT9qeKhd2nuVZfg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=z347GvR1SJifBGzplyF1eA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=JqJUXNLzT9qeKhd2nuVZfg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=z347GvR1SJifBGzplyF1eA}]}]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [600ms]
2022.11.10 18:37:06 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:06 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] received shard started for [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:06 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] starting shard [projectmeasures][4], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=4XL2dF06QsCgmLSMRlluMA], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:36:59.579Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:06 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[projectmeasures][4]]]).
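
(Each "recovery completed from [shard_store], took [...]" line above also leaves a row in the recovery API, including the recovery type, stage, and the same elapsed time. A hedged sketch; the column names are _cat/recovery header aliases:)

    import json
    import urllib.request

    ES = "http://127.0.0.1:9001"

    # One row per shard recovery; 'type' is e.g. EMPTY_STORE for a brand-new shard,
    # 'stage' ends at DONE, 'time' matches the durations logged above.
    url = ES + "/_cat/recovery/projectmeasures?format=json&h=index,shard,type,stage,time"
    with urllib.request.urlopen(url) as resp:
        for row in json.load(resp):
            print(row["index"], row["shard"], row["type"], row["stage"], row["time"])
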
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [17], source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [17]
2022.11.10 18:37:06 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [17] with uuid [qgQ4QBe-Ruifgzs05U_Hjw], diff size [1162]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}]
2022.11.10 18:37:06 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [527ms]; wrote global metadata [false] and metadata for [1] indices and skipped [2] unchanged indices
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=17}]: execute
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [17], source [Publication{term=1, version=17}]
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 17
2022.11.10 18:37:06 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 17
2022.11.10 18:37:06 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:07 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:07 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 17
2022.11.10 18:37:07 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=17}]: took [0s] done applying updated cluster state (version: 17, uuid: qgQ4QBe-Ruifgzs05U_Hjw)
2022.11.10 18:37:07 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=17}
2022.11.10 18:37:07 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 17, uuid: qgQ4QBe-Ruifgzs05U_Hjw) for [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [4XL2dF06QsCgmLSMRlluMA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:07 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:07 DEBUG es[][o.e.c.s.MasterService] took [100ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:07 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:07 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:07 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:07 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:07 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [put-mapping [projectmeasures/WuM0xXvfS_elTVNxiAG9XA][auth]]
2022.11.10 18:37:08 DEBUG es[][o.e.c.m.MetadataMappingService] [projectmeasures/WuM0xXvfS_elTVNxiAG9XA] create_mapping [auth] with source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"analysedAt":{"type":"date","format":"date_time||epoch_second"},"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"indexType":{"type":"keyword","doc_values":false},"join_projectmeasures":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"projectmeasure"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"languages":{"type":"keyword","norms":true},"measures":{"type":"nested","properties":{"key":{"type":"keyword"},"value":{"type":"double"}}},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"nclocLanguageDistribution":{"type":"nested","properties":{"language":{"type":"keyword"},"ncloc":{"type":"integer"}}},"qualifier":{"type":"keyword"},"qualityGateStatus":{"type":"keyword","norms":true},"tags":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}]
2022.11.10 18:37:08 DEBUG es[][o.e.c.s.MasterService] took [468ms] to compute cluster state update for [put-mapping [projectmeasures/WuM0xXvfS_elTVNxiAG9XA][auth]]
2022.11.10 18:37:08 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [18], source [put-mapping [projectmeasures/WuM0xXvfS_elTVNxiAG9XA][auth]]
2022.11.10 18:37:08 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [18]
2022.11.10 18:37:08 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [18] with uuid [BG3ZSnLaT8GK8mVw29BSAQ], diff size [1582]
2022.11.10 18:37:08 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [520ms]; wrote global metadata [false] and metadata for [1] indices and skipped [2] unchanged indices
2022.11.10 18:37:08 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=18}]: execute
2022.11.10 18:37:08 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [18], source [Publication{term=1, version=18}]
2022.11.10 18:37:08 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 18
2022.11.10 18:37:08 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 18
2022.11.10 18:37:08 DEBUG es[][o.e.i.m.MapperService] [[projectmeasures/WuM0xXvfS_elTVNxiAG9XA]] added mapping [auth], source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"analysedAt":{"type":"date","format":"date_time||epoch_second"},"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"indexType":{"type":"keyword","doc_values":false},"join_projectmeasures":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"projectmeasure"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"languages":{"type":"keyword","norms":true},"measures":{"type":"nested","properties":{"key":{"type":"keyword"},"value":{"type":"double"}}},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"nclocLanguageDistribution":{"type":"nested","properties":{"language":{"type":"keyword"},"ncloc":{"type":"integer"}}},"qualifier":{"type":"keyword"},"qualityGateStatus":{"type":"keyword","norms":true},"tags":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}]
2022.11.10 18:37:08 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 18
2022.11.10 18:37:08 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=18}]: took [215ms] done applying updated cluster state (version: 18, uuid: BG3ZSnLaT8GK8mVw29BSAQ)
2022.11.10 18:37:08 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=18}
2022.11.10 18:37:08 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 18, uuid: BG3ZSnLaT8GK8mVw29BSAQ) for [put-mapping [projectmeasures/WuM0xXvfS_elTVNxiAG9XA][auth]]
2022.11.10 18:37:09 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:09 DEBUG es[][o.e.c.s.MasterService] took [1ms] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:09 DEBUG es[][o.e.c.s.MasterService] took [25ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:09 DEBUG es[][r.suppressed] path: /rules, params: {index=rules}
org.elasticsearch.index.IndexNotFoundException: no such index [rules]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:1250) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:1188) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:1144) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:292) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:270) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:92) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:53) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:24) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.checkBlockIfStateRecovered(TransportMasterNodeAction.java:138) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.access$000(TransportMasterNodeAction.java:52) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:185) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:158) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:52) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.action.RestCancellableNodeClient.doExecute(RestCancellableNodeClient.java:81) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1303) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getIndex(AbstractClient.java:1399) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction.lambda$prepareRequest$1(RestGetIndicesAction.java:86) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:109) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:327) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:393) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:245) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:382) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:461) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:357) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:35) [transport-netty4-client-7.17.5.jar:7.17.5]
    at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:19) [transport-netty4-client-7.17.5.jar:7.17.5]
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:48) [transport-netty4-client-7.17.5.jar:7.17.5]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.66.Final.jar:4.1.66.Final]
    at java.lang.Thread.run(Thread.java:829) [?:?]
2022.11.10 18:37:11 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [create-index [rules], cause [api]]
2022.11.10 18:37:11 DEBUG es[][o.e.c.m.MetadataCreateIndexService] applying create index request using legacy templates []
2022.11.10 18:37:11 DEBUG es[][o.e.i.IndicesService] creating Index [[rules/XKhsuPkESGGZvj9wjlKlBg]], shards [2]/[0] - reason [CREATE_INDEX]
2022.11.10 18:37:11 INFO es[][o.e.c.m.MetadataCreateIndexService] [rules] creating index, cause [api], templates [], shards [2]/[0]
2022.11.10 18:37:11 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:37:11 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:37:11 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:37:11 DEBUG es[][o.e.c.s.MasterService] took [380ms] to compute cluster state update for [create-index [rules], cause [api]]
2022.11.10 18:37:11 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [19], source [create-index [rules], cause [api]]
2022.11.10 18:37:11 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [19]
2022.11.10 18:37:11 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [19] with uuid [OkLD83q1TdKVkZMujn2VJg], diff size [1096]
2022.11.10 18:37:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=3, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=10, timestamp=1668101831892, source='peer recovery'}}}]
2022.11.10 18:37:12 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [313ms]; wrote global metadata [false] and metadata for [1] indices and skipped [3] unchanged indices
2022.11.10 18:37:12 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=19}]: execute
2022.11.10 18:37:12 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [19], source [Publication{term=1, version=19}]
2022.11.10 18:37:12 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 19
2022.11.10 18:37:12 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 19
2022.11.10 18:37:12 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[rules/XKhsuPkESGGZvj9wjlKlBg]] creating index
2022.11.10 18:37:12 DEBUG es[][o.e.i.IndicesService] creating Index [[rules/XKhsuPkESGGZvj9wjlKlBg]], shards [2]/[0] - reason [CREATE_INDEX]
2022.11.10 18:37:12 DEBUG es[][o.e.i.c.IndicesClusterStateService] [rules][1] creating shard with primary term [1]
2022.11.10 18:37:12 DEBUG es[][o.e.i.IndexService] [rules][1] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/XKhsuPkESGGZvj9wjlKlBg/1, shard=[rules][1]}]
2022.11.10 18:37:12 DEBUG es[][o.e.i.IndexService] creating shard_id [rules][1]
2022.11.10 18:37:12 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:37:12 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:37:13 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:37:13 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
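
(The r.suppressed trace above is expected on a fresh instance: the GET of /rules arrives before the index exists, fails with IndexNotFoundException (an HTTP 404), and the client reacts by creating the index, which is exactly the create-index [rules] update with shards [2]/[0] that follows. The same check-then-create pattern against the HTTP API, sketched with an illustrative helper; ensure_index is not part of any real client:)

    import json
    import urllib.error
    import urllib.request

    ES = "http://127.0.0.1:9001"

    def ensure_index(name, shards, replicas):
        # GET on a missing index raises HTTP 404 (index_not_found_exception,
        # as in the suppressed trace above); anything else is a real error.
        try:
            urllib.request.urlopen(f"{ES}/{name}")
            return  # index already exists
        except urllib.error.HTTPError as err:
            if err.code != 404:
                raise
        body = json.dumps({"settings": {"number_of_shards": shards,
                                        "number_of_replicas": replicas}}).encode()
        req = urllib.request.Request(f"{ES}/{name}", data=body, method="PUT",
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    ensure_index("rules", shards=2, replicas=0)  # [2]/[0], as logged for [rules]
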
2022.11.10 18:37:13 DEBUG es[][o.e.i.c.IndicesClusterStateService] [rules][0] creating shard with primary term [1]
2022.11.10 18:37:13 DEBUG es[][o.e.i.IndexService] [rules][0] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/XKhsuPkESGGZvj9wjlKlBg/0, shard=[rules][0]}]
2022.11.10 18:37:13 DEBUG es[][o.e.i.IndexService] creating shard_id [rules][0]
2022.11.10 18:37:13 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:37:13 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:37:13 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:37:13 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:37:13 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 19
2022.11.10 18:37:13 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=19}]: took [1.1s] done applying updated cluster state (version: 19, uuid: OkLD83q1TdKVkZMujn2VJg)
2022.11.10 18:37:13 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=19}
2022.11.10 18:37:13 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 19, uuid: OkLD83q1TdKVkZMujn2VJg) for [create-index [rules], cause [api]]
2022.11.10 18:37:13 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:13 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:13 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:13 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=hCFHK9IFQ12pd9-_JYcffg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=cVifa_TFQ4SrUqkHSpk6ww}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=hCFHK9IFQ12pd9-_JYcffg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=cVifa_TFQ4SrUqkHSpk6ww}]}]
2022.11.10 18:37:13 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:13 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:37:13 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [921ms]
2022.11.10 18:37:13 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:13 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] received shard started for [StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:13 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:13 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] starting shard [rules][0], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=3rFVn1gTRFapcNOJ52u-8w], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:11.729Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.MasterService] took [112ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [20], source [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [20]
2022.11.10 18:37:14 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [20] with uuid [-RoLrI5pQ9Ge7e1ri4prBw], diff size [1102]
2022.11.10 18:37:14 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=B2zJgVIcQruDAYkj81hung, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=QcV-qxo9TiOBWdpP_ld1Zg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=B2zJgVIcQruDAYkj81hung, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=QcV-qxo9TiOBWdpP_ld1Zg}]}]
2022.11.10 18:37:14 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:37:14 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.6s]
2022.11.10 18:37:14 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:14 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] received shard started for [StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:14 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [401ms]; wrote global metadata [false] and metadata for [1] indices and skipped [3] unchanged indices
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=20}]: execute
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [20], source [Publication{term=1, version=20}]
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 20
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 20
2022.11.10 18:37:14 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:14 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] received shard started for [StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:14 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:14 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834305, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 20
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=20}]: took [0s] done applying updated cluster state (version: 20, uuid: -RoLrI5pQ9Ge7e1ri4prBw)
2022.11.10 18:37:14 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=20}
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 20, uuid: -RoLrI5pQ9Ge7e1ri4prBw) for [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[rules][0]], allocationId [3rFVn1gTRFapcNOJ52u-8w], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:14 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] starting shard [rules][1], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=xIC7oVNyTvWRavizNfvThQ], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:11.729Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:14 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[rules][1]]]).
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.MasterService] took [171ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [21], source [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:14 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [21]
2022.11.10 18:37:14 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [21] with uuid [zvRYSZvJToaNjriaDIYaDA], diff size [1078]
2022.11.10 18:37:15 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [402ms]; wrote global metadata [false] and metadata for [1] indices and skipped [3] unchanged indices
2022.11.10 18:37:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=21}]: execute
2022.11.10 18:37:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [21], source [Publication{term=1, version=21}]
2022.11.10 18:37:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 21
2022.11.10 18:37:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster
state with version 21 2022.11.10 18:37:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:37:15 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834907, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}] 2022.11.10 18:37:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 21 2022.11.10 18:37:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=21}]: took [203ms] done applying updated cluster state (version: 21, uuid: zvRYSZvJToaNjriaDIYaDA) 2022.11.10 18:37:15 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=21} 2022.11.10 18:37:15 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 21, uuid: zvRYSZvJToaNjriaDIYaDA) for [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[rules][1]], allocationId [xIC7oVNyTvWRavizNfvThQ], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:15 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:15 DEBUG es[][o.e.c.s.MasterService] took [91ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:15 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:15 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:15 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:15 DEBUG es[][o.e.c.s.MasterService] took [2ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:15 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded 2022.11.10 18:37:15 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [put-mapping [rules/XKhsuPkESGGZvj9wjlKlBg][rule]] 2022.11.10 18:37:16 DEBUG es[][o.e.c.m.MetadataMappingService] [rules/XKhsuPkESGGZvj9wjlKlBg] create_mapping [rule] with source 
[{"rule":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"activeRule_inheritance":{"type":"keyword"},"activeRule_ruleProfile":{"type":"keyword"},"activeRule_severity":{"type":"keyword"},"activeRule_uuid":{"type":"keyword"},"createdAt":{"type":"long"},"cwe":{"type":"keyword"},"htmlDesc":{"type":"keyword","index":false,"doc_values":false,"fields":{"english_html_analyzer":{"type":"text","norms":false,"analyzer":"english_html_analyzer"}}},"indexType":{"type":"keyword","doc_values":false},"internalKey":{"type":"keyword","index":false},"isExternal":{"type":"boolean"},"isTemplate":{"type":"boolean"},"join_rules":{"type":"join","eager_global_ordinals":true,"relations":{"rule":"activeRule"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"lang":{"type":"keyword"},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"owaspTop10":{"type":"keyword"},"owaspTop10-2021":{"type":"keyword"},"repo":{"type":"keyword","norms":true},"ruleKey":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"ruleUuid":{"type":"keyword"},"sansTop25":{"type":"keyword"},"severity":{"type":"keyword"},"sonarsourceSecurity":{"type":"keyword"},"status":{"type":"keyword"},"tags":{"type":"keyword","norms":true},"templateKey":{"type":"keyword"},"type":{"type":"keyword"},"updatedAt":{"type":"long"}}}}] 2022.11.10 18:37:16 DEBUG es[][o.e.c.s.MasterService] took [572ms] to compute cluster state update for [put-mapping [rules/XKhsuPkESGGZvj9wjlKlBg][rule]] 2022.11.10 18:37:16 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [22], source [put-mapping [rules/XKhsuPkESGGZvj9wjlKlBg][rule]] 2022.11.10 18:37:16 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [22] 2022.11.10 18:37:16 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [22] with uuid [TXHeEe83R7ei3JnkbJQbew], diff size [1633] 2022.11.10 18:37:16 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [400ms]; wrote global metadata [false] and metadata for [1] indices and skipped [3] unchanged indices 2022.11.10 18:37:16 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=22}]: execute 2022.11.10 18:37:16 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [22], source [Publication{term=1, version=22}] 2022.11.10 18:37:16 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 22 2022.11.10 18:37:16 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 22 2022.11.10 18:37:16 DEBUG es[][o.e.i.m.MapperService] [[rules/XKhsuPkESGGZvj9wjlKlBg]] added mapping [rule], source 
[{"rule":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"activeRule_inheritance":{"type":"keyword"},"activeRule_ruleProfile":{"type":"keyword"},"activeRule_severity":{"type":"keyword"},"activeRule_uuid":{"type":"keyword"},"createdAt":{"type":"long"},"cwe":{"type":"keyword"},"htmlDesc":{"type":"keyword","index":false,"doc_values":false,"fields":{"english_html_analyzer":{"type":"text","norms":false,"analyzer":"english_html_analyzer"}}},"indexType":{"type":"keyword","doc_values":false},"internalKey":{"type":"keyword","index":false},"isExternal":{"type":"boolean"},"isTemplate":{"type":"boolean"},"join_rules":{"type":"join","eager_global_ordinals":true,"relations":{"rule":"activeRule"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"lang":{"type":"keyword"},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"owaspTop10":{"type":"keyword"},"owaspTop10-2021":{"type":"keyword"},"repo":{"type":"keyword","norms":true},"ruleKey":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"ruleUuid":{"type":"keyword"},"sansTop25":{"type":"keyword"},"severity":{"type":"keyword"},"sonarsourceSecurity":{"type":"keyword"},"status":{"type":"keyword"},"tags":{"type":"keyword","norms":true},"templateKey":{"type":"keyword"},"type":{"type":"keyword"},"updatedAt":{"type":"long"}}}}] 2022.11.10 18:37:17 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 22 2022.11.10 18:37:17 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=22}]: took [232ms] done applying updated cluster state (version: 22, uuid: TXHeEe83R7ei3JnkbJQbew) 2022.11.10 18:37:17 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=22} 2022.11.10 18:37:17 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 22, uuid: TXHeEe83R7ei3JnkbJQbew) for [put-mapping [rules/XKhsuPkESGGZvj9wjlKlBg][rule]] 2022.11.10 18:37:17 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:17 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:17 DEBUG es[][o.e.c.s.MasterService] took [13ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:17 DEBUG es[][r.suppressed] path: /issues, params: {index=issues} org.elasticsearch.index.IndexNotFoundException: no such index [issues] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:1250) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:1188) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:1144) ~[elasticsearch-7.17.5.jar:7.17.5] at 
org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:292) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:270) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:92) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:53) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:24) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.checkBlockIfStateRecovered(TransportMasterNodeAction.java:138) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.access$000(TransportMasterNodeAction.java:52) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:185) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:158) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:52) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.action.RestCancellableNodeClient.doExecute(RestCancellableNodeClient.java:81) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1303) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getIndex(AbstractClient.java:1399) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction.lambda$prepareRequest$1(RestGetIndicesAction.java:86) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:109) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:327) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:393) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:245) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:382) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:461) [elasticsearch-7.17.5.jar:7.17.5] at 
org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:357) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:35) [transport-netty4-client-7.17.5.jar:7.17.5] at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:19) [transport-netty4-client-7.17.5.jar:7.17.5] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:48) [transport-netty4-client-7.17.5.jar:7.17.5] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) 
[netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.66.Final.jar:4.1.66.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.66.Final.jar:4.1.66.Final] at java.lang.Thread.run(Thread.java:829) [?:?] 2022.11.10 18:37:21 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [create-index [issues], cause [api]] 2022.11.10 18:37:21 DEBUG es[][o.e.c.m.MetadataCreateIndexService] applying create index request using legacy templates [] 2022.11.10 18:37:21 DEBUG es[][o.e.i.IndicesService] creating Index [[issues/xb_5ZqTkQNC1ZOcDunuYdA]], shards [5]/[0] - reason [CREATE_INDEX] 2022.11.10 18:37:21 INFO es[][o.e.c.m.MetadataCreateIndexService] [issues] creating index, cause [api], templates [], shards [5]/[0] 2022.11.10 18:37:21 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:37:21 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close] 2022.11.10 18:37:21 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close] 2022.11.10 18:37:21 DEBUG es[][o.e.c.s.MasterService] took [216ms] to compute cluster state update for [create-index [issues], cause [api]] 2022.11.10 18:37:21 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [23], source [create-index [issues], cause [api]] 2022.11.10 18:37:21 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [23] 2022.11.10 18:37:21 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [23] with uuid [FxjyCDegRL6GHreshgLNPw], diff size [1160] 2022.11.10 18:37:22 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [472ms]; wrote global metadata [false] and metadata for [1] indices and skipped [4] unchanged indices 2022.11.10 18:37:22 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=23}]: execute 2022.11.10 18:37:22 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [23], source [Publication{term=1, version=23}] 2022.11.10 18:37:22 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 23 2022.11.10 18:37:22 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 23 2022.11.10 18:37:22 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[issues/xb_5ZqTkQNC1ZOcDunuYdA]] creating index 2022.11.10 18:37:22 DEBUG es[][o.e.i.IndicesService] creating Index [[issues/xb_5ZqTkQNC1ZOcDunuYdA]], shards [5]/[0] - reason [CREATE_INDEX] 2022.11.10 18:37:22 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][2] creating shard with primary term [1] 2022.11.10 18:37:22 DEBUG es[][o.e.i.IndexService] [issues][2] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/xb_5ZqTkQNC1ZOcDunuYdA/2, shard=[issues][2]}] 2022.11.10 18:37:22 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][2] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.IndexShard] 
state: [CREATED] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:37:22 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][3] creating shard with primary term [1] 2022.11.10 18:37:22 DEBUG es[][o.e.i.IndexService] [issues][3] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/xb_5ZqTkQNC1ZOcDunuYdA/3, shard=[issues][3]}] 2022.11.10 18:37:22 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][3] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:37:22 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][1] creating shard with primary term [1] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:37:22 DEBUG es[][o.e.i.IndexService] [issues][1] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/xb_5ZqTkQNC1ZOcDunuYdA/1, shard=[issues][1]}] 2022.11.10 18:37:22 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][1] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:37:22 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:37:22 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][0] creating shard with primary term [1] 2022.11.10 18:37:23 DEBUG es[][o.e.i.IndexService] [issues][0] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/xb_5ZqTkQNC1ZOcDunuYdA/0, shard=[issues][0]}] 2022.11.10 18:37:23 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][0] 2022.11.10 18:37:23 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:37:23 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:37:23 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:37:23 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 
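The r.suppressed IndexNotFoundException above is expected first-boot behavior rather than a fault: a GET /issues (RestGetIndicesAction in the trace) arrives before the index exists, fails with "no such index", and a few seconds later an explicit create-index [issues] request follows with shards [5]/[0], after which each shard walks CREATED -> RECOVERING -> POST_RECOVERY. A client avoids tripping that 404 by probing first. The following is a minimal probe-then-create sketch against the Elasticsearch 7.17 Java high-level REST client, assuming the embedded node's HTTP endpoint at 127.0.0.1:9001; the class name and structure are illustrative, not SonarQube's actual indexer code:

    import org.apache.http.HttpHost;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.elasticsearch.client.indices.CreateIndexRequest;
    import org.elasticsearch.client.indices.GetIndexRequest;
    import org.elasticsearch.common.settings.Settings;

    public class EnsureIssuesIndex {
        public static void main(String[] args) throws Exception {
            // Endpoint assumed from the node's HTTP publish address; adjust as needed.
            try (RestHighLevelClient client = new RestHighLevelClient(
                    RestClient.builder(new HttpHost("127.0.0.1", 9001, "http")))) {
                // Probing with exists() avoids the 404 that the server logs under
                // r.suppressed when GET /issues hits a missing index.
                boolean exists = client.indices()
                        .exists(new GetIndexRequest("issues"), RequestOptions.DEFAULT);
                if (!exists) {
                    // shards [5]/[0] as in the MetadataCreateIndexService record:
                    // five primaries, zero replicas.
                    CreateIndexRequest create = new CreateIndexRequest("issues")
                            .settings(Settings.builder()
                                    .put("index.number_of_shards", 5)
                                    .put("index.number_of_replicas", 0));
                    client.indices().create(create, RequestOptions.DEFAULT);
                }
            }
        }
    }

On a single node, zero replicas is what lets the cluster return to GREEN as soon as all primaries are started, as the AllocationService records show.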
2022.11.10 18:37:23 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 23 2022.11.10 18:37:23 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=23}]: took [976ms] done applying updated cluster state (version: 23, uuid: FxjyCDegRL6GHreshgLNPw) 2022.11.10 18:37:23 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=23} 2022.11.10 18:37:23 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 23, uuid: FxjyCDegRL6GHreshgLNPw) for [create-index [issues], cause [api]] 2022.11.10 18:37:23 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:23 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:23 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=2YSI3iErRpey56w3q9SoMg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Yp-TatzjR96SoQoMz1H4aQ}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=2YSI3iErRpey56w3q9SoMg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Yp-TatzjR96SoQoMz1H4aQ}]}] 2022.11.10 18:37:23 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:37:23 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.1s] 2022.11.10 18:37:23 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:23 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] received shard started for [StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:23 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:23 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] starting shard [issues][2], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=U-u1moysTMWdE8rWynhcIg], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:21.826Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term [1], message [after new shard recovery]}]) 2022.11.10 18:37:23 DEBUG es[][o.e.c.s.MasterService] took [83ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term 
[1], message [after new shard recovery]}]] 2022.11.10 18:37:23 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [24], source [shard-started StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:23 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [24] 2022.11.10 18:37:23 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:23 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [24] with uuid [wTePdjfcQJuSXkANoGS5mw], diff size [1164] 2022.11.10 18:37:23 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:23 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:23 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:23 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:23 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:23 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=7HVBjB-VTo2G_5BQ1RYpyg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=poVA3G_vQaWCfyFcxxMfjg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=7HVBjB-VTo2G_5BQ1RYpyg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=poVA3G_vQaWCfyFcxxMfjg}]}] 2022.11.10 18:37:23 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=CIPhcgJyRoyY6WrKQVW77w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Ohe0WEdsSRyIS0n42vI4Ng}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=CIPhcgJyRoyY6WrKQVW77w, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Ohe0WEdsSRyIS0n42vI4Ng}]}] 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.1s] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard recovery]}] 2022.11.10 
18:37:24 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.5s] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] received shard started for [StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] received shard started for [StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:24 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=VbK4u4oETDOFKE-Yd2FqJg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=cnGnSBCKSYWaTA_FtcHQsg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=VbK4u4oETDOFKE-Yd2FqJg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=cnGnSBCKSYWaTA_FtcHQsg}]}] 2022.11.10 18:37:24 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [434ms]; wrote global metadata [false] and metadata for [1] indices and skipped [4] unchanged indices 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=24}]: execute 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.3s] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [24], source [Publication{term=1, version=24}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 24 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 24 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] received shard started for [StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] received shard started for [StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [master 
{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] received shard started for [StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844227, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] received shard started for [StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 24 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=24}]: took [225ms] done applying updated cluster state (version: 24, uuid: wTePdjfcQJuSXkANoGS5mw) 2022.11.10 18:37:24 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=24} 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 24, uuid: wTePdjfcQJuSXkANoGS5mw) for [shard-started StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][2]], allocationId [U-u1moysTMWdE8rWynhcIg], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [master 
{sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] starting shard [issues][3], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=2VHVgbOaSKWm7joV3eutMw], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:21.826Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}]) 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] starting shard [issues][0], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=8bbUZazFRPGD9Vj26BepIw], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:21.826Z], delayed=false, allocation_status[no_attempt]] (shard started task: 
[StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard recovery]}]) 2022.11.10 18:37:24 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] starting shard [issues][1], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=JyyNBo-uQliaUZe3cnVyYQ], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:21.826Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}]) 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.MasterService] took [9ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard 
recovery]}]] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [25], source [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [25] 2022.11.10 18:37:24 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [25] with uuid [vcq1xDcDSf2h59wUeOyZXQ], diff size [1165] 2022.11.10 18:37:24 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [451ms]; wrote global metadata [false] and metadata for [1] indices and skipped [4] unchanged indices 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=25}]: execute 2022.11.10 18:37:24 DEBUG 
es[][o.e.c.s.ClusterApplierService] cluster state updated, version [25], source [Publication{term=1, version=25}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 25 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 25 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}] 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}] 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:37:24 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 25 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=25}]: took [240ms] done applying updated cluster state (version: 25, uuid: vcq1xDcDSf2h59wUeOyZXQ) 2022.11.10 18:37:24 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=25} 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 25, uuid: vcq1xDcDSf2h59wUeOyZXQ) for [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary 
term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][1]], allocationId [JyyNBo-uQliaUZe3cnVyYQ], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [2VHVgbOaSKWm7joV3eutMw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][0]], allocationId [8bbUZazFRPGD9Vj26BepIw], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.MasterService] took [20ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [26], source [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:24 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [26] 2022.11.10 18:37:24 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [26] with uuid [_N-_fbZ1RgGalET5r6K4bQ], diff size [1168] 2022.11.10 18:37:25 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [202ms]; wrote global metadata [false] and metadata for [1] indices and skipped [4] unchanged indices 2022.11.10 18:37:25 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=26}]: execute 2022.11.10 18:37:25 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [26], source [Publication{term=1, version=26}] 2022.11.10 18:37:25 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 26 2022.11.10 18:37:25 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 26 2022.11.10 18:37:25 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][4] creating shard with primary term [1] 2022.11.10 18:37:25 DEBUG es[][o.e.i.IndexService] [issues][4] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/xb_5ZqTkQNC1ZOcDunuYdA/4, shard=[issues][4]}] 2022.11.10 18:37:25 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][4] 2022.11.10 18:37:25 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s] 2022.11.10 18:37:25 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED] 2022.11.10 18:37:25 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store] 2022.11.10 18:37:25 
DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ... 2022.11.10 18:37:25 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 26 2022.11.10 18:37:25 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=26}]: took [298ms] done applying updated cluster state (version: 26, uuid: _N-_fbZ1RgGalET5r6K4bQ) 2022.11.10 18:37:25 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=26} 2022.11.10 18:37:25 DEBUG es[][o.e.c.s.MasterService] took [21ms] to notify listeners on successful publication of cluster state (version: 26, uuid: _N-_fbZ1RgGalET5r6K4bQ) for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:26 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:26 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2} 2022.11.10 18:37:26 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=-KEbBH26QLCHT-mmOwyetQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=wTqifRoZRpK2knrrTaUT8Q}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=-KEbBH26QLCHT-mmOwyetQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=wTqifRoZRpK2knrrTaUT8Q}]}] 2022.11.10 18:37:26 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store] 2022.11.10 18:37:26 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [823ms] 2022.11.10 18:37:26 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:26 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] received shard started for [StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:26 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] starting shard [issues][4], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=_wrfmdAEQz-C-3ke5xlvQw], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:21.826Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}]) 2022.11.10 18:37:26 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[issues][4]]]). 
2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] took [46ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [27], source [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [27] 2022.11.10 18:37:26 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [27] with uuid [kI0LJil7S0i-aIdN-FK0_A], diff size [1150] 2022.11.10 18:37:26 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [210ms]; wrote global metadata [false] and metadata for [1] indices and skipped [4] unchanged indices 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=27}]: execute 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [27], source [Publication{term=1, version=27}] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 27 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 27 2022.11.10 18:37:26 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:37:26 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101846759, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 27 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=27}]: took [241ms] done applying updated cluster state (version: 27, uuid: kI0LJil7S0i-aIdN-FK0_A) 2022.11.10 18:37:26 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=27} 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 27, uuid: kI0LJil7S0i-aIdN-FK0_A) for [shard-started StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[issues][4]], allocationId [_wrfmdAEQz-C-3ke5xlvQw], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] took [4ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] executing cluster state update 
for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:26 DEBUG es[][o.e.c.s.MasterService] took [56ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [put-mapping [issues/xb_5ZqTkQNC1ZOcDunuYdA][auth]] 2022.11.10 18:37:27 DEBUG es[][o.e.c.m.MetadataMappingService] [issues/xb_5ZqTkQNC1ZOcDunuYdA] create_mapping [auth] with source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"assignee":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"authorLogin":{"type":"keyword"},"branch":{"type":"keyword"},"component":{"type":"keyword"},"cwe":{"type":"keyword"},"dirPath":{"type":"keyword"},"effort":{"type":"long"},"filePath":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"indexType":{"type":"keyword","doc_values":false},"isMainBranch":{"type":"boolean"},"isNewCodeReference":{"type":"boolean"},"issueClosedAt":{"type":"date","format":"date_time||epoch_second"},"issueCreatedAt":{"type":"date","format":"date_time||epoch_second"},"issueUpdatedAt":{"type":"date","format":"date_time||epoch_second"},"join_issues":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"issue"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"language":{"type":"keyword"},"line":{"type":"integer"},"modulePath":{"type":"text","analyzer":"uuid_module_analyzer"},"owaspAsvs-4":{"properties":{"0":{"type":"keyword"},"0-level":{"type":"keyword"}}},"owaspTop10":{"type":"keyword"},"owaspTop10-2021":{"type":"keyword"},"pciDss-3":{"properties":{"2":{"type":"keyword"}}},"pciDss-4":{"properties":{"0":{"type":"keyword"}}},"project":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"resolution":{"type":"keyword"},"ruleUuid":{"type":"keyword"},"sansTop25":{"type":"keyword"},"scope":{"type":"keyword"},"severity":{"type":"keyword"},"severityValue":{"type":"byte"},"sonarsourceSecurity":{"type":"keyword"},"status":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"tags":{"type":"keyword"},"type":{"type":"keyword"},"vulnerabilityProbability":{"type":"keyword"}}}}] 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.MasterService] took [175ms] to compute cluster state update for [put-mapping [issues/xb_5ZqTkQNC1ZOcDunuYdA][auth]] 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [28], source [put-mapping [issues/xb_5ZqTkQNC1ZOcDunuYdA][auth]] 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [28] 2022.11.10 18:37:27 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [28] with uuid [doFsDqNeTRuiNBURiNGlLg], diff size [1760] 2022.11.10 18:37:27 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [1] indices and 
skipped [4] unchanged indices 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=28}]: execute 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [28], source [Publication{term=1, version=28}] 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 28 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 28 2022.11.10 18:37:27 DEBUG es[][o.e.i.m.MapperService] [[issues/xb_5ZqTkQNC1ZOcDunuYdA]] added mapping [auth] (source suppressed due to length, use TRACE level if needed) 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 28 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=28}]: took [0s] done applying updated cluster state (version: 28, uuid: doFsDqNeTRuiNBURiNGlLg) 2022.11.10 18:37:27 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=28} 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 28, uuid: doFsDqNeTRuiNBURiNGlLg) for [put-mapping [issues/xb_5ZqTkQNC1ZOcDunuYdA][auth]] 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:27 DEBUG es[][o.e.c.s.MasterService] took [3ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:28 DEBUG es[][r.suppressed] path: /users, params: {index=users} org.elasticsearch.index.IndexNotFoundException: no such index [users] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:1250) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:1188) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:1144) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:292) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:270) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:92) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:53) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:24) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.checkBlockIfStateRecovered(TransportMasterNodeAction.java:138) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.access$000(TransportMasterNodeAction.java:52) 
~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:185) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:158) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:52) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.action.RestCancellableNodeClient.doExecute(RestCancellableNodeClient.java:81) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1303) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getIndex(AbstractClient.java:1399) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction.lambda$prepareRequest$1(RestGetIndicesAction.java:86) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:109) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:327) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:393) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:245) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:382) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:461) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:357) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:35) [transport-netty4-client-7.17.5.jar:7.17.5] at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:19) [transport-netty4-client-7.17.5.jar:7.17.5] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at 
org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:48) [transport-netty4-client-7.17.5.jar:7.17.5] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.66.Final.jar:4.1.66.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.66.Final.jar:4.1.66.Final] at java.lang.Thread.run(Thread.java:829) [?:?] 
2022.11.10 18:37:29 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [create-index [users], cause [api]]
2022.11.10 18:37:29 DEBUG es[][o.e.c.m.MetadataCreateIndexService] applying create index request using legacy templates []
2022.11.10 18:37:29 DEBUG es[][o.e.i.IndicesService] creating Index [[users/W4A_SYBsRYuRAW9zSwtpuw]], shards [1]/[0] - reason [CREATE_INDEX]
2022.11.10 18:37:29 INFO es[][o.e.c.m.MetadataCreateIndexService] [users] creating index, cause [api], templates [], shards [1]/[0]
2022.11.10 18:37:29 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:37:29 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:37:29 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:37:29 DEBUG es[][o.e.c.s.MasterService] took [119ms] to compute cluster state update for [create-index [users], cause [api]]
2022.11.10 18:37:29 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [29], source [create-index [users], cause [api]]
2022.11.10 18:37:29 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [29]
2022.11.10 18:37:29 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [29] with uuid [xUuA4DTHSmebI6YKxEyI5Q], diff size [1069]
2022.11.10 18:37:30 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [404ms]; wrote global metadata [false] and metadata for [1] indices and skipped [5] unchanged indices
2022.11.10 18:37:30 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=29}]: execute
2022.11.10 18:37:30 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [29], source [Publication{term=1, version=29}]
2022.11.10 18:37:30 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 29
2022.11.10 18:37:30 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 29
2022.11.10 18:37:30 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[users/W4A_SYBsRYuRAW9zSwtpuw]] creating index
2022.11.10 18:37:30 DEBUG es[][o.e.i.IndicesService] creating Index [[users/W4A_SYBsRYuRAW9zSwtpuw]], shards [1]/[0] - reason [CREATE_INDEX]
2022.11.10 18:37:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}}}]
2022.11.10 18:37:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}}}]
2022.11.10 18:37:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}]
2022.11.10 18:37:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}]
2022.11.10 18:37:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}}}]
2022.11.10 18:37:30 DEBUG es[][o.e.i.c.IndicesClusterStateService] [users][0] creating shard with primary term [1]
2022.11.10 18:37:30 DEBUG es[][o.e.i.IndexService] [users][0] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/W4A_SYBsRYuRAW9zSwtpuw/0, shard=[users][0]}]
2022.11.10 18:37:30 DEBUG es[][o.e.i.IndexService] creating shard_id [users][0]
2022.11.10 18:37:30 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:37:30 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:37:30 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:37:30 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 29
2022.11.10 18:37:30 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=29}]: took [654ms] done applying updated cluster state (version: 29, uuid: xUuA4DTHSmebI6YKxEyI5Q)
2022.11.10 18:37:30 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=29}
2022.11.10 18:37:30 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 29, uuid: xUuA4DTHSmebI6YKxEyI5Q) for [create-index [users], cause [api]]
2022.11.10 18:37:30 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:37:31 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:31 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:31 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=yvqdo3nJS-2JqQkxaBuqVg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=z4-AJGqfTomnjP--GF4wXw}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=yvqdo3nJS-2JqQkxaBuqVg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=z4-AJGqfTomnjP--GF4wXw}]}]
2022.11.10 18:37:31 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:37:31 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [1.6s]
2022.11.10 18:37:31 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:31 DEBUG es[][o.e.c.a.s.ShardStateAction] [users][0] received shard started for [StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:31 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:31 DEBUG es[][o.e.c.a.s.ShardStateAction] [users][0] starting shard [users][0], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=3OfKKHcmRzOJw_aYDyFr4w], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:29.875Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:31 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[users][0]]]).
2022.11.10 18:37:31 DEBUG es[][o.e.c.s.MasterService] took [2ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:31 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [30], source [shard-started StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:31 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [30] 2022.11.10 18:37:31 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [30] with uuid [L27yKw_9Qeul-9hwGN7H9g], diff size [1055] 2022.11.10 18:37:32 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [210ms]; wrote global metadata [false] and metadata for [1] indices and skipped [5] unchanged indices 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=30}]: execute 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [30], source [Publication{term=1, version=30}] 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 30 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 30 2022.11.10 18:37:32 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]] 2022.11.10 18:37:32 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101852339, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}] 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 30 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=30}]: took [0s] done applying updated cluster state (version: 30, uuid: L27yKw_9Qeul-9hwGN7H9g) 2022.11.10 18:37:32 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=30} 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 30, uuid: L27yKw_9Qeul-9hwGN7H9g) for [shard-started StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[users][0]], allocationId [3OfKKHcmRzOJw_aYDyFr4w], primary term [1], message [after new shard recovery]}]] 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.MasterService] took [4ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(reroute after starting shards)] 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for 
[cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.MasterService] took [2ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:32 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [put-mapping [users/W4A_SYBsRYuRAW9zSwtpuw][user]] 2022.11.10 18:37:33 DEBUG es[][o.e.c.m.MetadataMappingService] [users/W4A_SYBsRYuRAW9zSwtpuw] create_mapping [user] with source [{"user":{"dynamic":"false","properties":{"active":{"type":"boolean"},"email":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true},"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"login":{"type":"keyword","fields":{"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"name":{"type":"keyword","fields":{"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"scmAccounts":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"uuid":{"type":"keyword"}}}}] 2022.11.10 18:37:33 DEBUG es[][o.e.c.s.MasterService] took [208ms] to compute cluster state update for [put-mapping [users/W4A_SYBsRYuRAW9zSwtpuw][user]] 2022.11.10 18:37:33 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [31], source [put-mapping [users/W4A_SYBsRYuRAW9zSwtpuw][user]] 2022.11.10 18:37:33 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [31] 2022.11.10 18:37:33 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [31] with uuid [wbRRrBlVQSm_bDtN9j-vDg], diff size [1320] 2022.11.10 18:37:33 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [423ms]; wrote global metadata [false] and metadata for [1] indices and skipped [5] unchanged indices 2022.11.10 18:37:33 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=31}]: execute 2022.11.10 18:37:33 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [31], source [Publication{term=1, version=31}] 2022.11.10 18:37:33 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 31 2022.11.10 18:37:33 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 31 2022.11.10 18:37:33 DEBUG es[][o.e.i.m.MapperService] [[users/W4A_SYBsRYuRAW9zSwtpuw]] added mapping [user], source 
[{"user":{"dynamic":"false","properties":{"active":{"type":"boolean"},"email":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true},"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"login":{"type":"keyword","fields":{"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"name":{"type":"keyword","fields":{"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"scmAccounts":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"uuid":{"type":"keyword"}}}}] 2022.11.10 18:37:33 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 31 2022.11.10 18:37:33 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=31}]: took [200ms] done applying updated cluster state (version: 31, uuid: wbRRrBlVQSm_bDtN9j-vDg) 2022.11.10 18:37:33 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=31} 2022.11.10 18:37:33 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 31, uuid: wbRRrBlVQSm_bDtN9j-vDg) for [put-mapping [users/W4A_SYBsRYuRAW9zSwtpuw][user]] 2022.11.10 18:37:34 DEBUG es[][o.e.m.j.JvmGcMonitorService] [gc][349] overhead, spent [211ms] collecting in the last [1.1s] 2022.11.10 18:37:34 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:34 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:34 DEBUG es[][o.e.c.s.MasterService] took [2ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])] 2022.11.10 18:37:34 DEBUG es[][r.suppressed] path: /views, params: {index=views} org.elasticsearch.index.IndexNotFoundException: no such index [views] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:1250) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:1188) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:1144) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:292) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:270) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:92) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:53) ~[elasticsearch-7.17.5.jar:7.17.5] at 
org.elasticsearch.action.support.master.info.TransportClusterInfoAction.checkBlock(TransportClusterInfoAction.java:24) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.checkBlockIfStateRecovered(TransportMasterNodeAction.java:138) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.access$000(TransportMasterNodeAction.java:52) ~[elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:185) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:158) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:52) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:154) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:82) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:95) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.action.RestCancellableNodeClient.doExecute(RestCancellableNodeClient.java:81) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1303) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getIndex(AbstractClient.java:1399) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction.lambda$prepareRequest$1(RestGetIndicesAction.java:86) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:109) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:327) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:393) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:245) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:382) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:461) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:357) [elasticsearch-7.17.5.jar:7.17.5] at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:35) [transport-netty4-client-7.17.5.jar:7.17.5] at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:19) [transport-netty4-client-7.17.5.jar:7.17.5] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) 
[netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:48) [transport-netty4-client-7.17.5.jar:7.17.5] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.66.Final.jar:4.1.66.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.66.Final.jar:4.1.66.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.66.Final.jar:4.1.66.Final] at java.lang.Thread.run(Thread.java:829) [?:?] 
2022.11.10 18:37:35 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [create-index [views], cause [api]]
2022.11.10 18:37:35 DEBUG es[][o.e.c.m.MetadataCreateIndexService] applying create index request using legacy templates []
2022.11.10 18:37:35 DEBUG es[][o.e.i.IndicesService] creating Index [[views/DeKrbCizSj2NfkoADmnwgA]], shards [5]/[0] - reason [CREATE_INDEX]
2022.11.10 18:37:35 INFO es[][o.e.c.m.MetadataCreateIndexService] [views] creating index, cause [api], templates [], shards [5]/[0]
2022.11.10 18:37:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:37:35 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:37:35 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:37:35 DEBUG es[][o.e.c.s.MasterService] took [461ms] to compute cluster state update for [create-index [views], cause [api]]
2022.11.10 18:37:35 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [32], source [create-index [views], cause [api]]
2022.11.10 18:37:35 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [32]
2022.11.10 18:37:35 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [32] with uuid [HW6jropoRTahpDB0jAehXw], diff size [1166]
2022.11.10 18:37:35 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [457ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:37:35 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=32}]: execute
2022.11.10 18:37:35 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [32], source [Publication{term=1, version=32}]
2022.11.10 18:37:35 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 32
2022.11.10 18:37:35 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 32
2022.11.10 18:37:35 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[views/DeKrbCizSj2NfkoADmnwgA]] creating index
2022.11.10 18:37:35 DEBUG es[][o.e.i.IndicesService] creating Index [[views/DeKrbCizSj2NfkoADmnwgA]], shards [5]/[0] - reason [CREATE_INDEX]
2022.11.10 18:37:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][2] creating shard with primary term [1]
2022.11.10 18:37:36 DEBUG es[][o.e.i.IndexService] [views][2] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/DeKrbCizSj2NfkoADmnwgA/2, shard=[views][2]}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.IndexService] creating shard_id [views][2]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:37:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][3] creating shard with primary term [1]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:37:36 DEBUG es[][o.e.i.IndexService] [views][3] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/DeKrbCizSj2NfkoADmnwgA/3, shard=[views][3]}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.IndexService] creating shard_id [views][3]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:37:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][1] creating shard with primary term [1]
2022.11.10 18:37:36 DEBUG es[][o.e.i.IndexService] [views][1] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/DeKrbCizSj2NfkoADmnwgA/1, shard=[views][1]}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.IndexService] creating shard_id [views][1]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:37:36 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][0] creating shard with primary term [1]
2022.11.10 18:37:36 DEBUG es[][o.e.i.IndexService] [views][0] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/DeKrbCizSj2NfkoADmnwgA/0, shard=[views][0]}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.IndexService] creating shard_id [views][0]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
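[The entries above are the server-side trace of a single index-creation call: the [views] index is created with the shard layout the log reports as shards [5]/[0]. For orientation, below is a minimal sketch of an equivalent request against the Elasticsearch 7.x REST API. The endpoint URL is a placeholder, not taken from this log, and the sketch is illustrative only, not SonarQube's actual client code.]

    import json
    import urllib.request

    ES = "http://localhost:9200"  # placeholder endpoint; adjust to your node's HTTP publish address

    # Create the index with the same layout the log records: 5 primary shards, 0 replicas.
    body = json.dumps({
        "settings": {
            "number_of_shards": 5,
            "number_of_replicas": 0,
        }
    }).encode("utf-8")

    req = urllib.request.Request(
        f"{ES}/views",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # A successful create returns 200 with {"acknowledged": true, ...}
        print(resp.status, resp.read().decode())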
2022.11.10 18:37:36 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 32
2022.11.10 18:37:36 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=32}]: took [606ms] done applying updated cluster state (version: 32, uuid: HW6jropoRTahpDB0jAehXw)
2022.11.10 18:37:36 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=32}
2022.11.10 18:37:36 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 32, uuid: HW6jropoRTahpDB0jAehXw) for [create-index [views], cause [api]]
2022.11.10 18:37:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=eJzWFyRETf6hF5axfbz8Ug, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=CKZz5f2-SLqUQK5qE07KlA}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=eJzWFyRETf6hF5axfbz8Ug, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=CKZz5f2-SLqUQK5qE07KlA}]}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=UYIQRPknTF-O1QAGb0a8vg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=QKJHVk88S2uM4Q_3eDzqAw}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=UYIQRPknTF-O1QAGb0a8vg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=QKJHVk88S2uM4Q_3eDzqAw}]}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [616ms]
2022.11.10 18:37:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] received shard started for [StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:36 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] starting shard [views][2], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=zok1e_b-QcOVjgs0bjwSUA], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:35.469Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:36 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:36 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [33], source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:36 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [33]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:36 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [33] with uuid [xAgAoe0wQQyuObav1vVEWw], diff size [1168]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:37:36 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [739ms]
2022.11.10 18:37:36 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:36 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] received shard started for [StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:37 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=bUQlpCuZS2OX9GAL6TsjNQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=bqDyGU0GQyOGjMpP4DSN-Q}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=bUQlpCuZS2OX9GAL6TsjNQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=bqDyGU0GQyOGjMpP4DSN-Q}]}]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [602ms]
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] received shard started for [StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:37 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=YFTBjVTcQlSQroKJQ8zuCA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=U3jxRoUgQN6LB9123uD1-A}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=YFTBjVTcQlSQroKJQ8zuCA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=U3jxRoUgQN6LB9123uD1-A}]}]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [812ms]
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] received shard started for [StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:37 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [437ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=33}]: execute
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [33], source [Publication{term=1, version=33}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 33
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 33
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] received shard started for [StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] received shard started for [StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857126, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] received shard started for [StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 33
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=33}]: took [0s] done applying updated cluster state (version: 33, uuid: xAgAoe0wQQyuObav1vVEWw)
2022.11.10 18:37:37 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=33}
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] took [69ms] to notify listeners on successful publication of cluster state (version: 33, uuid: xAgAoe0wQQyuObav1vVEWw) for [shard-started StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][2]], allocationId [zok1e_b-QcOVjgs0bjwSUA], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] starting shard [views][3], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=0Y0iQR1PRJqd42aQOZ_FKw], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:35.469Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] starting shard [views][0], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=G5XnhTpPQL2aATp4XCPyfw], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:35.469Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:37 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] starting shard [views][1], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=WzUnBFJ3Rk2utwHAU6bFyg], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:35.469Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] took [6ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [34], source [shard-started StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [34]
2022.11.10 18:37:37 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [34] with uuid [YcAUbgVlRUe4H0EzXBzi4Q], diff size [1170]
2022.11.10 18:37:37 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [217ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=34}]: execute
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [34], source [Publication{term=1, version=34}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 34
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 34
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 34
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=34}]: took [0s] done applying updated cluster state (version: 34, uuid: YcAUbgVlRUe4H0EzXBzi4Q)
2022.11.10 18:37:37 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=34}
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 34, uuid: YcAUbgVlRUe4H0EzXBzi4Q) for [shard-started StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [master {sonarqube}{W5aIUzOyQZ2OIplyipcVyA}{_B9JmcHdTqqNrinlSom1OA}{127.0.0.1}{127.0.0.1:37767}{cdfhimrsw}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][0]], allocationId [G5XnhTpPQL2aATp4XCPyfw], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][1]], allocationId [WzUnBFJ3Rk2utwHAU6bFyg], primary term [1], message [after new shard recovery]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][3]], allocationId [0Y0iQR1PRJqd42aQOZ_FKw], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] took [4ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [35], source [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [35]
2022.11.10 18:37:37 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [35] with uuid [-q7iSeMtSvanZzu0UCHAfg], diff size [1173]
2022.11.10 18:37:37 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=35}]: execute
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [35], source [Publication{term=1, version=35}]
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 35
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 35
2022.11.10 18:37:37 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][4] creating shard with primary term [1]
2022.11.10 18:37:37 DEBUG es[][o.e.i.IndexService] [views][4] creating using a new path [ShardPath{path=/home/chili/sonarqube-9.7.0.61563/data/es7/nodes/0/indices/DeKrbCizSj2NfkoADmnwgA/4, shard=[views][4]}]
2022.11.10 18:37:37 DEBUG es[][o.e.i.IndexService] creating shard_id [views][4]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2022.11.10 18:37:37 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 35
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=35}]: took [257ms] done applying updated cluster state (version: 35, uuid: -q7iSeMtSvanZzu0UCHAfg)
2022.11.10 18:37:37 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=35}
2022.11.10 18:37:37 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 35, uuid: -q7iSeMtSvanZzu0UCHAfg) for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:38 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:38 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2022.11.10 18:37:38 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=OuUbwn3fQduVNrhfWhyRpA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=mB7B5sJmSV66PxVyPLyfTg}]}], last commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=OuUbwn3fQduVNrhfWhyRpA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=mB7B5sJmSV66PxVyPLyfTg}]}]
2022.11.10 18:37:38 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
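[The shard lifecycle logged through these entries ([CREATED] -> [RECOVERING] -> [POST_RECOVERY] -> [STARTED]) can also be watched from outside the node with the _cat/shards API. A small sketch follows, reusing the placeholder endpoint from the earlier example; it is an observation aid, not part of the logged sequence.]

    import urllib.request

    ES = "http://localhost:9200"  # placeholder endpoint; adjust to your node

    # _cat/shards lists every shard of the index with its current routing state
    # (INITIALIZING, STARTED, ...), matching the IndexShard transitions in the log.
    with urllib.request.urlopen(f"{ES}/_cat/shards/views?v") as resp:
        print(resp.read().decode())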
2022.11.10 18:37:38 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [794ms]
2022.11.10 18:37:38 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [W5aIUzOyQZ2OIplyipcVyA] for shard entry [StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:38 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] received shard started for [StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [shard-started StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:38 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] starting shard [views][4], node[W5aIUzOyQZ2OIplyipcVyA], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=DwTANOl2SWOX89uPBzQcUg], unassigned_info[[reason=INDEX_CREATED], at[2022-11-10T17:37:35.469Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}])
2022.11.10 18:37:38 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[views][4]]]).
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] took [2ms] to compute cluster state update for [shard-started StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [36], source [shard-started StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [36]
2022.11.10 18:37:38 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [36] with uuid [oERDo6g5SkeboDdOsVBL1A], diff size [1158]
2022.11.10 18:37:38 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=36}]: execute
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [36], source [Publication{term=1, version=36}]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 36
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 36
2022.11.10 18:37:38 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2022.11.10 18:37:38 DEBUG es[][o.e.i.s.ReplicationTracker] adding new retention lease [RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101858837, source='peer recovery'}] to current retention leases [RetentionLeases{primaryTerm=1, version=0, leases={}}]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 36
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=36}]: took [200ms] done applying updated cluster state (version: 36, uuid: oERDo6g5SkeboDdOsVBL1A)
2022.11.10 18:37:38 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=36}
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 36, uuid: oERDo6g5SkeboDdOsVBL1A) for [shard-started StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[views][4]], allocationId [DwTANOl2SWOX89uPBzQcUg], primary term [1], message [after new shard recovery]}]]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] took [3ms] to compute cluster state update for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(reroute after starting shards)]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:38 DEBUG es[][o.e.c.s.MasterService] took [3ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [put-mapping [views/DeKrbCizSj2NfkoADmnwgA][view]]
2022.11.10 18:37:39 DEBUG es[][o.e.c.m.MetadataMappingService] [views/DeKrbCizSj2NfkoADmnwgA] create_mapping [view] with source [{"view":{"dynamic":"false","properties":{"projects":{"type":"keyword"},"uuid":{"type":"keyword"}}}}]
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.MasterService] took [104ms] to compute cluster state update for [put-mapping [views/DeKrbCizSj2NfkoADmnwgA][view]]
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [37], source [put-mapping [views/DeKrbCizSj2NfkoADmnwgA][view]]
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [37]
2022.11.10 18:37:39 DEBUG es[][o.e.c.c.PublicationTransportHandler] received diff cluster state version [37] with uuid [ttBM9XtITFuaK-2Qvhl5cA], diff size [1156]
2022.11.10 18:37:39 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [484ms]; wrote global metadata [false] and metadata for [1] indices and skipped [6] unchanged indices
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=37}]: execute
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [37], source [Publication{term=1, version=37}]
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 37
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 37
2022.11.10 18:37:39 DEBUG es[][o.e.i.m.MapperService] [[views/DeKrbCizSj2NfkoADmnwgA]] added mapping [view], source [{"view":{"dynamic":"false","properties":{"projects":{"type":"keyword"},"uuid":{"type":"keyword"}}}}]
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 37
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=1, version=37}]: took [0s] done applying updated cluster state (version: 37, uuid: ttBM9XtITFuaK-2Qvhl5cA)
2022.11.10 18:37:39 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=1, version=37}
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 37, uuid: ttBM9XtITFuaK-2Qvhl5cA) for [put-mapping [views/DeKrbCizSj2NfkoADmnwgA][view]]
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [cluster_health (wait_for_events [LANGUID])]
2022.11.10 18:37:39 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on unchanged cluster state for [cluster_health (wait_for_events [LANGUID])]
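[The put-mapping entries above show the exact mapping source applied to the [views] index. For reference, a hedged sketch of an equivalent call follows. The log records the legacy typed form [view], which SonarQube drives through its own client; this sketch uses the typeless 7.x _mapping endpoint and the same placeholder URL as the earlier examples.]

    import json
    import urllib.request

    ES = "http://localhost:9200"  # placeholder endpoint; adjust to your node

    # Same field definitions the log shows for the [view] mapping.
    mapping = json.dumps({
        "dynamic": "false",
        "properties": {
            "projects": {"type": "keyword"},
            "uuid": {"type": "keyword"},
        },
    }).encode("utf-8")

    req = urllib.request.Request(
        f"{ES}/views/_mapping",
        data=mapping,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())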
2022.11.10 18:37:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=17, timestamp=1668101862048, source='peer recovery'}}}]
2022.11.10 18:37:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834305, source='peer recovery'}}}]
2022.11.10 18:37:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834907, source='peer recovery'}}}]
2022.11.10 18:37:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}]
2022.11.10 18:37:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}]
2022.11.10 18:37:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844227, source='peer recovery'}}}]
2022.11.10 18:37:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}]
2022.11.10 18:37:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101846759, source='peer recovery'}}}]
2022.11.10 18:38:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}}}]
2022.11.10 18:38:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}}}]
2022.11.10 18:38:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}]
2022.11.10 18:38:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}]
2022.11.10 18:38:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}}}]
2022.11.10 18:38:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101852339, source='peer recovery'}}}]
2022.11.10 18:38:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}]
2022.11.10 18:38:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}]
2022.11.10 18:38:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857126, source='peer recovery'}}}]
2022.11.10 18:38:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}]
2022.11.10 18:38:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101858837, source='peer recovery'}}}]
2022.11.10 18:38:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:38:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:38:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}]
2022.11.10 18:38:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:38:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}]
2022.11.10 18:38:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=17, timestamp=1668101862048, source='peer recovery'}}}]
2022.11.10 18:38:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834305, source='peer recovery'}}}]
2022.11.10 18:38:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834907, source='peer recovery'}}}]
2022.11.10 18:38:22 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}]
2022.11.10 18:38:22 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}]
2022.11.10 18:38:22 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844227, source='peer recovery'}}}]
2022.11.10 18:38:22 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}]
2022.11.10 18:38:22 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101846759, source='peer recovery'}}}]
2022.11.10 18:38:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}}}]
2022.11.10 18:38:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}}}]
2022.11.10 18:38:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}]
2022.11.10 18:38:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}]
2022.11.10 18:38:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}}}]
2022.11.10 18:38:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101852339, source='peer recovery'}}}]
2022.11.10 18:38:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}]
2022.11.10 18:38:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}]
2022.11.10 18:38:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857126, source='peer recovery'}}}]
2022.11.10 18:38:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}]
2022.11.10 18:38:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101858837, source='peer recovery'}}}]
2022.11.10 18:38:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:38:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:38:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}]
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:38:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}] 2022.11.10 18:38:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=17, timestamp=1668101862048, source='peer recovery'}}}] 2022.11.10 18:38:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834305, source='peer recovery'}}}] 2022.11.10 18:38:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834907, source='peer recovery'}}}] 2022.11.10 18:38:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:38:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:38:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844227, source='peer recovery'}}}] 2022.11.10 18:38:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:38:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101846759, source='peer recovery'}}}] 2022.11.10 18:39:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}}}] 2022.11.10 18:39:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}}}] 2022.11.10 18:39:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:39:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:39:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}}}] 2022.11.10 18:39:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101852339, source='peer recovery'}}}] 2022.11.10 18:39:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:39:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:39:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857126, source='peer recovery'}}}] 2022.11.10 18:39:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:39:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101858837, source='peer recovery'}}}] 2022.11.10 18:39:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:39:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:39:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}] 2022.11.10 18:39:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:39:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}] 2022.11.10 18:39:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=17, timestamp=1668101862048, source='peer recovery'}}}] 2022.11.10 18:39:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834305, source='peer recovery'}}}] 2022.11.10 18:39:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834907, source='peer recovery'}}}] 2022.11.10 18:39:16 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded 2022.11.10 18:39:22 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:39:22 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:39:22 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844227, source='peer recovery'}}}] 2022.11.10 18:39:22 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:39:22 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101846759, source='peer recovery'}}}] 2022.11.10 18:39:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}}}] 2022.11.10 18:39:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}}}] 2022.11.10 18:39:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:39:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:39:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}}}] 2022.11.10 18:39:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101852339, source='peer recovery'}}}] 2022.11.10 18:39:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:39:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:39:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857126, source='peer recovery'}}}] 2022.11.10 18:39:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:39:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101858837, source='peer recovery'}}}] 2022.11.10 18:39:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:39:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:39:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}] 2022.11.10 18:39:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:39:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}] 2022.11.10 18:39:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=17, timestamp=1668101862048, source='peer recovery'}}}] 2022.11.10 18:39:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834305, source='peer recovery'}}}] 2022.11.10 18:39:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834907, source='peer recovery'}}}] 2022.11.10 18:39:52 DEBUG es[][o.e.m.j.JvmGcMonitorService] [gc][young][480][17] duration [407ms], collections [1]/[1s], total [407ms]/[4.3s], memory [90.7mb]->[90.7mb]/[512mb], all_pools {[young] [31mb]->[31mb]/[0b]}{[old] [56.7mb]->[56.7mb]/[512mb]}{[survivor] [3mb]->[3mb]/[0b]} 2022.11.10 18:39:52 INFO es[][o.e.m.j.JvmGcMonitorService] [gc][480] overhead, spent [407ms] collecting in the last [1s] 2022.11.10 18:39:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:39:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:39:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844227, source='peer recovery'}}}] 2022.11.10 18:39:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:39:52 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101846759, source='peer recovery'}}}] 2022.11.10 18:40:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}}}] 2022.11.10 18:40:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}}}] 2022.11.10 18:40:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:40:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:40:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}}}] 2022.11.10 18:40:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101852339, source='peer recovery'}}}] 2022.11.10 18:40:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:40:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:40:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857126, source='peer recovery'}}}] 2022.11.10 18:40:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:40:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101858837, source='peer recovery'}}}] 2022.11.10 18:40:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:40:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:40:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}] 2022.11.10 18:40:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:40:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}] 2022.11.10 18:40:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=17, timestamp=1668101862048, source='peer recovery'}}}] 2022.11.10 18:40:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834305, source='peer recovery'}}}] 2022.11.10 18:40:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834907, source='peer recovery'}}}] 2022.11.10 18:40:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:40:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:40:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844227, source='peer recovery'}}}] 2022.11.10 18:40:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:40:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101846759, source='peer recovery'}}}] 2022.11.10 18:40:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}}}] 2022.11.10 18:40:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}}}] 2022.11.10 18:40:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:40:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:40:30 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}}}] 2022.11.10 18:40:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101852339, source='peer recovery'}}}] 2022.11.10 18:40:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:40:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:40:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857126, source='peer recovery'}}}] 2022.11.10 18:40:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:40:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101858837, source='peer recovery'}}}] 2022.11.10 18:40:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:40:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:40:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}] 2022.11.10 18:40:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:40:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}] 2022.11.10 18:40:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=17, timestamp=1668101862048, source='peer recovery'}}}] 2022.11.10 18:40:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834305, source='peer recovery'}}}] 2022.11.10 18:40:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834907, source='peer recovery'}}}] 2022.11.10 18:40:53 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:40:53 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:40:53 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844227, source='peer recovery'}}}] 2022.11.10 18:40:53 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:40:53 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101846759, source='peer recovery'}}}] 2022.11.10 18:41:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}}}] 2022.11.10 18:41:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}}}] 2022.11.10 18:41:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:41:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:41:00 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}}}] 2022.11.10 18:41:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101852339, source='peer recovery'}}}] 2022.11.10 18:41:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:41:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:41:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857126, source='peer recovery'}}}] 2022.11.10 18:41:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:41:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101858837, source='peer recovery'}}}] 2022.11.10 18:41:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:41:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:41:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}] 2022.11.10 18:41:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:41:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}] 2022.11.10 18:41:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=17, timestamp=1668101862048, source='peer recovery'}}}] 2022.11.10 18:41:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834305, source='peer recovery'}}}] 2022.11.10 18:41:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101834907, source='peer recovery'}}}] 2022.11.10 18:41:16 DEBUG es[][o.e.m.f.FsHealthService] health check succeeded 2022.11.10 18:41:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:41:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:41:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844227, source='peer recovery'}}}] 2022.11.10 18:41:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:41:23 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101846759, source='peer recovery'}}}] 2022.11.10 18:41:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}}}] 2022.11.10 18:41:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}}}] 2022.11.10 18:41:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:41:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}] 2022.11.10 18:41:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}}}] 2022.11.10 18:41:31 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101852339, source='peer recovery'}}}] 2022.11.10 18:41:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:41:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:41:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857126, source='peer recovery'}}}] 2022.11.10 18:41:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, 
leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}] 2022.11.10 18:41:36 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101858837, source='peer recovery'}}}] 2022.11.10 18:41:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:41:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:41:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}] 2022.11.10 18:41:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}] 2022.11.10 18:41:37 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}] 2022.11.10 18:41:40 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:41:40 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][0], node[W5aIUzOyQZ2OIplyipcVyA], [P], s[STARTED], a[id=5f_0Y8CwS4u--DZodFMhbQ] on inactive 2022.11.10 18:41:40 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:41:40 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][1], node[W5aIUzOyQZ2OIplyipcVyA], [P], s[STARTED], a[id=YFv3sAwTRmWCItoFhja8Eg] on inactive 2022.11.10 18:41:40 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:41:40 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][2], node[W5aIUzOyQZ2OIplyipcVyA], [P], s[STARTED], a[id=-VK0FnMDTjaEUljy3ggIEQ] on inactive 2022.11.10 18:41:40 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:41:40 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][3], node[W5aIUzOyQZ2OIplyipcVyA], [P], s[STARTED], a[id=oLn8WP7oQ8mWkvafYlPtsg] on inactive 2022.11.10 18:41:42 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', 
retainingSequenceNumber=17, timestamp=1668101862048, source='peer recovery'}}}] 2022.11.10 18:41:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=2, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=6, timestamp=1668102102946, source='peer recovery'}}}] 2022.11.10 18:41:43 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=2, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=4, timestamp=1668102102946, source='peer recovery'}}}] 2022.11.10 18:41:50 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive 2022.11.10 18:41:50 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [components][4], node[W5aIUzOyQZ2OIplyipcVyA], [P], s[STARTED], a[id=n-PxEEq7RA6Oy8ULFDn1YA] on inactive 2022.11.10 18:41:53 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:41:53 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:41:53 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844227, source='peer recovery'}}}] 2022.11.10 18:41:53 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101844678, source='peer recovery'}}}] 2022.11.10 18:41:53 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101846759, source='peer recovery'}}}] 2022.11.10 18:42:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101825345, source='peer recovery'}}}] 2022.11.10 18:42:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101824915, source='peer recovery'}}}] 2022.11.10 18:42:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current 
2022.11.10 18:42:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101822065, source='peer recovery'}}}]
2022.11.10 18:42:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101826975, source='peer recovery'}}}]
2022.11.10 18:42:01 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101852339, source='peer recovery'}}}]
2022.11.10 18:42:05 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive
2022.11.10 18:42:05 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][0], node[W5aIUzOyQZ2OIplyipcVyA], [P], s[STARTED], a[id=GMRWIDwGRF65UXw29B0Peg] on inactive
2022.11.10 18:42:05 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive
2022.11.10 18:42:05 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][1], node[W5aIUzOyQZ2OIplyipcVyA], [P], s[STARTED], a[id=23tmdgYBTdCPZP6N_xmAqA] on inactive
2022.11.10 18:42:05 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive
2022.11.10 18:42:05 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][2], node[W5aIUzOyQZ2OIplyipcVyA], [P], s[STARTED], a[id=CIwaLPyTTeSnxvfsyhdOgw] on inactive
2022.11.10 18:42:05 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive
2022.11.10 18:42:05 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][3], node[W5aIUzOyQZ2OIplyipcVyA], [P], s[STARTED], a[id=M3AOmI0HRUCJEbqGQOsdkQ] on inactive
2022.11.10 18:42:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}]
2022.11.10 18:42:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}]
2022.11.10 18:42:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857126, source='peer recovery'}}}]
2022.11.10 18:42:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101857343, source='peer recovery'}}}]
2022.11.10 18:42:06 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101858837, source='peer recovery'}}}]
2022.11.10 18:42:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:42:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:42:07 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101800427, source='peer recovery'}}}]
2022.11.10 18:42:08 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101803788, source='peer recovery'}}}]
2022.11.10 18:42:08 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=1, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=0, timestamp=1668101812369, source='peer recovery'}}}]
2022.11.10 18:42:10 DEBUG es[][o.e.i.s.IndexShard] shard is now inactive
2022.11.10 18:42:10 DEBUG es[][o.e.i.f.SyncedFlushService] flushing shard [projectmeasures][4], node[W5aIUzOyQZ2OIplyipcVyA], [P], s[STARTED], a[id=4XL2dF06QsCgmLSMRlluMA] on inactive
2022.11.10 18:42:12 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=4, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=17, timestamp=1668101862048, source='peer recovery'}}}]
2022.11.10 18:42:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=2, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=6, timestamp=1668102102946, source='peer recovery'}}}]
2022.11.10 18:42:13 DEBUG es[][o.e.i.s.ReplicationTracker] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=1, version=2, leases={peer_recovery/W5aIUzOyQZ2OIplyipcVyA=RetentionLease{id='peer_recovery/W5aIUzOyQZ2OIplyipcVyA', retainingSequenceNumber=4, timestamp=1668102102946, source='peer recovery'}}}]
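Each "shard is now inactive" line followed by a SyncedFlushService entry shows the indexing-memory controller marking an idle shard inactive and flushing it; the idle threshold is the node-level indices.memory.shard_inactive_time setting (5m by default). A sketch, under the same endpoint assumption as above, that reads the effective default and triggers the equivalent flush by hand:

    import json, urllib.request

    BASE = "http://127.0.0.1:9001"  # assumed embedded-ES HTTP address

    # The threshold only appears in the response when include_defaults is set.
    url = (BASE + "/_cluster/settings?include_defaults=true"
           "&filter_path=defaults.indices.memory.shard_inactive_time")
    print(json.load(urllib.request.urlopen(url)))

    # Force the same flush on one index (empty-body POST).
    req = urllib.request.Request(BASE + "/components/_flush", method="POST")
    print(json.load(urllib.request.urlopen(req)))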
2022.11.10 18:42:21 INFO es[][o.e.n.Node] stopping ...
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndicesService] [metadatas] closing ... (reason [SHUTDOWN])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndicesService] [views] closing ... (reason [SHUTDOWN])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndicesService] [issues] closing ... (reason [SHUTDOWN])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndicesService] [issues/xb_5ZqTkQNC1ZOcDunuYdA] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndicesService] [metadatas/Oth1bCT6T3iI8grVY9VGqw] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndicesService] [views/DeKrbCizSj2NfkoADmnwgA] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:21 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndicesService] [projectmeasures] closing ... (reason [SHUTDOWN])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndicesService] [projectmeasures/WuM0xXvfS_elTVNxiAG9XA] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndicesService] [rules] closing ... (reason [SHUTDOWN])
2022.11.10 18:42:21 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:21 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndicesService] [rules/XKhsuPkESGGZvj9wjlKlBg] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:21 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:21 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:21 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:21 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:21 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:21 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:21 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2022.11.10 18:42:21 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{local_checkpoint=5, max_unsafe_auto_id_timestamp=-1, translog_uuid=cVifa_TFQ4SrUqkHSpk6ww, min_retained_seq_no=0, history_uuid=hCFHK9IFQ12pd9-_JYcffg, es_version=7.17.5, max_seq_no=5}]}], last commit [CommitPoint{segment[segments_3], userData[{local_checkpoint=5, max_unsafe_auto_id_timestamp=-1, translog_uuid=cVifa_TFQ4SrUqkHSpk6ww, min_retained_seq_no=0, history_uuid=hCFHK9IFQ12pd9-_JYcffg, es_version=7.17.5, max_seq_no=5}]}]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{local_checkpoint=16, max_unsafe_auto_id_timestamp=-1, translog_uuid=Y5A402O6SESH0KTtofqQdA, min_retained_seq_no=0, history_uuid=t7BGTBaSTu-5opvZ-fZOrA, es_version=7.17.5, max_seq_no=16}]}], last commit [CommitPoint{segment[segments_3], userData[{local_checkpoint=16, max_unsafe_auto_id_timestamp=-1, translog_uuid=Y5A402O6SESH0KTtofqQdA, min_retained_seq_no=0, history_uuid=t7BGTBaSTu-5opvZ-fZOrA, es_version=7.17.5, max_seq_no=16}]}]
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] Delete index commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=t7BGTBaSTu-5opvZ-fZOrA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=Y5A402O6SESH0KTtofqQdA}]}]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] new commit on flush, hasUncommittedChanges:true, force:false, shouldPeriodicallyFlush:false
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] Delete index commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=hCFHK9IFQ12pd9-_JYcffg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=cVifa_TFQ4SrUqkHSpk6ww}]}]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] new commit on flush, hasUncommittedChanges:true, force:false, shouldPeriodicallyFlush:false
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown])
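The "Safe commit ... last commit" pairs above show the final flush-on-close leaving each shard with its safe commit identical to its last commit (local_checkpoint equal to max_seq_no), so the next startup can open the shard from that commit without replaying translog operations. A sketch that checks the same condition through the stats API, under the same endpoint assumption as above; the index name is again illustrative:

    import json, urllib.request

    url = "http://127.0.0.1:9001/metadatas/_stats/translog"
    stats = json.load(urllib.request.urlopen(url))
    tl = stats["indices"]["metadatas"]["total"]["translog"]
    # 0 uncommitted operations means the last Lucene commit covers everything.
    print(tl["uncommitted_operations"], tl["uncommitted_size_in_bytes"])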
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:22 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:22 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:42:22 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:22 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndicesService] [projectmeasures/WuM0xXvfS_elTVNxiAG9XA] closed... (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndicesService] [components] closing ... (reason [SHUTDOWN])
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndicesService] [components/9Xew6hvQQhe2UCil8G2lug] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:22 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:22 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown])
2022.11.10 18:42:22 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:22 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:42:22 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndicesService] [views/DeKrbCizSj2NfkoADmnwgA] closed... (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndicesService] [users] closing ... (reason [SHUTDOWN])
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndicesService] [users/W4A_SYBsRYuRAW9zSwtpuw] closing index service (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [0] closing... (reason: [shutdown])
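The close sequence is the same for every index: each shard moves [STARTED]->[CLOSED], its engine flushes and gives up the write lock, the translog closes, and "store reference count on close: 0" confirms no reader still holds the shard's files; only then are the per-index BitsetFilterCache and IndexQueryCache dropped. A sketch for observing those caches on a running node, same endpoint assumption as above:

    import json, urllib.request

    # Node-level cache statistics (query cache and fielddata) for all nodes.
    url = ("http://127.0.0.1:9001/_nodes/stats/indices/query_cache,fielddata"
           "?filter_path=nodes.*.indices")
    print(json.dumps(json.load(urllib.request.urlopen(url)), indent=2))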
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:23 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndicesService] [issues/xb_5ZqTkQNC1ZOcDunuYdA] closed... (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [1] closing... (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{local_checkpoint=3, max_unsafe_auto_id_timestamp=-1, translog_uuid=QcV-qxo9TiOBWdpP_ld1Zg, min_retained_seq_no=0, history_uuid=B2zJgVIcQruDAYkj81hung, es_version=7.17.5, max_seq_no=3}]}], last commit [CommitPoint{segment[segments_3], userData[{local_checkpoint=3, max_unsafe_auto_id_timestamp=-1, translog_uuid=QcV-qxo9TiOBWdpP_ld1Zg, min_retained_seq_no=0, history_uuid=B2zJgVIcQruDAYkj81hung, es_version=7.17.5, max_seq_no=3}]}]
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] Delete index commit [CommitPoint{segment[segments_2], userData[{es_version=7.17.5, history_uuid=B2zJgVIcQruDAYkj81hung, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_uuid=QcV-qxo9TiOBWdpP_ld1Zg}]}]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] new commit on flush, hasUncommittedChanges:true, force:false, shouldPeriodicallyFlush:false
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [0] closed (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndicesService] [users/W4A_SYBsRYuRAW9zSwtpuw] closed... (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndicesService] [metadatas/Oth1bCT6T3iI8grVY9VGqw] closed... (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [2] closing... (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:23 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [1] closed (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [2] closed (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [3] closing... (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [3] closed (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [4] closing... (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.IndexShard] state: [STARTED]->[CLOSED], reason [shutdown]
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] flushing shard on close - this might take some time to sync files to disk
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close now acquiring writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] close acquired writeLock
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndicesService] [rules/XKhsuPkESGGZvj9wjlKlBg] closed... (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.t.Translog] translog closed
2022.11.10 18:42:23 DEBUG es[][o.e.i.e.Engine] engine closed [api]
2022.11.10 18:42:23 DEBUG es[][o.e.i.s.Store] store reference count on close: 0
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndexService] [4] closed (reason: [shutdown])
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2022.11.10 18:42:23 DEBUG es[][o.e.i.IndicesService] [components/9Xew6hvQQhe2UCil8G2lug] closed... (reason [SHUTDOWN][shutdown])
2022.11.10 18:42:23 INFO es[][o.e.n.Node] stopped
2022.11.10 18:42:23 INFO es[][o.e.n.Node] closing ...
2022.11.10 18:42:23 INFO es[][o.e.n.Node] closed
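The stop completes cleanly: all seven indices report closed..., then the node logs stopped, closing ... and closed with no errors in between. Before a planned stop, the same quiesced state can be reached explicitly; a sketch, with the HTTP endpoint once more an assumption rather than a value from this log:

    import json, urllib.request

    BASE = "http://127.0.0.1:9001"  # assumed embedded-ES HTTP address

    # Wait up to 30s for all shards to be assigned, then flush every index
    # so the subsequent shutdown has nothing left to sync to disk.
    health = json.load(urllib.request.urlopen(
        BASE + "/_cluster/health?wait_for_status=green&timeout=30s"))
    print("cluster status:", health["status"])
    urllib.request.urlopen(
        urllib.request.Request(BASE + "/_flush", method="POST"))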