I pasted the web logs above showing the failure to load the service; here is the corresponding es.log output:
2019.07.11 14:57:43 WARN es[][o.e.c.s.ClusterApplierService] cluster state applier task [apply cluster state (from master [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{gjdecsVwQbyz8F5FELoKTw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [12] source [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [SurqoOi_R5SN4SCtrerKVA], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [SurqoOi_R5SN4SCtrerKVA], message [after existing store recovery; bootstrap_history_uuid=false]}]]])] took [48.9s] above the warn threshold of 30s
2019.07.11 14:57:43 WARN es[][o.e.c.s.MasterService] cluster state update task [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [SurqoOi_R5SN4SCtrerKVA], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [SurqoOi_R5SN4SCtrerKVA], message [after existing store recovery; bootstrap_history_uuid=false]}]] took [48.9s] above the warn threshold of 30s
2019.07.11 14:58:28 WARN es[][o.e.c.s.MasterService] cluster state update task [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1NwnK47STUK77xz_8Kh4qw], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1NwnK47STUK77xz_8Kh4qw], message [after existing store recovery; bootstrap_history_uuid=false]}]] took [43.4s] above the warn threshold of 30s
2019.07.11 14:58:28 WARN es[][o.e.c.s.ClusterApplierService] cluster state applier task [apply cluster state (from master [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{gjdecsVwQbyz8F5FELoKTw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [14] source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1NwnK47STUK77xz_8Kh4qw], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1NwnK47STUK77xz_8Kh4qw], message [after existing store recovery; bootstrap_history_uuid=false]}]]])] took [43.4s] above the warn threshold of 30s
2019.07.11 14:59:17 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[(C:)]], net usable_space [122.1gb], net total_space [149.5gb], types [NTFS]
2019.07.11 14:59:17 INFO es[][o.e.e.NodeEnvironment] heap size [494.9mb], compressed ordinary object pointers [true]
2019.07.11 14:59:17 INFO es[][o.e.n.Node] node name [sonarqube], node ID [m-OY1cl5Sgmbu20r4RukRg]
2019.07.11 14:59:17 INFO es[][o.e.n.Node] version[6.6.2], pid[6780], build[unknown/unknown/3bd3e59/2019-03-06T15:16:26.864148Z], OS[Windows Server 2016/10.0/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_211/25.211-b12]
2019.07.11 14:59:17 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=C:\SonarQube\var\sonarqube\temp\es6, -XX:ErrorFile=C:\SonarQube\logs\es_hs_err_pid%p.log, -Xms512m, -Xmx512m, -XX:+HeapDumpOnOutOfMemoryError, -Delasticsearch, -Des.path.home=C:\SonarQube\elasticsearch, -Des.path.conf=C:\SonarQube\var\sonarqube\temp\conf\es]
2019.07.11 14:59:18 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2019.07.11 14:59:18 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2019.07.11 14:59:18 INFO es[][o.e.p.PluginsService] loaded module [mapper-extras]
2019.07.11 14:59:18 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2019.07.11 14:59:18 INFO es[][o.e.p.PluginsService] loaded module [percolator]
2019.07.11 14:59:18 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2019.07.11 14:59:18 INFO es[][o.e.p.PluginsService] loaded module [repository-url]
2019.07.11 14:59:18 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2019.07.11 14:59:18 INFO es[][o.e.p.PluginsService] no plugins loaded
2019.07.11 14:59:21 WARN es[][o.e.d.c.s.Settings] [http.enabled] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
2019.07.11 14:59:22 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and host providers [settings]
2019.07.11 14:59:22 INFO es[][o.e.n.Node] initialized
2019.07.11 14:59:22 INFO es[][o.e.n.Node] starting ...
2019.07.11 14:59:22 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2019.07.11 14:59:25 INFO es[][o.e.c.s.MasterService] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{fXO2NbTaQ1Si6euaeZLeVA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}
2019.07.11 14:59:25 INFO es[][o.e.c.s.ClusterApplierService] new_master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{fXO2NbTaQ1Si6euaeZLeVA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}, reason: apply cluster state (from master [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{fXO2NbTaQ1Si6euaeZLeVA}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
2019.07.11 14:59:25 INFO es[][o.e.n.Node] started
2019.07.11 14:59:26 INFO es[][o.e.g.GatewayService] recovered [7] indices into cluster_state
2019.07.11 15:00:17 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[(C:)]], net usable_space [122.1gb], net total_space [149.5gb], types [NTFS]
2019.07.11 15:00:17 INFO es[][o.e.e.NodeEnvironment] heap size [494.9mb], compressed ordinary object pointers [true]
2019.07.11 15:00:18 INFO es[][o.e.n.Node] node name [sonarqube], node ID [m-OY1cl5Sgmbu20r4RukRg]
2019.07.11 15:00:18 INFO es[][o.e.n.Node] version[6.6.2], pid[6052], build[unknown/unknown/3bd3e59/2019-03-06T15:16:26.864148Z], OS[Windows Server 2016/10.0/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_211/25.211-b12]
2019.07.11 15:00:18 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=C:\SonarQube\var\sonarqube\temp\es6, -XX:ErrorFile=C:\SonarQube\logs\es_hs_err_pid%p.log, -Xms512m, -Xmx512m, -XX:+HeapDumpOnOutOfMemoryError, -Delasticsearch, -Des.path.home=C:\SonarQube\elasticsearch, -Des.path.conf=C:\SonarQube\var\sonarqube\temp\conf\es]
2019.07.11 15:00:18 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2019.07.11 15:00:18 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2019.07.11 15:00:18 INFO es[][o.e.p.PluginsService] loaded module [mapper-extras]
2019.07.11 15:00:18 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2019.07.11 15:00:18 INFO es[][o.e.p.PluginsService] loaded module [percolator]
2019.07.11 15:00:18 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2019.07.11 15:00:18 INFO es[][o.e.p.PluginsService] loaded module [repository-url]
2019.07.11 15:00:18 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2019.07.11 15:00:18 INFO es[][o.e.p.PluginsService] no plugins loaded
2019.07.11 15:00:21 WARN es[][o.e.d.c.s.Settings] [http.enabled] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
2019.07.11 15:00:22 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and host providers [settings]
2019.07.11 15:00:23 INFO es[][o.e.n.Node] initialized
2019.07.11 15:00:23 INFO es[][o.e.n.Node] starting ...
2019.07.11 15:00:23 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2019.07.11 15:00:26 INFO es[][o.e.c.s.MasterService] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}
2019.07.11 15:00:26 INFO es[][o.e.c.s.ClusterApplierService] new_master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}, reason: apply cluster state (from master [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
2019.07.11 15:00:26 INFO es[][o.e.n.Node] started
2019.07.11 15:00:26 INFO es[][o.e.g.GatewayService] recovered [7] indices into cluster_state
2019.07.11 15:04:05 WARN es[][o.e.c.s.ClusterApplierService] cluster state applier task [apply cluster state (from master [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [13] source [shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [L3tXgTBsS36nevLzNT-SMA], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [L3tXgTBsS36nevLzNT-SMA], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [yirht4LvSUOdXX1m00JfeA], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [yirht4LvSUOdXX1m00JfeA], message [after existing store recovery; bootstrap_history_uuid=false]}]]])] took [44.6s] above the warn threshold of 30s
2019.07.11 15:04:05 WARN es[][o.e.c.s.MasterService] cluster state update task [shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [L3tXgTBsS36nevLzNT-SMA], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [L3tXgTBsS36nevLzNT-SMA], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [yirht4LvSUOdXX1m00JfeA], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [yirht4LvSUOdXX1m00JfeA], message [after existing store recovery; bootstrap_history_uuid=false]}]] took [44.6s] above the warn threshold of 30s
2019.07.11 15:04:35 WARN es[][o.e.t.TcpTransport] send message failed [channel: Netty4TcpChannel{localAddress=0.0.0.0/0.0.0.0:9001, remoteAddress=/127.0.0.1:51893}]
java.nio.channels.ClosedChannelException: null
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source) ~[?:?]
2019.07.11 15:04:57 WARN es[][o.e.c.s.MasterService] cluster state update task [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [6VOqe9wiQs6YCokZyMtcxA], message [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [6VOqe9wiQs6YCokZyMtcxA], message [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [6VOqe9wiQs6YCokZyMtcxA], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [6VOqe9wiQs6YCokZyMtcxA], message [after existing store recovery; bootstrap_history_uuid=false]}]] took [50.3s] above the warn threshold of 30s
2019.07.11 15:04:57 WARN es[][o.e.c.s.ClusterApplierService] cluster state applier task [apply cluster state (from master [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [15] source [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [6VOqe9wiQs6YCokZyMtcxA], message [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [6VOqe9wiQs6YCokZyMtcxA], message [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [6VOqe9wiQs6YCokZyMtcxA], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [6VOqe9wiQs6YCokZyMtcxA], message [after existing store recovery; bootstrap_history_uuid=false]}]]])] took [50.3s] above the warn threshold of 30s
2019.07.11 15:05:04 WARN es[][o.e.m.j.JvmGcMonitorService] [gc][277] overhead, spent [1.5s] collecting in the last [1.6s]
2019.07.11 15:05:50 WARN es[][o.e.c.s.ClusterApplierService] cluster state applier task [apply cluster state (from master [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [17] source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [H1Bj_XmUSFKP_9YoP-ln7A], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [H1Bj_XmUSFKP_9YoP-ln7A], message [after existing store recovery; bootstrap_history_uuid=false]}]]])] took [51.6s] above the warn threshold of 30s
2019.07.11 15:05:50 WARN es[][o.e.c.s.MasterService] cluster state update task [shard-started StartedShardEntry{shardId [[components][3]], allocationId [H1Bj_XmUSFKP_9YoP-ln7A], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [H1Bj_XmUSFKP_9YoP-ln7A], message [after existing store recovery; bootstrap_history_uuid=false]}]] took [51.6s] above the warn threshold of 30s
2019.07.11 15:06:37 WARN es[][o.e.c.s.MasterService] cluster state update task [shard-started StartedShardEntry{shardId [[components][2]], allocationId [oWndcW4hRb-EmevC2shhRg], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [oWndcW4hRb-EmevC2shhRg], message [after existing store recovery; bootstrap_history_uuid=false]}]] took [46.3s] above the warn threshold of 30s
2019.07.11 15:06:37 WARN es[][o.e.c.s.ClusterApplierService] cluster state applier task [apply cluster state (from master [master {sonarqube}{m-OY1cl5Sgmbu20r4RukRg}{2p3DF7lbRai_v1UFQb6RiQ}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [19] source [shard-started StartedShardEntry{shardId [[components][2]], allocationId [oWndcW4hRb-EmevC2shhRg], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [oWndcW4hRb-EmevC2shhRg], message [after existing store recovery; bootstrap_history_uuid=false]}]]])] took [46.3s] above the warn threshold of 30s
2019.07.11 15:06:39 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]] ...]).
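For what it's worth, the JVM arguments above show the Elasticsearch node running with the SonarQube default heap (`-Xms512m, -Xmx512m`, reported as `heap size [494.9mb]`), and the GC warning (`spent [1.5s] collecting in the last [1.6s]`) together with cluster-state tasks taking 40-50s looks like memory pressure during index recovery. If that turns out to be the cause, one possible adjustment (values are illustrative, and this assumes the standard `conf/sonar.properties` location on our install) would be:

```properties
# conf/sonar.properties -- raise the Elasticsearch heap from the 512m default.
# Size this to the host's free memory; both values kept equal per ES guidance.
sonar.search.javaOpts=-Xms2g -Xmx2g -XX:+HeapDumpOnOutOfMemoryError
```

SonarQube would need a restart for the setting to take effect; I haven't confirmed yet whether this resolves the startup timeout.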