Error during SonarQube 9.9 LTA startup

Make sure to tell us:

  • What version are you upgrading from? 8.9.10
  • System information (Operating system, Java version, Database provider/version):
      • Operating System: Windows 10
      • Java Version: JDK 17, java version "17.0.10" 2024-01-16 LTS
      • Database: PostgreSQL 12
  • What's the issue you're facing? I am trying to upgrade SonarQube from 8.9.10 to 9.9 and then to 10.5. When starting SonarQube 9.9, the server does not come up: sonar.log (below) shows Elasticsearch starting, but the Web Server process exits with exit value 0 a few seconds after launch and SonarQube stops. The relevant conf/sonar.properties settings are sketched after this list.
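
For reference, a minimal conf/sonar.properties for this setup would look roughly like the sketch below. The sonar.jdbc.* values are illustrative placeholders, not real credentials; they must point at the same PostgreSQL 12 database the 8.9.10 instance used. sonar.search.port matches the 9005 seen in the log (the default is 9001), and sonar.log.level=DEBUG is only set while troubleshooting.

# conf/sonar.properties (illustrative sketch; values are placeholders)
sonar.jdbc.username=sonar
sonar.jdbc.password=*****
sonar.jdbc.url=jdbc:postgresql://localhost:5432/sonarqube
# Elasticsearch HTTP port used by the app process (9005 in the log; default is 9001)
sonar.search.port=9005
# raise verbosity while troubleshooting (default INFO)
sonar.log.level=DEBUG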

sonar.log

2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 79] connection request failed
2024.05.29 10:28:03 DEBUG app[][o.e.c.RestClient] request [GET http://127.0.0.1:9005/] failed
java.net.ConnectException: Connection refused: no further information
	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
	at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:174)
	at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:148)
	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351)
	at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221)
	at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
	at java.base/java.lang.Thread.run(Thread.java:842)
2024.05.29 10:28:03 DEBUG app[][o.e.c.RestClient] updated [[host=http://127.0.0.1:9005]] already in blacklist
[... three identical "Connection refused" retry cycles (exchanges 80-82) omitted ...]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 83] start execution
2024.05.29 10:28:03 DEBUG app[][o.a.h.c.p.RequestAddCookies] CookieSpec selected: default
2024.05.29 10:28:03 DEBUG app[][o.a.h.c.p.RequestAuthCache] Re-using cached 'basic' auth scheme for http://127.0.0.1:9005
2024.05.29 10:28:03 DEBUG app[][o.a.h.c.p.RequestAuthCache] No credentials for preemptive authentication
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 83] Request connection for {}->http://127.0.0.1:9005
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://127.0.0.1:9005][total kept alive: 0; route allocated: 0 of 10; total allocated: 0 of 30]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection leased: [id: http-outgoing-0][route: {}->http://127.0.0.1:9005][total kept alive: 0; route allocated: 1 of 10; total allocated: 0 of 30]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 83] Connection allocated: CPoolProxy{http-outgoing-0 [ACTIVE]}
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:]: Set attribute http.nio.exchange-handler
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][rw:]: Event set [w]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][rw:]: Set timeout 0
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE]: Connected
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][rw:]: Set attribute http.nio.http-exchange-state
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 83] Start connection routing
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 83] route completed
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 83] Connection route established
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 83] Attempt 1 to execute request
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 83] Target auth state: UNCHALLENGED
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 83] Proxy auth state: UNCHALLENGED
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][rw:]: Set timeout 30000
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> GET / HTTP/1.1
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> X-Elastic-Client-Meta: es=7.17.15,jv=17,t=7.17.15,hc=4.1.4,kt=1.8
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> Content-Length: 0
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> Host: 127.0.0.1:9005
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> Connection: Keep-Alive
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> User-Agent: elasticsearch-java/7.17.15 (Java/17.0.10)
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][rw:]: Event set [w]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 83] Request completed
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][rw:w]: 205 bytes written
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "GET / HTTP/1.1[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "X-Elastic-Client-Meta: es=7.17.15,jv=17,t=7.17.15,hc=4.1.4,kt=1.8[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "Content-Length: 0[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "Host: 127.0.0.1:9005[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "User-Agent: elasticsearch-java/7.17.15 (Java/17.0.10)[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:w]: Event cleared [w]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:r]: 661 bytes read
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "X-elastic-product: Elasticsearch[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "content-type: application/json; charset=UTF-8[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "content-length: 540[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "{[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "  "name" : "sonarqube",[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "  "cluster_name" : "sonarqube",[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "  "cluster_uuid" : "x8xbxDj4QmOMENP_Eae2cg",[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "  "version" : {[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "    "number" : "7.17.15",[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "    "build_flavor" : "unknown",[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "    "build_type" : "unknown",[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "    "build_hash" : "0b8ecfb4378335f4689c4223d1f1115f16bef3ba",[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "    "build_date" : "2023-11-10T22:03:46.987399016Z",[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "    "build_snapshot" : false,[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "    "lucene_version" : "8.11.1",[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "    "minimum_wire_compatibility_version" : "6.8.0",[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "    "minimum_index_compatibility_version" : "6.0.0-beta1"[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "  },[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "  "tagline" : "You Know, for Search"[\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "}[\n]"
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 << HTTP/1.1 200 OK
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 << X-elastic-product: Elasticsearch
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 << content-type: application/json; charset=UTF-8
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 << content-length: 540
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE(540)] Response received
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 83] Response received HTTP/1.1 200 OK
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE(540)] Input ready
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 83] Consume content
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 83] Connection can be kept alive indefinitely
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 83] Response processed
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 83] releasing connection
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Releasing connection: [id: http-outgoing-0][route: {}->http://127.0.0.1:9005][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection [id: http-outgoing-0][route: {}->http://127.0.0.1:9005] can be kept alive indefinitely
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:r]: Set timeout 0
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection released: [id: http-outgoing-0][route: {}->http://127.0.0.1:9005][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
2024.05.29 10:28:03 DEBUG app[][o.e.c.RestClient] request [GET http://127.0.0.1:9005/] returned [HTTP/1.1 200 OK]
2024.05.29 10:28:03 DEBUG app[][o.e.c.RestClient] removed [[host=http://127.0.0.1:9005]] from blacklist
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 540; pos: 540; completed: true]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 84] start execution
2024.05.29 10:28:03 DEBUG app[][o.a.h.c.p.RequestAddCookies] CookieSpec selected: default
2024.05.29 10:28:03 DEBUG app[][o.a.h.c.p.RequestAuthCache] Re-using cached 'basic' auth scheme for http://127.0.0.1:9005
2024.05.29 10:28:03 DEBUG app[][o.a.h.c.p.RequestAuthCache] No credentials for preemptive authentication
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 84] Request connection for {}->http://127.0.0.1:9005
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://127.0.0.1:9005][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:r]: Set timeout 0
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection leased: [id: http-outgoing-0][route: {}->http://127.0.0.1:9005][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 84] Connection allocated: CPoolProxy{http-outgoing-0 [ACTIVE]}
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:r]: Set attribute http.nio.exchange-handler
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][rw:r]: Event set [w]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 84] Attempt 1 to execute request
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 84] Target auth state: UNCHALLENGED
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 84] Proxy auth state: UNCHALLENGED
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][rw:w]: Set timeout 30000
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> GET /_cluster/health?master_timeout=30s&level=cluster&timeout=30s&wait_for_status=yellow HTTP/1.1
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> X-Elastic-Client-Meta: es=7.17.15,jv=17,t=7.17.15,hc=4.1.4,kt=1.8
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> Content-Length: 0
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> Host: 127.0.0.1:9005
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> Connection: Keep-Alive
2024.05.29 10:28:03 DEBUG app[][o.a.http.headers] http-outgoing-0 >> User-Agent: elasticsearch-java/7.17.15 (Java/17.0.10)
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][rw:w]: Event set [w]
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 84] Request completed
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][rw:w]: 288 bytes written
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "GET /_cluster/health?master_timeout=30s&level=cluster&timeout=30s&wait_for_status=yellow HTTP/1.1[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "X-Elastic-Client-Meta: es=7.17.15,jv=17,t=7.17.15,hc=4.1.4,kt=1.8[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "Content-Length: 0[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "Host: 127.0.0.1:9005[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "User-Agent: elasticsearch-java/7.17.15 (Java/17.0.10)[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][org.apache.http.wire] http-outgoing-0 >> "[\r][\n]"
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready
2024.05.29 10:28:03 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:w]: Event cleared [w]
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:r]: 506 bytes read
2024.05.29 10:28:04 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
2024.05.29 10:28:04 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "X-elastic-product: Elasticsearch[\r][\n]"
2024.05.29 10:28:04 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "content-type: application/json; charset=UTF-8[\r][\n]"
2024.05.29 10:28:04 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "content-length: 385[\r][\n]"
2024.05.29 10:28:04 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "[\r][\n]"
2024.05.29 10:28:04 DEBUG app[][org.apache.http.wire] http-outgoing-0 << "{"cluster_name":"sonarqube","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}"
2024.05.29 10:28:04 DEBUG app[][o.a.http.headers] http-outgoing-0 << HTTP/1.1 200 OK
2024.05.29 10:28:04 DEBUG app[][o.a.http.headers] http-outgoing-0 << X-elastic-product: Elasticsearch
2024.05.29 10:28:04 DEBUG app[][o.a.http.headers] http-outgoing-0 << content-type: application/json; charset=UTF-8
2024.05.29 10:28:04 DEBUG app[][o.a.http.headers] http-outgoing-0 << content-length: 385
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE(385)] Response received
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 84] Response received HTTP/1.1 200 OK
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE(385)] Input ready
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 84] Consume content
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 84] Connection can be kept alive indefinitely
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 84] Response processed
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 84] releasing connection
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Releasing connection: [id: http-outgoing-0][route: {}->http://127.0.0.1:9005][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection [id: http-outgoing-0][route: {}->http://127.0.0.1:9005] can be kept alive indefinitely
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:r]: Set timeout 0
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection released: [id: http-outgoing-0][route: {}->http://127.0.0.1:9005][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
2024.05.29 10:28:04 DEBUG app[][o.e.c.RestClient] request [GET http://127.0.0.1:9005/_cluster/health?master_timeout=30s&level=cluster&timeout=30s&wait_for_status=yellow] returned [HTTP/1.1 200 OK]
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 385; pos: 385; completed: true]
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection manager is shutting down
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:49403<->127.0.0.1:9005[ACTIVE][r:r]: Close
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [CLOSED]: Disconnected
2024.05.29 10:28:04 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection manager shut down
2024.05.29 10:28:04 INFO  app[][o.s.a.SchedulerImpl] Process[es] is up
2024.05.29 10:28:04 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] EventWatcher[ElasticSearch] tryToMoveTo ElasticSearch from STARTED to STARTING => false
2024.05.29 10:28:04 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] EventWatcher[ElasticSearch] tryToMoveTo Web Server from INIT to STARTING => true
2024.05.29 10:28:04 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[WEB_SERVER] from [C:\Users\thavasikumar.konar\Downloads\sonarqube-developer-9.9.5.90363\sonarqube-9.9.5.90363]: C:\Program Files\Java\jdk-17\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=C:\Users\thavasikumar.konar\Downloads\sonarqube-developer-9.9.5.90363\sonarqube-9.9.5.90363\temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/sun.management=ALL-UNNAMED --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED -Dcom.redhat.fips=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/sonar-application-9.9.5.90363.jar;C:\Users\thavasikumar.konar\Downloads\sonarqube-developer-9.9.5.90363\sonarqube-9.9.5.90363\lib\jdbc\postgresql\postgresql-42.5.1.jar org.sonar.server.app.WebServer C:\Users\thavasikumar.konar\Downloads\sonarqube-developer-9.9.5.90363\sonarqube-9.9.5.90363\temp\sq-process15747546223617605953properties
2024.05.29 10:28:04 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] EventWatcher[ElasticSearch] tryToMoveTo Web Server from STARTING to STARTED => true
2024.05.29 10:28:08 DEBUG app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [Web Server]: 0
2024.05.29 10:28:08 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] StopWatcher[Web Server] tryToMoveTo Web Server from STARTED to HARD_STOPPING => true
2024.05.29 10:28:08 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] StopWatcher[Web Server] tryToMoveTo Web Server from HARD_STOPPING to FINALIZE_STOPPING => true
2024.05.29 10:28:08 INFO  app[][o.s.a.SchedulerImpl] Process[Web Server] is stopped
2024.05.29 10:28:08 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] StopWatcher[Web Server] tryToMoveTo Web Server from FINALIZE_STOPPING to STOPPED => true
2024.05.29 10:28:08 DEBUG app[][o.s.a.NodeLifecycle] HardStopper-0 tryToMoveTo from STARTING to HARD_STOPPING => true
2024.05.29 10:28:08 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] HardStopper-0 tryToMoveTo Compute Engine from INIT to HARD_STOPPING => false
2024.05.29 10:28:08 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] HardStopper-0 tryToMoveTo Web Server from STOPPED to HARD_STOPPING => false
2024.05.29 10:28:08 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] HardStopper-0 tryToMoveTo ElasticSearch from STARTED to HARD_STOPPING => true
2024.05.29 10:28:08 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] HardStopper-0 tryToMoveTo ElasticSearch from HARD_STOPPING to FINALIZE_STOPPING => true
2024.05.29 10:28:08 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] StopWatcher[ElasticSearch] tryToMoveTo ElasticSearch from FINALIZE_STOPPING to HARD_STOPPING => false
2024.05.29 10:28:08 INFO  app[][o.s.a.SchedulerImpl] Process[ElasticSearch] is stopped
2024.05.29 10:28:08 DEBUG app[][o.s.a.NodeLifecycle] HardStopper-0 tryToMoveTo from HARD_STOPPING to FINALIZE_STOPPING => true
2024.05.29 10:28:08 DEBUG app[][o.s.a.NodeLifecycle] HardStopper-0 tryToMoveTo from FINALIZE_STOPPING to STOPPED => true
2024.05.29 10:28:08 INFO  app[][o.s.a.SchedulerImpl] SonarQube is stopped
2024.05.29 10:28:08 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] HardStopper-0 tryToMoveTo ElasticSearch from FINALIZE_STOPPING to STOPPED => true
2024.05.29 10:28:08 DEBUG app[][o.s.a.NodeLifecycle] HardStopper-0 tryToMoveTo from STOPPED to FINALIZE_STOPPING => false
2024.05.29 10:28:08 DEBUG app[][o.s.a.NodeLifecycle] Shutdown Hook tryToMoveTo from STOPPED to STOPPING => false
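
The key line above is "Process exited with exit value [Web Server]: 0", four seconds after the web process was launched; the repeated "Connection refused" entries earlier are just the app process polling Elasticsearch on port 9005 until it answered. The web process writes its own log, which should contain the actual startup failure; roughly (install path taken from the launch command above):

cd C:\Users\thavasikumar.konar\Downloads\sonarqube-developer-9.9.5.90363\sonarqube-9.9.5.90363
type logs\web.log
rem scan all process logs for errors
findstr /i /c:"ERROR" logs\web.log logs\sonar.log logs\es.log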

es.log

2024.05.29 10:27:55 DEBUG es[][i.n.u.i.CleanerJava9] java.nio.ByteBuffer.cleaner(): unavailable
java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable
	at io.netty.util.internal.CleanerJava9.<clinit>(CleanerJava9.java:68) [netty-common-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:193) [netty-common-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.util.ConstantPool.<init>(ConstantPool.java:34) [netty-common-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27) [netty-common-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27) [netty-common-4.1.94.Final.jar:4.1.94.Final]
	at org.elasticsearch.http.netty4.Netty4HttpServerTransport.<clinit>(Netty4HttpServerTransport.java:302) [transport-netty4-client-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:49) [transport-netty4-client-7.17.15.jar:7.17.15]
	at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:84) [elasticsearch-7.17.15.jar:7.17.15]
	at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273) [?:?]
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) [?:?]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) [?:?]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) [?:?]
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) [?:?]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [?:?]
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) [?:?]
	at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:84) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.node.Node.<init>(Node.java:483) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.node.Node.<init>(Node.java:309) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) [elasticsearch-cli-7.17.15.jar:7.17.15]
	at org.elasticsearch.cli.Command.main(Command.java:77) [elasticsearch-cli-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) [elasticsearch-7.17.15.jar:7.17.15]
2024.05.29 10:27:55 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: true
2024.05.29 10:27:57 DEBUG es[][o.e.s.ScriptService] using script cache with max_size [3000], expire [0s]
2024.05.29 10:27:58 DEBUG es[][o.e.d.z.ElectMasterService] using minimum_master_nodes [-1]
2024.05.29 10:27:59 DEBUG es[][o.e.m.j.JvmGcMonitorService] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
2024.05.29 10:28:00 DEBUG es[][o.e.m.o.OsService] using refresh_interval [1s]
2024.05.29 10:28:00 DEBUG es[][o.e.m.p.ProcessService] using refresh_interval [1s]
2024.05.29 10:28:00 DEBUG es[][o.e.m.j.JvmService] using refresh_interval [1s]
2024.05.29 10:28:00 DEBUG es[][o.e.m.f.FsService] using refresh_interval [1s]
2024.05.29 10:28:00 DEBUG es[][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2024.05.29 10:28:00 DEBUG es[][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] using [cluster_concurrent_rebalance] with [2]
2024.05.29 10:28:01 DEBUG es[][o.e.c.r.a.d.ThrottlingAllocationDecider] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
2024.05.29 10:28:01 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [51.1mb] max filter count [10000]
2024.05.29 10:28:01 DEBUG es[][o.e.i.IndexingMemoryController] using indexing buffer size [51.1mb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
2024.05.29 10:28:01 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2024.05.29 10:28:01 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.targetRecords: 4
2024.05.29 10:28:01 INFO  es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2024.05.29 10:28:01 DEBUG es[][o.e.h.n.Netty4HttpServerTransport] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[64kb], max_composite_buffer_components[69905], pipelining_max_events[10000]
2024.05.29 10:28:01 INFO  es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2024.05.29 10:28:01 DEBUG es[][o.e.d.SettingsBasedSeedHostsProvider] using initial hosts [127.0.0.1]
2024.05.29 10:28:01 INFO  es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2024.05.29 10:28:02 INFO  es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2024.05.29 10:28:02 DEBUG es[][o.e.n.Node] initializing HTTP handlers ...
2024.05.29 10:28:02 INFO  es[][o.e.n.Node] initialized
2024.05.29 10:28:02 INFO  es[][o.e.n.Node] starting ...
2024.05.29 10:28:02 DEBUG es[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 8
2024.05.29 10:28:02 DEBUG es[][i.n.u.c.GlobalEventExecutor] -Dio.netty.globalEventExecutor.quietPeriodSeconds: 1
2024.05.29 10:28:02 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2024.05.29 10:28:02 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2024.05.29 10:28:02 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: true
2024.05.29 10:28:02 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2024.05.29 10:28:02 DEBUG es[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: unavailable
2024.05.29 10:28:02 DEBUG es[][o.e.t.n.Netty4Transport] using profile[default], worker_count[4], port[49316], bind_host[[127.0.0.1]], publish_host[[127.0.0.1]], receive_predictor[64kb->64kb]
2024.05.29 10:28:02 DEBUG es[][o.e.t.TcpTransport] binding server bootstrap to: [127.0.0.1]
2024.05.29 10:28:02 DEBUG es[][i.n.c.DefaultChannelId] Could not invoke ProcessHandle.current().pid();
java.lang.reflect.InvocationTargetException: null
	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[?:?]
	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
	at java.lang.reflect.Method.invoke(Method.java:568) ~[?:?]
	at io.netty.channel.DefaultChannelId.processHandlePid(DefaultChannelId.java:116) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.DefaultChannelId.defaultProcessId(DefaultChannelId.java:178) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.DefaultChannelId.<clinit>(DefaultChannelId.java:77) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.AbstractChannel.newId(AbstractChannel.java:113) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.AbstractChannel.<init>(AbstractChannel.java:73) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.nio.AbstractNioChannel.<init>(AbstractNioChannel.java:80) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.nio.AbstractNioMessageChannel.<init>(AbstractNioMessageChannel.java:42) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:96) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:89) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:82) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:75) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at org.elasticsearch.transport.CopyBytesServerSocketChannel.<init>(CopyBytesServerSocketChannel.java:38) [transport-netty4-client-7.17.15.jar:7.17.15]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
	at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
	at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
	at io.netty.channel.ReflectiveChannelFactory.newChannel(ReflectiveChannelFactory.java:44) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:310) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:272) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:268) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at org.elasticsearch.transport.netty4.Netty4Transport.bind(Netty4Transport.java:308) [transport-netty4-client-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.netty4.Netty4Transport.bind(Netty4Transport.java:68) [transport-netty4-client-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.TcpTransport.lambda$bindToPort$6(TcpTransport.java:441) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.common.transport.PortsRange.iterate(PortsRange.java:47) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.TcpTransport.bindToPort(TcpTransport.java:439) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.TcpTransport.bindServer(TcpTransport.java:414) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:141) [transport-netty4-client-7.17.15.jar:7.17.15]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:48) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.TransportService.doStart(TransportService.java:318) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:48) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.node.Node.start(Node.java:1176) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:335) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:443) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) [elasticsearch-cli-7.17.15.jar:7.17.15]
	at org.elasticsearch.cli.Command.main(Command.java:77) [elasticsearch-cli-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) [elasticsearch-7.17.15.jar:7.17.15]
Caused by: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "manageProcess")
	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:485) ~[?:?]
	at java.security.AccessController.checkPermission(AccessController.java:1068) ~[?:?]
	at java.lang.SecurityManager.checkPermission(SecurityManager.java:416) ~[?:?]
	at java.lang.ProcessHandleImpl.current(ProcessHandleImpl.java:285) ~[?:?]
	at java.lang.ProcessHandle.current(ProcessHandle.java:136) ~[?:?]
	... 45 more
2024.05.29 10:28:02 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.processId: 16584 (auto-detected)
2024.05.29 10:28:02 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv4Stack: false
2024.05.29 10:28:02 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv6Addresses: false
2024.05.29 10:28:02 DEBUG es[][i.n.u.NetUtilInitializations] Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
2024.05.29 10:28:02 DEBUG es[][i.n.u.NetUtil] Failed to get SOMAXCONN from sysctl and file \proc\sys\net\core\somaxconn. Default: 200
2024.05.29 10:28:02 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 50:6b:8d:ff:fe:65:67:3d (auto-detected)
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 8
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 0
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 9
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 4194304
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: false
2024.05.29 10:28:02 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
2024.05.29 10:28:02 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2024.05.29 10:28:02 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
2024.05.29 10:28:02 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2024.05.29 10:28:02 DEBUG es[][o.e.t.TcpTransport] Bound profile [default] to address {127.0.0.1:49316}
2024.05.29 10:28:02 INFO  es[][o.e.t.TransportService] publish_address {127.0.0.1:49316}, bound_addresses {127.0.0.1:49316}
2024.05.29 10:28:03 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote full state with [0] indices
2024.05.29 10:28:03 INFO  es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2024.05.29 10:28:03 DEBUG es[][o.e.d.SeedHostsResolver] using max_concurrent_resolvers [10], resolver timeout [5s]
2024.05.29 10:28:03 INFO  es[][o.e.c.c.Coordinator] cluster UUID [x8xbxDj4QmOMENP_Eae2cg]
2024.05.29 10:28:03 DEBUG es[][o.e.t.TransportService] now accepting incoming requests
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.Coordinator] startInitialJoin: coordinator becoming CANDIDATE in term 6 (was null, lastKnownLeader was [Optional.empty])
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=89, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2024.05.29 10:28:03 DEBUG es[][o.e.n.Node] waiting to join the cluster. timeout [30s]
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=89, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} starting election
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=608, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={}, electionStarted=false, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=6}, isClosed=false} requesting pre-votes from [{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}]
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}=PreVoteResponse{currentTerm=6, lastAcceptedTerm=6, lastAcceptedVersion=12}}, electionStarted=true, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=6}, isClosed=false} added PreVoteResponse{currentTerm=6, lastAcceptedTerm=6, lastAcceptedVersion=12} from {sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}, starting election
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.Coordinator] starting election with StartJoinRequest{term=7,node={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}}
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.Coordinator] joinLeaderInTerm: for [{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}] with term 7
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.CoordinationState] handleStartJoin: leaving term [6] due to StartJoinRequest{term=7,node={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}}
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.JoinHelper] attempting to join {sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=6, optionalJoin=Optional[Join{term=7, lastAcceptedTerm=6, lastAcceptedVersion=12, sourceNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}}]}
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.JoinHelper] successful response to StartJoinRequest{term=7,node={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}} from {sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: added join Join{term=7, lastAcceptedTerm=6, lastAcceptedVersion=12, sourceNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}} from [{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}] for election, electionWon=true lastAcceptedTerm=6 lastAcceptedVersion=12
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: election won in term [7] with VoteCollection{votes=[YzGF1hTbQSm-3cU5NwyV4A], joins=[Join{term=7, lastAcceptedTerm=6, lastAcceptedVersion=12, sourceNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}}]}
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.Coordinator] handleJoinRequest: coordinator becoming LEADER in term 7 (was CANDIDATE, lastKnownLeader was [Optional.empty])
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [elected-as-master ([1] nodes joined)[{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.JoinHelper] received a join request for an existing node [{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] took [21ms] to compute cluster state update for [elected-as-master ([1] nodes joined)[{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [13], source [elected-as-master ([1] nodes joined)[{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2024.05.29 10:28:03 INFO  es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 7, version: 13, delta: master node changed {previous [], current [{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}]}
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [13]
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [13] with size [347]
2024.05.29 10:28:03 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote full state with [0] indices
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=7, version=13}]: execute
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [13], source [Publication{term=7, version=13}]
2024.05.29 10:28:03 INFO  es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}]}, term: 7, version: 13, reason: Publication{term=7, version=13}
2024.05.29 10:28:03 DEBUG es[][o.e.c.NodeConnectionsService] connecting to {sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}
2024.05.29 10:28:03 DEBUG es[][o.e.c.NodeConnectionsService] connected to {sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 13
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 13
2024.05.29 10:28:03 DEBUG es[][o.e.i.SystemIndexManager] Waiting until state has been recovered
2024.05.29 10:28:03 DEBUG es[][o.e.c.r.a.DiskThresholdMonitor] skipping monitor as the cluster state is not recovered yet
2024.05.29 10:28:03 DEBUG es[][o.e.c.l.NodeAndClusterIdStateListener] Received cluster state update. Setting nodeId=[YzGF1hTbQSm-3cU5NwyV4A] and clusterUuid=[x8xbxDj4QmOMENP_Eae2cg]
2024.05.29 10:28:03 DEBUG es[][o.e.g.GatewayService] performing state recovery...
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=7, version=13}]: took [0s] done applying updated cluster state (version: 13, uuid: DmO6ngfpRGGQLaFHeyazYg)
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.JoinHelper] releasing [1] connections on successful cluster state application
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=7, version=13}
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.JoinHelper] successfully joined {sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=6, optionalJoin=Optional[Join{term=7, lastAcceptedTerm=6, lastAcceptedVersion=12, sourceNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw}{rack_id=sonarqube}}]}
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] took [2ms] to notify listeners on successful publication of cluster state (version: 13, uuid: DmO6ngfpRGGQLaFHeyazYg) for [elected-as-master ([1] nodes joined)[{sonarqube}{YzGF1hTbQSm-3cU5NwyV4A}{D0XTYpN_RsikfZOX0VPSsg}{127.0.0.1}{127.0.0.1:49316}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(post-join reroute)]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] took [7ms] to compute cluster state update for [cluster_reroute(post-join reroute)]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on unchanged cluster state for [cluster_reroute(post-join reroute)]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [update snapshot after shards started [false] or node configuration changed [true]]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] took [1ms] to compute cluster state update for [update snapshot after shards started [false] or node configuration changed [true]]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [update snapshot after shards started [false] or node configuration changed [true]]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [local-gateway-elected-state]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] took [1ms] to compute cluster state update for [local-gateway-elected-state]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [14], source [local-gateway-elected-state]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [14]
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [14] with size [295]
2024.05.29 10:28:03 DEBUG es[][o.e.h.AbstractHttpServerTransport] Bound http to address {127.0.0.1:9005}
2024.05.29 10:28:03 INFO  es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9005}, bound_addresses {127.0.0.1:9005}
2024.05.29 10:28:03 INFO  es[][o.e.n.Node] started
2024.05.29 10:28:03 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [0] indices and skipped [0] unchanged indices
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=7, version=14}]: execute
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [14], source [Publication{term=7, version=14}]
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 14
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 14
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 14
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=7, version=14}]: took [0s] done applying updated cluster state (version: 14, uuid: Cx2XCaChTyaZMaJDmRpJaA)
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=7, version=14}
2024.05.29 10:28:03 INFO  es[][o.e.g.GatewayService] recovered [0] indices into cluster_state
2024.05.29 10:28:03 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 14, uuid: Cx2XCaChTyaZMaJDmRpJaA) for [local-gateway-elected-state]
2024.05.29 10:28:03 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkAccessible: true
2024.05.29 10:28:03 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkBounds: true
2024.05.29 10:28:03 DEBUG es[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@13394740
2024.05.29 10:28:03 DEBUG es[][i.n.h.c.c.Brotli] brotli4j not in the classpath; Brotli support will be unavailable.
2024.05.29 10:28:03 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxCapacityPerThread: disabled
2024.05.29 10:28:03 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.ratio: disabled
2024.05.29 10:28:03 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.chunkSize: disabled
2024.05.29 10:28:03 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.blocking: disabled
2024.05.29 10:28:03 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.batchFastThreadLocalOnly: disabled
2024.05.29 10:28:03 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=608, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} not starting election

I am unable to identify the issue, considering the Java version is correct, the ports are open, and there are no permission issues for the user.
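
As a sanity check on the "ports are open" point, a one-off bind test can confirm nothing else is holding the Elasticsearch HTTP port (9005 in my setup). This is just a standalone sketch run outside SonarQube, not part of its configuration:

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        // Try to bind 127.0.0.1:9005, the address the embedded Elasticsearch
        // publishes in the log above; success means the port is free.
        try (ServerSocket s = new ServerSocket(9005, 50, InetAddress.getByName("127.0.0.1"))) {
            System.out.println("127.0.0.1:9005 is free and bindable");
        } catch (IOException e) {
            System.out.println("127.0.0.1:9005 is NOT bindable: " + e.getMessage());
        }
    }
}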

Can someone please assist here?

Regards
Kumar

Hey there.

What do all the log files say?

Hi Colin,

I have investigated the logs and ended up blocked on the error below.

2024.05.30 10:07:15 DEBUG es[][i.n.c.DefaultChannelId] Could not invoke ProcessHandle.current().pid();
java.lang.reflect.InvocationTargetException: null
	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[?:?]
	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
	at java.lang.reflect.Method.invoke(Method.java:568) ~[?:?]
	at io.netty.channel.DefaultChannelId.processHandlePid(DefaultChannelId.java:116) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.DefaultChannelId.defaultProcessId(DefaultChannelId.java:178) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.DefaultChannelId.<clinit>(DefaultChannelId.java:77) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.AbstractChannel.newId(AbstractChannel.java:113) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.AbstractChannel.<init>(AbstractChannel.java:73) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.nio.AbstractNioChannel.<init>(AbstractNioChannel.java:80) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.nio.AbstractNioMessageChannel.<init>(AbstractNioMessageChannel.java:42) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:96) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:89) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:82) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:75) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at org.elasticsearch.transport.CopyBytesServerSocketChannel.<init>(CopyBytesServerSocketChannel.java:38) [transport-netty4-client-7.17.15.jar:7.17.15]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
	at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
	at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
	at io.netty.channel.ReflectiveChannelFactory.newChannel(ReflectiveChannelFactory.java:44) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:310) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:272) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:268) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
	at org.elasticsearch.transport.netty4.Netty4Transport.bind(Netty4Transport.java:308) [transport-netty4-client-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.netty4.Netty4Transport.bind(Netty4Transport.java:68) [transport-netty4-client-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.TcpTransport.lambda$bindToPort$6(TcpTransport.java:441) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.common.transport.PortsRange.iterate(PortsRange.java:47) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.TcpTransport.bindToPort(TcpTransport.java:439) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.TcpTransport.bindServer(TcpTransport.java:414) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:141) [transport-netty4-client-7.17.15.jar:7.17.15]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:48) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.transport.TransportService.doStart(TransportService.java:318) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:48) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.node.Node.start(Node.java:1176) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:335) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:443) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) [elasticsearch-cli-7.17.15.jar:7.17.15]
	at org.elasticsearch.cli.Command.main(Command.java:77) [elasticsearch-cli-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) [elasticsearch-7.17.15.jar:7.17.15]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) [elasticsearch-7.17.15.jar:7.17.15]
Caused by: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "manageProcess")
	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:485) ~[?:?]
	at java.security.AccessController.checkPermission(AccessController.java:1068) ~[?:?]
	at java.lang.SecurityManager.checkPermission(SecurityManager.java:416) ~[?:?]
	at java.lang.ProcessHandleImpl.current(ProcessHandleImpl.java:285) ~[?:?]
	at java.lang.ProcessHandle.current(ProcessHandle.java:136) ~[?:?]
	... 45 more

I have validated that the Java version is JDK 17, port 9005 is open, and the temp folder grants Full Control permissions. There is not much beyond this that we can configure from our end. The error seems to be related to a RuntimePermission; the "Caused by" line names it as java.lang.RuntimePermission "manageProcess". If that permission can be granted, I can add it to the special permissions. Is there any alternative to this? Attaching the es.log and sonar.log for reference.
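
For context, if granting it were an option, the entry for the permission named in the "Caused by" line would look like the hypothetical java.policy fragment below. Note that the Elasticsearch bundled with SonarQube manages its own security policy, so this is illustrative only, not a supported SonarQube configuration:

// Hypothetical java.policy fragment; "manageProcess" is the permission
// named in the stack trace above. Illustrative only.
grant {
    permission java.lang.RuntimePermission "manageProcess";
};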

Regards
Kumar
es.log (93.9 KB)
sonar.log (265.3 KB)

Thanks. I think that might be a red herring.

Have you stopped your 8.9 instance of SonarQube before starting up the 9.9 instance? I'd go even further and suggest you make sure there are no running Java processes (check with jps) before starting the 9.9 instance.
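
jps ships in the JDK's bin folder and lists every running JVM with its pid. If it isn't handy, a rough stand-in using the same ProcessHandle API that appears in your stack trace would be something like the sketch below (purely illustrative):

public class ListJvms {
    public static void main(String[] args) {
        // Rough stand-in for `jps -l`: print the pid and command of every
        // visible process whose executable path mentions "java".
        ProcessHandle.allProcesses()
                .filter(ph -> ph.info().command()
                        .map(cmd -> cmd.toLowerCase().contains("java"))
                        .orElse(false))
                .forEach(ph -> System.out.println(
                        ph.pid() + "  " + ph.info().command().orElse("?")));
    }
}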

Hi Colin,

I had stopped all services before starting 9.9. The 8.9 instance was turned off because I needed to take a backup of the database. No other Java process is running. Do I need to restart the setup if there is no other issue?

Regards
Kumar

My understanding is that your SonarQube instance is not started at the moment (because it’s failing to start). Is that not the case?

Hi Colin,

That is correct. My SonarQube instance does not start.

Regards
Kumar

2024.05.30 10:07:23 INFO app[][o.s.a.SchedulerImpl] Process[Web Server] is stopped

Do you have a web.log file in your logs folder? Can you share it?