SonarQube is not starting

Must-share information (formatted with Markdown):

  • which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension) → I am trying SonarQube 10.5.0.89998
  • how is SonarQube deployed: zip, Docker, Helm → I am using the zip, on a Windows 10 64-bit machine, connecting to PostgreSQL 16.2; Java version is OpenJDK 21.0.3
  • what are you trying to achieve → trying to start SonarQube
  • what have you tried so far to achieve this → I have set the log levels to DEBUG, and also tried setting sonar.search.port=0 so that Elasticsearch is assigned any available port (exact properties below); it seems some basic essential configuration is missing
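
Concretely, in conf/sonar.properties I set the global log level and the search port:

sonar.log.level=DEBUG
sonar.search.port=0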

Do not share screenshots of logs – share the text itself (bonus points for being well-formatted)!

Sharing the log contents from sonar.log and es.log below; web.log has no entries.

First one is es.log:

2024.05.09 19:53:49 DEBUG es[][o.e.p.PluginsService] Loading bundle: apm
2024.05.09 19:53:49 DEBUG es[][o.e.p.PluginsService] Loading bundle: apm, modular
2024.05.09 19:53:49 DEBUG es[][o.e.p.PluginsService] Loading bundle: creating module layer and loader for module org.elasticsearch.telemetry.apm
2024.05.09 19:53:49 DEBUG es[][o.e.p.PluginsService] Loading bundle: created module layer and loader for module org.elasticsearch.telemetry.apm
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [lang-painless]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-core]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [old-lucene-versions]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [parent-join]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [rest-root]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [reindex]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-redact]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [analysis-common]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-security]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [transport-netty4]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [aggregations]
2024.05.09 19:53:49 INFO  es[][o.e.p.PluginsService] loaded module [apm]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_coordination], size [2], queue size [1k]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_worker], size [7], queue size [unbounded]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [snapshot_meta], core [1], max [12], keep alive [30s]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [8], keep alive [5m]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_critical_write], size [2], queue size [1.5k]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [2], keep alive [5m]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_write], size [2], queue size [1k]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [2], keep alive [5m]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [auto_complete], size [1], queue size [100]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search], size [7], queue size [1k]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [cluster_coordination], size [1], queue size [unbounded]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [2], keep alive [5m]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [8], keep alive [5m]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [4], keep alive [5m]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [get], size [7], queue size [1k]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [analyze], size [1], queue size [16]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_read], size [2], queue size [2k]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [system_critical_read], size [2], queue size [2k]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [write], size [4], queue size [10k]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [2], keep alive [5m]
2024.05.09 19:53:49 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_throttled], size [1], queue size [100]
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.l.InternalLoggerFactory] Using Log4J2 as the default logging framework
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: true
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe: unavailable (io.netty.noUnsafe)
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent0] Java version: 21
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, {int,long}): unavailable
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent] maxDirectMemory: 536870912 bytes (maybe)
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: C:\SonarQube\sonarqube-10.5.0.89998\temp (java.io.tmpdir)
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent] Platform: Windows
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.CleanerJava9] java.nio.ByteBuffer.cleaner(): unavailable
java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable
	at io.netty.util.internal.CleanerJava9.<clinit>(CleanerJava9.java:68) ~[?:?]
	at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:193) ~[?:?]
	at io.netty.util.ConstantPool.<init>(ConstantPool.java:34) ~[?:?]
	at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27) ~[?:?]
	at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27) ~[?:?]
	at org.elasticsearch.http.netty4.Netty4HttpServerTransport.<clinit>(Netty4HttpServerTransport.java:334) ~[?:?]
	at org.elasticsearch.transport.netty4.Netty4Plugin.getSettings(Netty4Plugin.java:50) ~[?:?]
	at org.elasticsearch.plugins.PluginsService.lambda$flatMap$1(PluginsService.java:263) ~[elasticsearch-8.11.0.jar:?]
	at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273) ~[?:?]
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
	at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722) ~[?:?]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575) ~[?:?]
	at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) ~[?:?]
	at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616) ~[?:?]
	at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622) ~[?:?]
	at java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627) ~[?:?]
	at org.elasticsearch.node.Node.<init>(Node.java:469) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.node.Node.<init>(Node.java:344) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:236) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:236) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:73) ~[elasticsearch-8.11.0.jar:?]
2024.05.09 19:53:50 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: true
2024.05.09 19:53:52 DEBUG es[][o.e.s.ScriptService] using script cache with max_size [3000], expire [0s]
2024.05.09 19:53:53 DEBUG es[][o.e.e.NodeEnvironment] using node location [DataPath{path=C:\SonarQube\sonarqube-10.5.0.89998\data\es8, indicesPath=C:\SonarQube\sonarqube-10.5.0.89998\data\es8\indices, fileStore=Windows (C:), majorDeviceNumber=-1, minorDeviceNumber=-1}]
2024.05.09 19:53:53 DEBUG es[][o.e.e.NodeEnvironment] node data locations details:
 -> C:\SonarQube\sonarqube-10.5.0.89998\data\es8, free_space [126.8gb], usable_space [126.8gb], total_space [475.6gb], mount [Windows (C:)], type [NTFS]
2024.05.09 19:53:53 INFO  es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2024.05.09 19:53:53 INFO  es[][o.e.n.Node] node name [sonarqube], node ID [SxtzN79ZQFWGAyYlxzGWMA], cluster name [sonarqube], roles [data_frozen, ingest, data_cold, data, remote_cluster_client, master, data_warm, data_content, transform, data_hot, ml]
2024.05.09 19:53:53 DEBUG es[][o.e.i.IngestService] registered ingest processor types: [redact, set_security_user]
2024.05.09 19:53:54 DEBUG es[][o.e.m.j.JvmGcMonitorService] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
2024.05.09 19:53:54 DEBUG es[][o.e.m.o.OsService] using refresh_interval [1s]
2024.05.09 19:53:54 DEBUG es[][o.e.m.p.ProcessService] using refresh_interval [1s]
2024.05.09 19:53:54 DEBUG es[][o.e.m.j.JvmService] using refresh_interval [1s]
2024.05.09 19:53:54 DEBUG es[][o.e.m.f.FsService] using refresh_interval [1s]
2024.05.09 19:53:54 DEBUG es[][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2024.05.09 19:53:54 DEBUG es[][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] using [cluster_concurrent_rebalance] with [2]
2024.05.09 19:53:54 DEBUG es[][o.e.c.r.a.d.ThrottlingAllocationDecider] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
2024.05.09 19:53:54 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [51.1mb] max filter count [10000]
2024.05.09 19:53:54 DEBUG es[][o.e.i.IndexingMemoryController] using indexing buffer size [51.1mb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
2024.05.09 19:53:54 DEBUG es[][o.e.x.c.s.SSLService] using ssl settings [SslConfiguration[settingPrefix=, explicitlyConfigured=false, trustConfig=JDK-trusted-certs, keyConfig=empty-key-config, verificationMode=FULL, clientAuth=REQUIRED, ciphers=[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA], supportedProtocols=[TLSv1.3, TLSv1.2, TLSv1.1]]]
2024.05.09 19:53:54 DEBUG es[][o.e.x.c.s.SSLService] SSL configuration [xpack.security.transport.ssl] is [SslConfiguration[settingPrefix=, explicitlyConfigured=false, trustConfig=JDK-trusted-certs, keyConfig=empty-key-config, verificationMode=FULL, clientAuth=REQUIRED, ciphers=[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA], supportedProtocols=[TLSv1.3, TLSv1.2, TLSv1.1]]]
2024.05.09 19:53:54 DEBUG es[][o.e.x.c.s.SSLService] SSL configuration [xpack.security.http.ssl] is [SslConfiguration[settingPrefix=, explicitlyConfigured=false, trustConfig=JDK-trusted-certs, keyConfig=empty-key-config, verificationMode=FULL, clientAuth=REQUIRED, ciphers=[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA], supportedProtocols=[TLSv1.3, TLSv1.2, TLSv1.1]]]
2024.05.09 19:53:54 INFO  es[][o.e.x.s.Security] Security is disabled
2024.05.09 19:53:55 DEBUG es[][o.e.a.ActionModule] Using custom REST interceptor from plugin org.elasticsearch.xpack.security.Security
2024.05.09 19:53:55 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2024.05.09 19:53:55 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.targetRecords: 4
2024.05.09 19:53:55 INFO  es[][o.e.t.n.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=1mb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=4mb, heap_size=512mb}]
2024.05.09 19:53:55 DEBUG es[][o.e.h.n.Netty4HttpServerTransport] using max_chunk_size[8kb], max_header_size[16kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[64kb], max_composite_buffer_components[69905], pipelining_max_events[10000]
2024.05.09 19:53:55 INFO  es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2024.05.09 19:53:55 DEBUG es[][o.e.d.SettingsBasedSeedHostsProvider] using initial hosts [127.0.0.1:49628, [::1]:49628]
2024.05.09 19:53:55 INFO  es[][o.e.d.DiscoveryModule] using discovery type [single-node] and seed hosts providers [settings]
2024.05.09 19:53:56 DEBUG es[][o.e.n.Node] initializing HTTP handlers ...
2024.05.09 19:53:56 INFO  es[][o.e.n.Node] initialized
2024.05.09 19:53:56 INFO  es[][o.e.n.Node] starting ...
2024.05.09 19:53:56 DEBUG es[][o.e.l.ClusterStateLicenseService] initializing license state
2024.05.09 19:53:56 DEBUG es[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 8
2024.05.09 19:53:56 DEBUG es[][i.n.u.c.GlobalEventExecutor] -Dio.netty.globalEventExecutor.quietPeriodSeconds: 1
2024.05.09 19:53:56 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2024.05.09 19:53:56 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2024.05.09 19:53:56 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: true
2024.05.09 19:53:56 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2024.05.09 19:53:56 DEBUG es[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: unavailable
2024.05.09 19:53:56 DEBUG es[][o.e.t.n.Netty4Transport] using profile[default], worker_count[4], port[49628], bind_host[[127.0.0.1]], publish_host[[127.0.0.1]], receive_predictor[64kb->64kb]
2024.05.09 19:53:56 DEBUG es[][o.e.t.TcpTransport] binding server bootstrap to: [127.0.0.1]
2024.05.09 19:53:56 DEBUG es[][i.n.c.DefaultChannelId] Could not invoke ProcessHandle.current().pid();
java.lang.reflect.InvocationTargetException: null
	at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:118) ~[?:?]
	at java.lang.reflect.Method.invoke(Method.java:580) ~[?:?]
	at io.netty.channel.DefaultChannelId.processHandlePid(DefaultChannelId.java:116) ~[?:?]
	at io.netty.channel.DefaultChannelId.defaultProcessId(DefaultChannelId.java:178) ~[?:?]
	at io.netty.channel.DefaultChannelId.<clinit>(DefaultChannelId.java:77) ~[?:?]
	at io.netty.channel.AbstractChannel.newId(AbstractChannel.java:113) ~[?:?]
	at io.netty.channel.AbstractChannel.<init>(AbstractChannel.java:73) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel.<init>(AbstractNioChannel.java:80) ~[?:?]
	at io.netty.channel.nio.AbstractNioMessageChannel.<init>(AbstractNioMessageChannel.java:42) ~[?:?]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:96) ~[?:?]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:89) ~[?:?]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:82) ~[?:?]
	at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:75) ~[?:?]
	at org.elasticsearch.transport.netty4.CopyBytesServerSocketChannel.<init>(CopyBytesServerSocketChannel.java:39) ~[?:?]
	at jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62) ~[?:?]
	at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) ~[?:?]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
	at io.netty.channel.ReflectiveChannelFactory.newChannel(ReflectiveChannelFactory.java:44) ~[?:?]
	at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:310) ~[?:?]
	at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:272) ~[?:?]
	at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:268) ~[?:?]
	at org.elasticsearch.transport.netty4.Netty4Transport.bind(Netty4Transport.java:318) ~[?:?]
	at org.elasticsearch.transport.netty4.Netty4Transport.bind(Netty4Transport.java:70) ~[?:?]
	at org.elasticsearch.transport.TcpTransport.lambda$bindToPort$5(TcpTransport.java:499) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.common.transport.PortsRange.iterate(PortsRange.java:44) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.transport.TcpTransport.bindToPort(TcpTransport.java:497) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.transport.TcpTransport.bindServer(TcpTransport.java:472) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:154) ~[?:?]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:50) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.transport.TransportService.doStart(TransportService.java:331) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:50) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.node.Node.start(Node.java:1489) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.start(Elasticsearch.java:458) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:251) ~[elasticsearch-8.11.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:73) ~[elasticsearch-8.11.0.jar:?]
Caused by: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "manageProcess")
	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:488) ~[?:?]
	at java.security.AccessController.checkPermission(AccessController.java:1071) ~[?:?]
	at java.lang.SecurityManager.checkPermission(SecurityManager.java:411) ~[?:?]
	at java.lang.ProcessHandleImpl.current(ProcessHandleImpl.java:305) ~[?:?]
	at java.lang.ProcessHandle.current(ProcessHandle.java:136) ~[?:?]
	at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
	... 34 more
2024.05.09 19:53:56 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.processId: 15832 (auto-detected)
2024.05.09 19:53:56 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv4Stack: false
2024.05.09 19:53:56 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv6Addresses: false
2024.05.09 19:53:56 DEBUG es[][i.n.u.NetUtilInitializations] Loopback interface: loopback_3 (Software Loopback Interface 1, 0:0:0:0:0:0:0:1)
2024.05.09 19:53:56 DEBUG es[][i.n.u.NetUtil] Failed to get SOMAXCONN from sysctl and file \proc\sys\net\core\somaxconn. Default: 200
2024.05.09 19:53:56 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.machineId: d6:06:f5:ff:fe:ee:77:66 (auto-detected)
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 8
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 0
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 9
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 4194304
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: false
2024.05.09 19:53:56 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
2024.05.09 19:53:56 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2024.05.09 19:53:56 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
2024.05.09 19:53:56 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2024.05.09 19:53:56 DEBUG es[][o.e.t.TcpTransport] Bound profile [default] to address {127.0.0.1:49628}
2024.05.09 19:53:56 INFO  es[][o.e.t.TransportService] publish_address {127.0.0.1:49628}, bound_addresses {127.0.0.1:49628}
2024.05.09 19:53:56 DEBUG es[][o.e.g.PersistedClusterStateService] checking cluster state integrity in [C:\SonarQube\sonarqube-10.5.0.89998\data\es8\_state]
2024.05.09 19:53:56 DEBUG es[][o.e.g.PersistedClusterStateService] loading cluster state from commit [segments_13] in [NIOFSDirectory@C:\SonarQube\sonarqube-10.5.0.89998\data\es8\_state lockFactory=org.apache.lucene.store.NativeFSLockFactory@66c91224]: creationTime=2024-05-09T23:05:22.978997Z, lastModifiedTime=2024-05-09T23:05:22.9830689Z, lastAccessTime=2024-05-09T23:53:56.4546138Z
2024.05.09 19:53:56 DEBUG es[][o.e.g.PersistedClusterStateService] cluster state commit user data: {cluster_uuid=moefDsPSTSuMYLKgtAs71g, cluster_uuid_committed=true, current_term=6, last_accepted_version=25, node_id=SxtzN79ZQFWGAyYlxzGWMA, node_version=8110099, oldest_index_version=8500003}
2024.05.09 19:53:56 DEBUG es[][o.e.g.PersistedClusterStateService] loading cluster state from segment: _k(9.8.0):c1:[diagnostics={lucene.version=9.8.0, source=flush, timestamp=1715295922555, java.runtime.version=21.0.3+7-LTS-152, os=Windows 10, java.vendor=Oracle Corporation, os.arch=amd64, os.version=10.0}]:[attributes={Lucene90StoredFieldsFormat.mode=BEST_SPEED}] :id=cngx0yqm28kupiea4d8xmb9t3
2024.05.09 19:53:56 DEBUG es[][o.e.g.PersistedClusterStateService] writing full cluster state took [206ms]; wrote global metadata, [0] mappings, and metadata for [0] indices
2024.05.09 19:53:56 INFO  es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2024.05.09 19:53:56 DEBUG es[][o.e.d.SeedHostsResolver] using max_concurrent_resolvers [10], resolver timeout [5s]
2024.05.09 19:53:56 INFO  es[][o.e.c.c.ClusterBootstrapService] this node is locked into cluster UUID [moefDsPSTSuMYLKgtAs71g] and will not attempt further cluster bootstrapping
2024.05.09 19:53:56 DEBUG es[][o.e.t.TransportService] now accepting incoming requests
2024.05.09 19:53:56 DEBUG es[][o.e.c.c.Coordinator] startInitialJoin: coordinator becoming CANDIDATE in term 6 (was null, lastKnownLeader was [Optional.empty])
2024.05.09 19:53:56 DEBUG es[][o.e.c.c.Coordinator] starting election scheduler, expecting votes [VoteCollection{votes=[SxtzN79ZQFWGAyYlxzGWMA], joins=[]}]
2024.05.09 19:53:56 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=46, ElectionScheduler{attempt=1,isClosed=false,ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2024.05.09 19:53:56 DEBUG es[][o.e.n.Node] waiting to join the cluster. timeout [30s]
2024.05.09 19:53:56 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=46, ElectionScheduler{attempt=1,isClosed=false,ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} starting election
2024.05.09 19:53:56 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=676, ElectionScheduler{attempt=2,isClosed=false,ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2024.05.09 19:53:56 DEBUG es[][o.e.c.c.StatefulPreVoteCollector] PreVotingRound{preVotesReceived={}, electionStarted=false, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}, currentTerm=6}, isClosed=false} requesting pre-votes from [{sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}]
2024.05.09 19:53:57 DEBUG es[][o.e.c.c.StatefulPreVoteCollector] PreVotingRound{preVotesReceived={{sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}=PreVoteResponse{currentTerm=6, lastAcceptedTerm=6, lastAcceptedVersion=25}}, electionStarted=true, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}, currentTerm=6}, isClosed=false} added PreVoteResponse{currentTerm=6, lastAcceptedTerm=6, lastAcceptedVersion=25} from {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}, starting election
2024.05.09 19:53:57 DEBUG es[][o.e.c.c.Coordinator] starting election for {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube} in term 7
2024.05.09 19:53:57 DEBUG es[][o.e.c.c.Coordinator] joinLeaderInTerm: for [{sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}] with term 7
2024.05.09 19:53:57 DEBUG es[][o.e.c.c.CoordinationState] handleStartJoin: leaving term [6] due to StartJoinRequest{term=7,node={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}}
2024.05.09 19:53:57 DEBUG es[][o.e.c.c.JoinHelper] attempting to join {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}, compatibilityVersions=CompatibilityVersions[transportVersion=8512001, systemIndexMappingsVersion={.security-tokens-7=MappingsVersion[version=1, hash=576296021], .security-7=MappingsVersion[version=1, hash=-1061511639], .security-profile-8=MappingsVersion[version=1, hash=-909540896], .synonyms-2=MappingsVersion[version=1, hash=-888080772], .tasks=MappingsVersion[version=0, hash=-945584329]}], minimumTerm=6, optionalJoin=Optional[Join{term=7, lastAcceptedTerm=6, lastAcceptedVersion=25, sourceNode={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}, targetNode={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}}]}
2024.05.09 19:53:57 DEBUG es[][o.e.c.c.JoinHelper] successful response to StartJoinRequest{term=7,node={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}} from {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}
2024.05.09 19:53:57 DEBUG es[][o.e.c.s.ClusterApplierService] processing [joining {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}]: execute
2024.05.09 19:53:57 DEBUG es[][o.e.c.s.ClusterApplierService] processing [joining {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}]: took [0s] no change in cluster state
2024.05.09 19:53:57 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: added join Join{term=7, lastAcceptedTerm=6, lastAcceptedVersion=25, sourceNode={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}, targetNode={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}} from [{sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}] for election, electionWon=true lastAcceptedTerm=6 lastAcceptedVersion=25
2024.05.09 19:53:57 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: election won in term [7] with VoteCollection{votes=[SxtzN79ZQFWGAyYlxzGWMA], joins=[Join{term=7, lastAcceptedTerm=6, lastAcceptedVersion=25, sourceNode={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}, targetNode={sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}}]}
2024.05.09 19:53:57 DEBUG es[][o.e.c.c.Coordinator] handleJoinRequest: coordinator becoming LEADER in term 7 (was CANDIDATE, lastKnownLeader was [Optional.empty])
2024.05.09 19:53:57 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [elected-as-master ([1] nodes joined in term 7)[_FINISH_ELECTION_, {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003} completing election]]
2024.05.09 19:53:57 DEBUG es[][o.e.c.c.NodeJoinExecutor] received a join request for an existing node [{sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}]
2024.05.09 19:53:57 DEBUG es[][o.e.c.s.MasterService] took [15ms] to compute cluster state update for [elected-as-master ([1] nodes joined in term 7)[_FINISH_ELECTION_, {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003} completing election]]
2024.05.09 19:53:57 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [26], source [elected-as-master ([1] nodes joined in term 7)[_FINISH_ELECTION_, {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003} completing election]]
2024.05.09 19:53:57 INFO  es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined in term 7)[_FINISH_ELECTION_, {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003} completing election], term: 7, version: 26, delta: master node changed {previous [], current [{sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}]}
2024.05.09 19:53:57 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [26]
2024.05.09 19:53:57 DEBUG es[][o.e.g.PersistedClusterStateService] writing full cluster state took [0ms]; wrote global metadata, [0] mappings, and metadata for [0] indices
2024.05.09 19:53:57 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=7, version=26}]: execute
2024.05.09 19:53:57 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [26], source [Publication{term=7, version=26}]
2024.05.09 19:53:57 INFO  es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}]}, term: 7, version: 26, reason: Publication{term=7, version=26}
2024.05.09 19:53:57 DEBUG es[][o.e.c.NodeConnectionsService] connecting to {sonarqube}{SxtzN79ZQFWGAyYlxzGWMA}{0uB_HHwERIyUlRKuwdb4Hg}{sonarqube}{127.0.0.1}{127.0.0.1:49628}{cdfhilmrstw}{8.11.0}{7000099-8500003}{xpack.installed=true, rack_id=sonarqube}
2024.05.09 19:53:57 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 26
2024.05.09 19:53:57 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 26
2024.05.09 19:53:57 DEBUG es[][o.e.i.SystemIndexMappingUpdateService] Waiting until state has been recovered
2024.05.09 19:53:57 DEBUG es[][o.e.h.n.LocalHealthMonitor] Resetting the health monitoring because the master node changed, current health node is null.
2024.05.09 19:53:57 DEBUG es[][o.e.l.ClusterStateLicenseService] skipped license notifications reason: [1,state not recovered / initialized, blocks READ,WRITE,METADATA_READ,METADATA_WRITE]
2024.05.09 19:53:57 DEBUG es[][o.e.g.GatewayService] performing state recovery...

Next is sonar.log:

2024.05.09 19:53:41 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory C:\SonarQube\sonarqube-10.5.0.89998\temp
2024.05.09 19:53:41 DEBUG app[][o.s.a.NodeLifecycle] main tryToMoveTo from INIT to STARTING => true
2024.05.09 19:53:41 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] main tryToMoveTo ElasticSearch from INIT to STARTING => true
2024.05.09 19:53:41 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:49627, TCP: 127.0.0.1:49628]
2024.05.09 19:53:41 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [C:\SonarQube\sonarqube-10.5.0.89998\elasticsearch]: C:\Program Files\Java\jdk-21\bin\java -Xms4m -Xmx64m -XX:+UseSerialGC -Dcli.name=server -Dcli.script=./bin/elasticsearch -Dcli.libs=lib/tools/server-cli -Des.path.home=C:\SonarQube\sonarqube-10.5.0.89998\elasticsearch -Des.path.conf=C:\SonarQube\sonarqube-10.5.0.89998\temp\conf\es -Des.distribution.type=tar -cp C:\SonarQube\sonarqube-10.5.0.89998\elasticsearch\lib\*;C:\SonarQube\sonarqube-10.5.0.89998\elasticsearch\lib\cli-launcher\* org.elasticsearch.launcher.CliToolLauncher
2024.05.09 19:53:41 DEBUG app[][j.l.ProcessBuilder] ProcessBuilder.start(): pid: 16396, dir: C:\SonarQube\sonarqube-10.5.0.89998\elasticsearch, cmd: "C:\Program Files\Java\jdk-21\bin\java"
java.lang.RuntimeException: ProcessBuilder.start() debug
	at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1147)
	at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1089)
	at org.sonar.application.ProcessLauncherImpl$JavaLangProcessBuilder.start(ProcessLauncherImpl.java:372)
	at org.sonar.application.ProcessLauncherImpl.launchJava(ProcessLauncherImpl.java:227)
	at org.sonar.application.ProcessLauncherImpl.launch(ProcessLauncherImpl.java:96)
	at org.sonar.application.SchedulerImpl.lambda$tryToStartProcess$2(SchedulerImpl.java:192)
	at org.sonar.application.process.ManagedProcessHandler.start(ManagedProcessHandler.java:76)
	at org.sonar.application.SchedulerImpl.tryToStartProcess(SchedulerImpl.java:190)
	at org.sonar.application.SchedulerImpl.tryToStartEs(SchedulerImpl.java:142)
	at org.sonar.application.SchedulerImpl.tryToStartAll(SchedulerImpl.java:134)
	at org.sonar.application.SchedulerImpl.schedule(SchedulerImpl.java:113)
	at org.sonar.application.App.start(App.java:59)
	at org.sonar.application.App.main(App.java:81)
2024.05.09 19:53:41 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] main tryToMoveTo ElasticSearch from STARTING to STARTED => true
2024.05.09 19:53:41 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2024.05.09 19:53:41 DEBUG app[][o.s.a.e.EsConnectorImpl] Connected to Elasticsearch node: [127.0.0.1:49627]
2024.05.09 19:53:41 DEBUG app[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:570a119742c4e3cc, Subject:CN=Actalis Authentication Root CA, O=Actalis S.p.A./03358520967, L=Milan, C=IT, Issuer:CN=Actalis Authentication Root CA, O=Actalis S.p.A./03358520967, L=Milan, C=IT, Key type:RSA, Length:4096, Cert Id:1729119956, Valid from:9/22/11, 7:22 AM, Valid until:9/22/30, 7:22 AM
2024.05.09 19:53:41 DEBUG app[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:1, Subject:CN=AddTrust External CA Root, OU=AddTrust External TTP Network, O=AddTrust AB, C=SE, Issuer:CN=AddTrust External CA Root, OU=AddTrust External TTP Network, O=AddTrust AB, C=SE, Key type:RSA, Length:2048, Cert Id:3968614624, Valid from:5/30/00, 6:48 AM, Valid until:5/30/20, 6:48 AM

... after a couple of lines, these errors are seen:

2024.05.09 19:53:42 DEBUG app[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:401ac46421b31321030ebbe4121ac51d, Subject:CN=VeriSign Universal Root Certification Authority, OU="(c) 2008 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US, Issuer:CN=VeriSign Universal Root Certification Authority, OU="(c) 2008 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US, Key type:RSA, Length:2048, Cert Id:2318285810, Valid from:4/1/08, 8:00 PM, Valid until:12/1/37, 6:59 PM
2024.05.09 19:53:42 DEBUG app[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:50946cec18ead59c4dd597ef758fa0ad, Subject:CN=XRamp Global Certification Authority, O=XRamp Security Services Inc, OU=www.xrampsecurity.com, C=US, Issuer:CN=XRamp Global Certification Authority, O=XRamp Security Services Inc, OU=www.xrampsecurity.com, C=US, Key type:RSA, Length:2048, Cert Id:3342493210, Valid from:11/1/04, 12:14 PM, Valid until:1/1/35, 12:37 AM
2024.05.09 19:53:42 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 1] start execution
2024.05.09 19:53:42 DEBUG app[][o.a.h.c.p.RequestAddCookies] CookieSpec selected: default
2024.05.09 19:53:42 DEBUG app[][o.a.h.c.p.RequestAuthCache] Re-using cached 'basic' auth scheme for http://127.0.0.1:49627
2024.05.09 19:53:42 DEBUG app[][o.a.h.c.p.RequestAuthCache] No credentials for preemptive authentication
2024.05.09 19:53:42 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 1] Request connection for {}->http://127.0.0.1:49627
2024.05.09 19:53:42 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://127.0.0.1:49627][total kept alive: 0; route allocated: 0 of 10; total allocated: 0 of 30]
2024.05.09 19:53:42 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection request failed
java.net.ConnectException: Connection refused: getsockopt
	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:682)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:973)
	at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:174)
	at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:148)
	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351)
	at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221)
	at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
	at java.base/java.lang.Thread.run(Thread.java:1583)
2024.05.09 19:53:42 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 1] connection request failed
2024.05.09 19:53:42 DEBUG app[][o.e.c.RestClient] request [GET http://127.0.0.1:49627/] failed
java.net.ConnectException: Connection refused: getsockopt
	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:682)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:973)
	at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:174)
	at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:148)
	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351)
	at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221)
	at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
	at java.base/java.lang.Thread.run(Thread.java:1583)
2024.05.09 19:53:42 DEBUG app[][o.e.c.RestClient] added [[host=http://127.0.0.1:49627]] to blacklist
2024.05.09 19:53:43 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 2] start execution
2024.05.09 19:53:43 DEBUG app[][o.a.h.c.p.RequestAddCookies] CookieSpec selected: default
2024.05.09 19:53:43 DEBUG app[][o.a.h.c.p.RequestAuthCache] Re-using cached 'basic' auth scheme for http://127.0.0.1:49627
2024.05.09 19:53:43 DEBUG app[][o.a.h.c.p.RequestAuthCache] No credentials for preemptive authentication
2024.05.09 19:53:43 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 2] Request connection for {}->http://127.0.0.1:49627
2024.05.09 19:53:43 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://127.0.0.1:49627][total kept alive: 0; route allocated: 0 of 10; total allocated: 0 of 30]
2024.05.09 19:53:43 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection request failed
java.net.ConnectException: Connection refused: getsockopt
	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:682)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:973)
	at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:174)
	at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:148)
	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351)
	at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221)
	at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
	at java.base/java.lang.Thread.run(Thread.java:1583)
2024.05.09 19:53:43 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 2] connection request failed
2024.05.09 19:53:43 DEBUG app[][o.e.c.RestClient] request [GET http://127.0.0.1:49627/] failed

Welcome 🙂

You need to run the SonarQube server with Java 17; see the prerequisites page in the SonarQube documentation.
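
Your sonar.log shows the Elasticsearch child process being launched with C:\Program Files\Java\jdk-21\bin\java. If you have more than one JDK installed, you can point the zip distribution at JDK 17 explicitly with the SONAR_JAVA_PATH environment variable. A quick sketch, assuming JDK 17 is installed at C:\Program Files\Java\jdk-17 (adjust the path to your machine):

setx SONAR_JAVA_PATH "C:\Program Files\Java\jdk-17\bin\java.exe"
:: open a new console so the variable takes effect, then:
C:\SonarQube\sonarqube-10.5.0.89998\bin\windows-x86-64\StartSonar.bat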

Gilbert

Hi, what if I’m already running with the correct Java and I still get this error?

@Jaka_Luthar Instead of waking up old threads, I encourage you to start a new thread with the logs that show the issue you’re facing.

Will do. But my error is identical to the OP’s.