Hi, I am a complete beginner with SonarQube. I am running version 9.9.1.69595 on Fedora 40. I have been trying to start sonarqube.service, but SonarQube keeps logging that it is waiting for Elasticsearch to be up and running. When I opened the Elasticsearch log, I found that Elasticsearch stops and closes almost immediately after starting successfully.
I have made sure that the Java version I am running (Java 17) is compatible with SonarQube and that vm.max_map_count is set to 262144. I have also increased the memory dedicated to Elasticsearch. Below you will find my sonarqube.service file, the Elasticsearch log, and the output of systemctl status sonarqube. Thank you very much.
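For reference, this is roughly how I checked those prerequisites and where I raised the Elasticsearch memory; the heap values below are just what I picked for my own setup, so treat them as an illustration rather than an exact copy of my files:

sysctl vm.max_map_count    # prints vm.max_map_count = 262144
java -version              # OpenJDK 17 on this machine
ulimit -n                  # open-file limit for the user running SonarQube

# Elasticsearch memory was raised via sonar.search.javaOpts in
# /opt/sonarqube/conf/sonar.properties, e.g.:
# sonar.search.javaOpts=-Xms1g -Xmx1g -XX:MaxDirectMemorySize=256m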
My sonarqube.service file:
[Unit]
Description=SonarQube service
After=syslog.target network.target
[Service]
Type=forking
ExecStart=/usr/bin/java -Xms512m -Xmx2048m -Djava.net.preferIPv4Stack=true -jar /opt/sonarqube/lib/sonar-application-9.9.1.69595.jar
User=sonarqube
Group=sonarqube
Restart=always
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target
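For completeness, after editing the unit file I reload systemd and restart the service with the standard commands, nothing unusual:

sudo systemctl daemon-reload
sudo systemctl restart sonarqube.service
journalctl -u sonarqube.service -f    # follow the startup messages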
Elasticsearch log:
2024.11.03 15:35:05 DEBUG es[][i.n.u.NetUtil] /proc/sys/net/core/somaxconn: 4096
2024.11.03 15:35:05 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 5e:47:0e:ff:fe:1f:1c:da (auto-detected)
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 8
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 0
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2024.11.03 15:35:05 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
2024.11.03 15:35:05 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2024.11.03 15:35:05 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
2024.11.03 15:35:05 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2024.11.03 15:35:05 DEBUG es[][o.e.t.TcpTransport] Bound profile [default] to address {127.0.0.1:42089}
2024.11.03 15:35:05 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:42089}, bound_addresses {127.0.0.1:42089}
2024.11.03 15:35:05 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [200ms]; wrote full state with [0] indices
2024.11.03 15:35:05 INFO es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2024.11.03 15:35:05 DEBUG es[][o.e.d.SeedHostsResolver] using max_concurrent_resolvers [10], resolver timeout [5s]
2024.11.03 15:35:05 INFO es[][o.e.c.c.Coordinator] cluster UUID [HiLKkiLNR46yraJTFH5psw]
2024.11.03 15:35:05 DEBUG es[][o.e.t.TransportService] now accepting incoming requests
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.Coordinator] startInitialJoin: coordinator becoming CANDIDATE in term 284 (was null, lastKnownLeader was [Optional.empty])
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=59, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2024.11.03 15:35:05 DEBUG es[][o.e.n.Node] waiting to join the cluster. timeout [30s]
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=59, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} starting election
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduling scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=515, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={}, electionStarted=false, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=284}, isClosed=false} requesting pre-votes from [{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}]
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.PreVoteCollector] PreVotingRound{preVotesReceived={{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}=PreVoteResponse{currentTerm=284, lastAcceptedTerm=284, lastAcceptedVersion=568}}, electionStarted=true, preVoteRequest=PreVoteRequest{sourceNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}, currentTerm=284}, isClosed=false} added PreVoteResponse{currentTerm=284, lastAcceptedTerm=284, lastAcceptedVersion=568} from {sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}, starting election
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.Coordinator] starting election with StartJoinRequest{term=285,node={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}}
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.Coordinator] joinLeaderInTerm: for [{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}] with term 285
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.CoordinationState] handleStartJoin: leaving term [284] due to StartJoinRequest{term=285,node={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}}
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.JoinHelper] attempting to join {sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=284, optionalJoin=Optional[Join{term=285, lastAcceptedTerm=284, lastAcceptedVersion=568, sourceNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}}]}
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.JoinHelper] successful response to StartJoinRequest{term=285,node={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}} from {sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: added join Join{term=285, lastAcceptedTerm=284, lastAcceptedVersion=568, sourceNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}} from [{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}] for election, electionWon=true lastAcceptedTerm=284 lastAcceptedVersion=568
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.CoordinationState] handleJoin: election won in term [285] with VoteCollection{votes=[t0ocUg0PQLmd_xnR7WzvVA], joins=[Join{term=285, lastAcceptedTerm=284, lastAcceptedVersion=568, sourceNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}}]}
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.Coordinator] handleJoinRequest: coordinator becoming LEADER in term 285 (was CANDIDATE, lastKnownLeader was [Optional.empty])
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [elected-as-master ([1] nodes joined)[{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.JoinHelper] received a join request for an existing node [{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] took [13ms] to compute cluster state update for [elected-as-master ([1] nodes joined)[{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [569], source [elected-as-master ([1] nodes joined)[{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2024.11.03 15:35:05 INFO es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 285, version: 569, delta: master node changed {previous [], current [{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}]}
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [569]
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [569] with size [344]
2024.11.03 15:35:05 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote full state with [0] indices
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=285, version=569}]: execute
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [569], source [Publication{term=285, version=569}]
2024.11.03 15:35:05 INFO es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}]}, term: 285, version: 569, reason: Publication{term=285, version=569}
2024.11.03 15:35:05 DEBUG es[][o.e.c.NodeConnectionsService] connecting to {sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}
2024.11.03 15:35:05 DEBUG es[][o.e.c.NodeConnectionsService] connected to {sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 569
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 569
2024.11.03 15:35:05 DEBUG es[][o.e.i.SystemIndexManager] Waiting until state has been recovered
2024.11.03 15:35:05 DEBUG es[][o.e.c.l.NodeAndClusterIdStateListener] Received cluster state update. Setting nodeId=[t0ocUg0PQLmd_xnR7WzvVA] and clusterUuid=[HiLKkiLNR46yraJTFH5psw]
2024.11.03 15:35:05 DEBUG es[][o.e.g.GatewayService] performing state recovery...
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=285, version=569}]: took [0s] done applying updated cluster state (version: 569, uuid: FmBvwo8rSl-Inb7zUIykrA)
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.JoinHelper] releasing [1] connections on successful cluster state application
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=285, version=569}
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.JoinHelper] successfully joined {sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube} with JoinRequest{sourceNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}, minimumTerm=284, optionalJoin=Optional[Join{term=285, lastAcceptedTerm=284, lastAcceptedVersion=568, sourceNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}, targetNode={sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw}{rack_id=sonarqube}}]}
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] took [1ms] to notify listeners on successful publication of cluster state (version: 569, uuid: FmBvwo8rSl-Inb7zUIykrA) for [elected-as-master ([1] nodes joined)[{sonarqube}{t0ocUg0PQLmd_xnR7WzvVA}{YWyyNt0WS-eXv1MUBDTyOw}{127.0.0.1}{127.0.0.1:42089}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [cluster_reroute(post-join reroute)]
2024.11.03 15:35:05 DEBUG es[][o.e.h.AbstractHttpServerTransport] Bound http to address {127.0.0.1:9001}
2024.11.03 15:35:05 INFO es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2024.11.03 15:35:05 INFO es[][o.e.n.Node] started
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] took [8ms] to compute cluster state update for [cluster_reroute(post-join reroute)]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [cluster_reroute(post-join reroute)]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [update snapshot after shards started [false] or node configuration changed [true]]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] took [0s] to compute cluster state update for [update snapshot after shards started [false] or node configuration changed [true]]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on unchanged cluster state for [update snapshot after shards started [false] or node configuration changed [true]]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] executing cluster state update for [local-gateway-elected-state]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] took [1ms] to compute cluster state update for [local-gateway-elected-state]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [570], source [local-gateway-elected-state]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [570]
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.PublicationTransportHandler] received full cluster state version [570] with size [296]
2024.11.03 15:35:05 DEBUG es[][o.e.g.PersistedClusterStateService] writing cluster state took [0ms]; wrote global metadata [false] and metadata for [0] indices and skipped [0] unchanged indices
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=285, version=570}]: execute
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [570], source [Publication{term=285, version=570}]
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] applying settings from cluster state with version 570
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 570
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 570
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.ClusterApplierService] processing [Publication{term=285, version=570}]: took [0s] done applying updated cluster state (version: 570, uuid: 7CiBLhEbSBudRu5s1cUCjw)
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.C.CoordinatorPublication] publication ended successfully: Publication{term=285, version=570}
2024.11.03 15:35:05 INFO es[][o.e.g.GatewayService] recovered [0] indices into cluster_state
2024.11.03 15:35:05 DEBUG es[][o.e.c.s.MasterService] took [0s] to notify listeners on successful publication of cluster state (version: 570, uuid: 7CiBLhEbSBudRu5s1cUCjw) for [local-gateway-elected-state]
2024.11.03 15:35:05 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkAccessible: true
2024.11.03 15:35:05 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkBounds: true
2024.11.03 15:35:05 DEBUG es[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@2ec4e0ae
2024.11.03 15:35:05 DEBUG es[][i.n.h.c.c.Brotli] brotli4j not in the classpath; Brotli support will be unavailable.
2024.11.03 15:35:05 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxCapacityPerThread: disabled
2024.11.03 15:35:05 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxSharedCapacityFactor: disabled
2024.11.03 15:35:05 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.linkCapacity: disabled
2024.11.03 15:35:05 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.ratio: disabled
2024.11.03 15:35:05 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.delayedQueue.ratio: disabled
2024.11.03 15:35:05 DEBUG es[][o.e.c.c.ElectionSchedulerFactory] scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=515, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} not starting election
2024.11.03 15:35:09 INFO es[][o.e.n.Node] stopping ...
2024.11.03 15:35:09 INFO es[][o.e.n.Node] stopped
2024.11.03 15:35:09 INFO es[][o.e.n.Node] closing ...
2024.11.03 15:35:09 INFO es[][o.e.n.Node] closed
Output of systemctl status sonarqube:
sonarqube.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonarqube.service; enabled; preset: disabled)
Drop-In: /usr/lib/systemd/system/service.d
└─10-timeout-abort.conf
Active: activating (start) since Sun 2024-11-03 15:34:32 +07; 8s ago
Cntrl PID: 103491 (java)
Tasks: 69 (limit: 18858)
Memory: 1.2G (peak: 1.2G)
CPU: 23.894s
CGroup: /system.slice/sonarqube.service
├─103491 /usr/bin/java -Xms512m -Xmx2048m -Djava.net.preferIPv4Stack=true -jar /opt/sonarqube/lib/sonar-application-9.9.1.69595.jar
└─103511 /usr/lib/jvm/java-17-openjdk-17.0.13.0.11-1.fc40.x86_64/bin/java -XX:+UseG1GC -Djava.io.tmpdir=/opt/sonarqube/temp -XX:ErrorFile=/opt/s>
Nov 03 15:34:32 fedora systemd[1]: sonarqube.service: Scheduled restart job, restart counter is at 1.
Nov 03 15:34:32 fedora systemd[1]: Starting sonarqube.service - SonarQube service...
Nov 03 15:34:32 fedora java[103491]: 2024.11.03 15:34:32 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
Nov 03 15:34:32 fedora java[103491]: 2024.11.03 15:34:32 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:42>
Nov 03 15:34:32 fedora java[103491]: 2024.11.03 15:34:32 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/opt/sonarqube/elasticsea>
Nov 03 15:34:33 fedora java[103491]: 2024.11.03 15:34:33 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running