SonarQube won't start up: waiting for Elasticsearch to be up and running

  • Which versions are you using: 2025.2.0.105476
  • How is SonarQube deployed: Helm
  • What are you trying to achieve: get SonarQube up and running
  • What have you tried so far to achieve this: restarted the deployment, checked file storage and database status, reviewed log messages, checked running processes

Recently our SonarQube instance stopped working while I was away. From what I understand, nothing was changed on the service. Reviewing the logs, I do not see anything other than harmless INFO messages and SonarQube waiting for Elasticsearch to start up.

Here is the sonar.log output:

2025.05.12 11:27:06 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2025.05.12 11:27:06 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:{}]
2025.05.12 11:27:06 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/opt/sonarqube/elasticsearch]: /usr/local/openjdk-17/bin/java -Xms4m -Xmx64m -XX:+UseSerialGC -Dcli.name=server -Dcli.script=./bin/elasticsearch -Dcli.libs=lib/tools/server-cli -Des.path.home=/opt/sonarqube/elasticsearch -Des.path.conf=/opt/sonarqube/temp/conf/es -Des.distribution.type=tar -cp /opt/sonarqube/elasticsearch/lib/*:/opt/sonarqube/elasticsearch/lib/cli-launcher/* org.elasticsearch.launcher.CliToolLauncher
2025.05.12 11:27:06 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running

Here is the es.log output:

2025.05.12 11:27:18 INFO  es[][o.e.n.NativeAccess] Using [jna] native provider and native methods for [Linux]
2025.05.12 11:27:21 INFO  es[][o.e.n.Node] version[8.16.3], pid[81], build[tar/2eb78bceb86e182dc8f45ab76a704b1bfd352c9d/2025-01-10T10:08:28.587559883Z], OS[Linux/5.15.0-1068-azure/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/17.0.2/17.0.2+8-86]
2025.05.12 11:27:21 INFO  es[][o.e.n.Node] JVM home [/usr/local/openjdk-17], using bundled JDK [false]
2025.05.12 11:27:21 INFO  es[][o.e.n.Node] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, -XX:ReplayDataFile=logs/replay_pid%p.log, -Des.distribution.type=tar, -XX:+UseG1GC, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=/opt/sonarqube/logs/es_hs_err_pid%p.log, -Xlog:disable, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/opt/sonarqube/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Dnode.store.allow_mmap=false, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, --module-path=/opt/sonarqube/elasticsearch/lib, --add-modules=jdk.net, --add-modules=ALL-MODULE-PATH, -Djdk.module.main=org.elasticsearch.server]
2025.05.12 11:27:21 INFO  es[][o.e.n.Node] Default Locale [en]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [rest-root]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [reindex]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-core]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-redact]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-security]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-geoip-enterprise-downloader]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-esql-core]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-apm-data]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [lang-painless]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [old-lucene-versions]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [parent-join]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [counted-keyword]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [logsdb]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-inference]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [analysis-common]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [dot-prefix-validation]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [health-shards-availability]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [transport-netty4]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [x-pack-otel-data]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [aggregations]
2025.05.12 11:27:26 INFO  es[][o.e.p.PluginsService] loaded module [apm]
2025.05.12 11:27:31 INFO  es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/opt/sonarqube/data (//obfuscated.url.that.we.use/sonar-data)]], net usable_space [4.9tb], net total_space [5tb], types [cifs]
2025.05.12 11:27:31 INFO  es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2025.05.12 11:27:35 INFO  es[][o.e.n.Node] node name [sonarqube], node ID [WsVNxhh7TR6MKYN-tgz13g], cluster name [sonarqube], roles [ingest, data_frozen, ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold]
2025.05.12 11:27:44 INFO  es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2025.05.12 11:27:44 INFO  es[][o.e.f.FeatureService] Registered local node features [desired_node.version_deprecated, features_supported, file_settings, flattened.ignore_above_support, geoip.downloader.database.configuration, get_database_configuration_action.multi_node, health.extended_repository_indicator, knn_retriever_supported, license-trial-independent-version, mapper.boolean_dimension, mapper.flattened.ignore_above_with_arrays_support, mapper.ignore_above_index_level_setting, mapper.index_sorting_on_nested, mapper.keyword_dimension_ignore_above, mapper.keyword_normalizer_synthetic_source, mapper.pass_through_priority, mapper.query_index_mode, mapper.range.null_values_off_by_one_fix, mapper.segment_level_fields_stats, mapper.source.synthetic_source_copy_to_fix, mapper.source.synthetic_source_copy_to_inside_objects_fix, mapper.source.synthetic_source_fallback, mapper.source.synthetic_source_stored_fields_advance_fix, mapper.source.synthetic_source_with_copy_to_and_doc_values_false, mapper.subobjects_auto, mapper.subobjects_auto_fixes, mapper.synthetic_source_keep, mapper.track_ignored_source, mapper.vectors.bbq, mapper.vectors.bit_vectors, mapper.vectors.int4_quantization, put_database_configuration_action.ipinfo, random_reranker_retriever_supported, repositories.supports_usage_stats, rest.capabilities_action, rest.local_only_capabilities, retrievers_supported, routing.boolean_routing_path, routing.multi_value_routing_path, script.hamming, script.term_stats, search.vectors.k_param_supported, security.migration_framework, security.role_mapping_cleanup, security.roles_metadata_flattened, semantic_text.search_inference_id, simulate.component.template.substitutions, simulate.index.template.substitutions, simulate.mapping.validation, simulate.mapping.validation.templates, standard_retriever_supported, stats.include_disk_thresholds, text_similarity_reranker_retriever_composition_supported, text_similarity_reranker_retriever_supported, tsdb.ts_routing_hash_doc_value_parse_byte_ref, unified_highlighter_matched_fields, usage.data_tiers.precalculate_stats]
2025.05.12 11:27:44 INFO  es[][o.e.c.m.DataStreamGlobalRetentionSettings] Updated default factory retention to [null]
2025.05.12 11:27:44 INFO  es[][o.e.c.m.DataStreamGlobalRetentionSettings] Updated max factory retention to [null]
2025.05.12 11:27:45 INFO  es[][o.e.x.s.Security] Security is disabled
2025.05.12 11:27:45 INFO  es[][o.e.x.a.APMPlugin] APM ingest plugin is enabled
2025.05.12 11:27:45 INFO  es[][o.e.x.c.t.YamlTemplateRegistry] apm index template registry is enabled
2025.05.12 11:27:46 INFO  es[][o.e.x.o.OTelPlugin] OTel ingest plugin is enabled
2025.05.12 11:27:46 INFO  es[][o.e.x.c.t.YamlTemplateRegistry] OpenTelemetry index template registry is enabled
2025.05.12 11:27:46 INFO  es[][o.e.t.a.APM] Sending apm metrics is disabled
2025.05.12 11:27:46 INFO  es[][o.e.t.a.APM] Sending apm tracing is disabled
2025.05.12 11:27:47 INFO  es[][o.e.t.n.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=1mb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=4mb, heap_size=512mb}]
2025.05.12 11:27:47 INFO  es[][o.e.d.DiscoveryModule] using discovery type [single-node] and seed hosts providers [settings]
2025.05.12 11:27:49 INFO  es[][o.e.n.Node] initialized
2025.05.12 11:27:49 INFO  es[][o.e.n.Node] starting ...
2025.05.12 11:27:49 INFO  es[][o.e.t.TransportService] publish_address {127.0.0.1:34465}, bound_addresses {127.0.0.1:34465}
2025.05.12 11:27:52 INFO  es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2025.05.12 11:27:52 INFO  es[][o.e.c.c.ClusterBootstrapService] this node is locked into cluster UUID [ZE5dFAygR_-Lvd_9zZuCwA] and will not attempt further cluster bootstrapping
2025.05.12 11:27:52 INFO  es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined in term 292)[_FINISH_ELECTION_, {sonarqube}{WsVNxhh7TR6MKYN-tgz13g}{4IPpUlg-Q4W202EA5FBDDw}{sonarqube}{127.0.0.1}{127.0.0.1:34465}{cdfhilmrstw}{8.16.3}{7000099-8518000} completing election], term: 292, version: 3366, delta: master node changed {previous [], current [{sonarqube}{WsVNxhh7TR6MKYN-tgz13g}{4IPpUlg-Q4W202EA5FBDDw}{sonarqube}{127.0.0.1}{127.0.0.1:34465}{cdfhilmrstw}{8.16.3}{7000099-8518000}]}
2025.05.12 11:27:53 INFO  es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{WsVNxhh7TR6MKYN-tgz13g}{4IPpUlg-Q4W202EA5FBDDw}{sonarqube}{127.0.0.1}{127.0.0.1:34465}{cdfhilmrstw}{8.16.3}{7000099-8518000}]}, term: 292, version: 3366, reason: Publication{term=292, version=3366}
2025.05.12 11:27:53 INFO  es[][o.e.c.c.NodeJoinExecutor] node-join: [{sonarqube}{WsVNxhh7TR6MKYN-tgz13g}{4IPpUlg-Q4W202EA5FBDDw}{sonarqube}{127.0.0.1}{127.0.0.1:34465}{cdfhilmrstw}{8.16.3}{7000099-8518000}] with reason [completing election]
2025.05.12 11:27:53 INFO  es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2025.05.12 11:27:53 INFO  es[][o.e.n.Node] started {sonarqube}{WsVNxhh7TR6MKYN-tgz13g}{4IPpUlg-Q4W202EA5FBDDw}{sonarqube}{127.0.0.1}{127.0.0.1:34465}{cdfhilmrstw}{8.16.3}{7000099-8518000}{rack_id=sonarqube, xpack.installed=true}
2025.05.12 11:27:54 INFO  es[][o.e.l.ClusterStateLicenseService] license [6019a0a9-1cf6-4feb-8bdf-32a33dc58aea] mode [basic] - valid
2025.05.12 11:27:54 INFO  es[][o.e.c.f.AbstractFileWatchingService] starting file watcher ...
2025.05.12 11:27:54 INFO  es[][o.e.c.f.AbstractFileWatchingService] file settings service up and running [tid=34]
2025.05.12 11:27:54 INFO  es[][o.e.r.s.FileSettingsService] setting file [/opt/sonarqube/temp/conf/es/operator/settings.json] not found, initializing [file_settings] as empty
2025.05.12 11:27:54 INFO  es[][o.e.g.GatewayService] recovered [6] indices into cluster_state
2025.05.12 11:27:55 INFO  es[][o.e.h.n.s.HealthNodeTaskExecutor] Node [{sonarqube}{WsVNxhh7TR6MKYN-tgz13g}] is selected as the current health node.

I'm not seeing any errors or issues; it just looks like SonarQube waits forever for Elasticsearch to reach a ready state. Elasticsearch appears to be running fine from what I can tell in the logs.

The only thing I can think to do is raise the log levels to see if we can get more information.
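
A sketch of how I could bump that in our pod spec (assuming sonar.log.level can be passed as an environment variable the same way we pass our other sonar.* settings):

# sketch only: raise SonarQube log verbosity via the container env
env:
  - name: sonar.log.level
    value: DEBUG        # INFO is the default; TRACE is noisier still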

After setting the log levels to DEBUG and reviewing the logs, these were the only things that stuck out to me:

es.log shows a cluster health task that was cancelled. Subsequent log messages say the health check completed, so I'm not sure whether this is a red herring or not.

2025.05.12 12:02:36 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=96, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=63, minTranslogGeneration=96, trimmedAboveSeqNo=-2}
2025.05.12 12:02:36 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=96, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=63, minTranslogGeneration=96, trimmedAboveSeqNo=-2}
2025.05.12 12:02:37 DEBUG es[][o.e.a.s.m.TransportMasterNodeAction] unexpected exception during publication
org.elasticsearch.tasks.TaskCancelledException: task cancelled [http channel [Netty4HttpChannel{localAddress=/127.0.0.1:9001, remoteAddress=/127.0.0.1:54788}] closed]
	at org.elasticsearch.tasks.CancellableTask.getTaskCancelledException(CancellableTask.java:122) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.tasks.CancellableTask.ensureNotCancelled(CancellableTask.java:92) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.action.admin.cluster.health.TransportClusterHealthAction.lambda$sendResponse$2(TransportClusterHealthAction.java:328) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:356) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.action.admin.cluster.health.TransportClusterHealthAction.sendResponse(TransportClusterHealthAction.java:327) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.action.admin.cluster.health.TransportClusterHealthAction$3.lambda$onTimeout$1(TransportClusterHealthAction.java:267) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:956) ~[elasticsearch-8.16.3.jar:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
	at java.lang.Thread.run(Thread.java:833) ~[?:?]
2025.05.12 12:02:37 DEBUG es[][r.suppressed] path: /_cluster/health, params: {master_timeout=30s, level=cluster, wait_for_status=yellow, timeout=30s}, status: 400
org.elasticsearch.tasks.TaskCancelledException: task cancelled [http channel [Netty4HttpChannel{localAddress=/127.0.0.1:9001, remoteAddress=/127.0.0.1:54788}] closed]
	at org.elasticsearch.tasks.CancellableTask.getTaskCancelledException(CancellableTask.java:122) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.tasks.CancellableTask.ensureNotCancelled(CancellableTask.java:92) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.action.admin.cluster.health.TransportClusterHealthAction.lambda$sendResponse$2(TransportClusterHealthAction.java:328) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:356) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.action.admin.cluster.health.TransportClusterHealthAction.sendResponse(TransportClusterHealthAction.java:327) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.action.admin.cluster.health.TransportClusterHealthAction$3.lambda$onTimeout$1(TransportClusterHealthAction.java:267) ~[elasticsearch-8.16.3.jar:?]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:956) ~[elasticsearch-8.16.3.jar:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
	at java.lang.Thread.run(Thread.java:833) ~[?:?]
2025.05.12 12:02:37 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3ys], userData[{es_version=8518000, history_uuid=WEr5qcOxQnu9_hHv5MNXHg, local_checkpoint=3257, max_seq_no=3257, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=3258, translog_uuid=0Z_JYnyiRMCLFD4RpTbPFQ}]}], last commit [CommitPoint{segment[segments_3ys], userData[{es_version=8518000, history_uuid=WEr5qcOxQnu9_hHv5MNXHg, local_checkpoint=3257, max_seq_no=3257, max_unsafe_auto_id_timestamp=-1, min_retained_seq_no=3258, translog_uuid=0Z_JYnyiRMCLFD4RpTbPFQ}]}]
2025.05.12 12:02:37 DEBUG es[][o.e.i.f.p.AbstractIndexOrdinalsFieldData] global-ordinals [join_components#auth][9] took [445.2micros]

sonar.log shows SonarQube polling Elasticsearch's cluster health endpoint on the default port 9001: the first request times out after 30 seconds and the host is briefly added to the client's blacklist, then subsequent requests come back with HTTP 408 Request Timeout because the cluster never reaches yellow status (the response body reports status red with unassigned shards).

2025.05.12 12:02:06 DEBUG app[][o.e.c.RestClient] request [GET http://localhost:9001/] returned [HTTP/1.1 200 OK]
2025.05.12 12:02:06 DEBUG app[][o.e.c.RestClient] removed [[host=http://localhost:9001]] from blacklist
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 512; pos: 512; completed: true]
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 454] start execution
2025.05.12 12:02:06 DEBUG app[][o.a.h.c.p.RequestAddCookies] CookieSpec selected: default
2025.05.12 12:02:06 DEBUG app[][o.a.h.c.p.RequestAuthCache] Re-using cached 'basic' auth scheme for http://localhost:9001
2025.05.12 12:02:06 DEBUG app[][o.a.h.c.p.RequestAuthCache] No credentials for preemptive authentication
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 454] Request connection for {}->http://localhost:9001
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://localhost:9001][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:54788<->127.0.0.1:9001[ACTIVE][r:r]: Set timeout 0
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection leased: [id: http-outgoing-0][route: {}->http://localhost:9001][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 454] Connection allocated: CPoolProxy{http-outgoing-0 [ACTIVE]}
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:54788<->127.0.0.1:9001[ACTIVE][r:r]: Set attribute http.nio.exchange-handler
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:54788<->127.0.0.1:9001[ACTIVE][rw:r]: Event set [w]
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 454] Attempt 1 to execute request
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 454] Target auth state: UNCHALLENGED
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 454] Proxy auth state: UNCHALLENGED
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:54788<->127.0.0.1:9001[ACTIVE][rw:w]: Set timeout 30000
2025.05.12 12:02:06 DEBUG app[][o.a.http.headers] http-outgoing-0 >> GET /_cluster/health?master_timeout=30s&level=cluster&timeout=30s&wait_for_status=yellow HTTP/1.1
2025.05.12 12:02:06 DEBUG app[][o.a.http.headers] http-outgoing-0 >> X-Elastic-Client-Meta: es=7.17.27,jv=17,t=7.17.27,hc=4.1.4,kt=2.0
2025.05.12 12:02:06 DEBUG app[][o.a.http.headers] http-outgoing-0 >> Content-Length: 0
2025.05.12 12:02:06 DEBUG app[][o.a.http.headers] http-outgoing-0 >> Host: localhost:9001
2025.05.12 12:02:06 DEBUG app[][o.a.http.headers] http-outgoing-0 >> Connection: Keep-Alive
2025.05.12 12:02:06 DEBUG app[][o.a.http.headers] http-outgoing-0 >> User-Agent: elasticsearch-java/7.17.27 (Java/17.0.2)
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:54788<->127.0.0.1:9001[ACTIVE][rw:w]: Event set [w]
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 454] Request completed
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:54788<->127.0.0.1:9001[ACTIVE][rw:w]: 287 bytes written
2025.05.12 12:02:06 DEBUG app[][o.apache.http.wire] http-outgoing-0 >> "GET /_cluster/health?master_timeout=30s&level=cluster&timeout=30s&wait_for_status=yellow HTTP/1.1[\r][\n]"
2025.05.12 12:02:06 DEBUG app[][o.apache.http.wire] http-outgoing-0 >> "X-Elastic-Client-Meta: es=7.17.27,jv=17,t=7.17.27,hc=4.1.4,kt=2.0[\r][\n]"
2025.05.12 12:02:06 DEBUG app[][o.apache.http.wire] http-outgoing-0 >> "Content-Length: 0[\r][\n]"
2025.05.12 12:02:06 DEBUG app[][o.apache.http.wire] http-outgoing-0 >> "Host: localhost:9001[\r][\n]"
2025.05.12 12:02:06 DEBUG app[][o.apache.http.wire] http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
2025.05.12 12:02:06 DEBUG app[][o.apache.http.wire] http-outgoing-0 >> "User-Agent: elasticsearch-java/7.17.27 (Java/17.0.2)[\r][\n]"
2025.05.12 12:02:06 DEBUG app[][o.apache.http.wire] http-outgoing-0 >> "[\r][\n]"
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready
2025.05.12 12:02:06 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:54788<->127.0.0.1:9001[ACTIVE][r:w]: Event cleared [w]
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [ACTIVE] Timeout
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:54788<->127.0.0.1:9001[ACTIVE][r:w]: Shutdown
2025.05.12 12:02:36 DEBUG app[][o.e.c.RestClient] request [GET http://localhost:9001/_cluster/health?master_timeout=30s&level=cluster&timeout=30s&wait_for_status=yellow] failed
java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: 30,000 milliseconds timeout on connection http-outgoing-0 [ACTIVE]
	at org.apache.http.concurrent.BasicFuture.getResult(BasicFuture.java:71)
	at org.apache.http.concurrent.BasicFuture.get(BasicFuture.java:84)
	at org.apache.http.impl.nio.client.FutureWrapper.get(FutureWrapper.java:70)
	at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296)
	at org.elasticsearch.client.RestClient.performRequest(RestClient.java:288)
	at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2699)
	at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:2171)
	at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:2137)
	at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:2105)
	at org.elasticsearch.client.ClusterClient.health(ClusterClient.java:151)
	at org.sonar.application.es.EsConnectorImpl.getClusterHealthStatus(EsConnectorImpl.java:79)
	at org.sonar.application.process.EsManagedProcess.checkStatus(EsManagedProcess.java:92)
	at org.sonar.application.process.EsManagedProcess.checkOperational(EsManagedProcess.java:84)
	at org.sonar.application.process.EsManagedProcess.isOperational(EsManagedProcess.java:62)
	at org.sonar.application.process.ManagedProcessHandler.refreshState(ManagedProcessHandler.java:220)
	at org.sonar.application.process.ManagedProcessHandler$EventWatcher.run(ManagedProcessHandler.java:285)
Caused by: java.net.SocketTimeoutException: 30,000 milliseconds timeout on connection http-outgoing-0 [ACTIVE]
	at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.timeout(HttpAsyncRequestExecutor.java:387)
	at org.apache.http.impl.nio.client.InternalRequestExecutor.timeout(InternalRequestExecutor.java:116)
	at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:92)
	at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:39)
	at org.apache.http.impl.nio.reactor.AbstractIODispatch.timeout(AbstractIODispatch.java:175)
	at org.apache.http.impl.nio.reactor.BaseIOReactor.sessionTimedOut(BaseIOReactor.java:261)
	at org.apache.http.impl.nio.reactor.AbstractIOReactor.timeoutCheck(AbstractIOReactor.java:502)
	at org.apache.http.impl.nio.reactor.BaseIOReactor.validate(BaseIOReactor.java:211)
	at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:280)
	at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
	at java.base/java.lang.Thread.run(Thread.java:833)
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 454] connection aborted
2025.05.12 12:02:36 DEBUG app[][o.e.c.RestClient] added [[host=http://localhost:9001]] to blacklist
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Releasing connection: [id: http-outgoing-0][route: {}->http://localhost:9001][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection released: [id: http-outgoing-0][route: {}->http://localhost:9001][total kept alive: 0; route allocated: 0 of 10; total allocated: 0 of 30]
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-0 127.0.0.1:54788<->127.0.0.1:9001[CLOSED][]: Shutdown
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-0 [CLOSED]: Disconnected
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 455] start execution
2025.05.12 12:02:36 DEBUG app[][o.a.h.c.p.RequestAddCookies] CookieSpec selected: default
2025.05.12 12:02:36 DEBUG app[][o.a.h.c.p.RequestAuthCache] Re-using cached 'basic' auth scheme for http://localhost:9001
2025.05.12 12:02:36 DEBUG app[][o.a.h.c.p.RequestAuthCache] No credentials for preemptive authentication
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 455] Request connection for {}->http://localhost:9001
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://localhost:9001][total kept alive: 0; route allocated: 0 of 10; total allocated: 0 of 30]
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection leased: [id: http-outgoing-1][route: {}->http://localhost:9001][total kept alive: 0; route allocated: 1 of 10; total allocated: 0 of 30]
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 455] Connection allocated: CPoolProxy{http-outgoing-1 [ACTIVE]}
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][r:]: Set attribute http.nio.exchange-handler
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][rw:]: Event set [w]
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][rw:]: Set timeout 0
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-1 [ACTIVE]: Connected
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][rw:]: Set attribute http.nio.http-exchange-state
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 455] Start connection routing
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 455] route completed
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 455] Connection route established
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 455] Attempt 1 to execute request
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 455] Target auth state: UNCHALLENGED
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 455] Proxy auth state: UNCHALLENGED
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][rw:]: Set timeout 30000
2025.05.12 12:02:36 DEBUG app[][o.a.http.headers] http-outgoing-1 >> GET /_cluster/health?master_timeout=30s&level=cluster&timeout=30s&wait_for_status=yellow HTTP/1.1
2025.05.12 12:02:36 DEBUG app[][o.a.http.headers] http-outgoing-1 >> X-Elastic-Client-Meta: es=7.17.27,jv=17,t=7.17.27,hc=4.1.4,kt=2.0
2025.05.12 12:02:36 DEBUG app[][o.a.http.headers] http-outgoing-1 >> Content-Length: 0
2025.05.12 12:02:36 DEBUG app[][o.a.http.headers] http-outgoing-1 >> Host: localhost:9001
2025.05.12 12:02:36 DEBUG app[][o.a.http.headers] http-outgoing-1 >> Connection: Keep-Alive
2025.05.12 12:02:36 DEBUG app[][o.a.http.headers] http-outgoing-1 >> User-Agent: elasticsearch-java/7.17.27 (Java/17.0.2)
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][rw:]: Event set [w]
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 455] Request completed
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][rw:w]: 287 bytes written
2025.05.12 12:02:36 DEBUG app[][o.apache.http.wire] http-outgoing-1 >> "GET /_cluster/health?master_timeout=30s&level=cluster&timeout=30s&wait_for_status=yellow HTTP/1.1[\r][\n]"
2025.05.12 12:02:36 DEBUG app[][o.apache.http.wire] http-outgoing-1 >> "X-Elastic-Client-Meta: es=7.17.27,jv=17,t=7.17.27,hc=4.1.4,kt=2.0[\r][\n]"
2025.05.12 12:02:36 DEBUG app[][o.apache.http.wire] http-outgoing-1 >> "Content-Length: 0[\r][\n]"
2025.05.12 12:02:36 DEBUG app[][o.apache.http.wire] http-outgoing-1 >> "Host: localhost:9001[\r][\n]"
2025.05.12 12:02:36 DEBUG app[][o.apache.http.wire] http-outgoing-1 >> "Connection: Keep-Alive[\r][\n]"
2025.05.12 12:02:36 DEBUG app[][o.apache.http.wire] http-outgoing-1 >> "User-Agent: elasticsearch-java/7.17.27 (Java/17.0.2)[\r][\n]"
2025.05.12 12:02:36 DEBUG app[][o.apache.http.wire] http-outgoing-1 >> "[\r][\n]"
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-1 [ACTIVE] Request ready
2025.05.12 12:02:36 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][r:w]: Event cleared [w]
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][r:r]: 545 bytes read
2025.05.12 12:03:06 DEBUG app[][o.apache.http.wire] http-outgoing-1 << "HTTP/1.1 408 Request Timeout[\r][\n]"
2025.05.12 12:03:06 DEBUG app[][o.apache.http.wire] http-outgoing-1 << "X-elastic-product: Elasticsearch[\r][\n]"
2025.05.12 12:03:06 DEBUG app[][o.apache.http.wire] http-outgoing-1 << "content-type: application/json[\r][\n]"
2025.05.12 12:03:06 DEBUG app[][o.apache.http.wire] http-outgoing-1 << "content-length: 426[\r][\n]"
2025.05.12 12:03:06 DEBUG app[][o.apache.http.wire] http-outgoing-1 << "[\r][\n]"
2025.05.12 12:03:06 DEBUG app[][o.apache.http.wire] http-outgoing-1 << "{"cluster_name":"sonarqube","status":"red","timed_out":true,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":21,"active_shards":21,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":2,"unassigned_primary_shards":2,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":91.30434782608695}"
2025.05.12 12:03:06 DEBUG app[][o.a.http.headers] http-outgoing-1 << HTTP/1.1 408 Request Timeout
2025.05.12 12:03:06 DEBUG app[][o.a.http.headers] http-outgoing-1 << X-elastic-product: Elasticsearch
2025.05.12 12:03:06 DEBUG app[][o.a.http.headers] http-outgoing-1 << content-type: application/json
2025.05.12 12:03:06 DEBUG app[][o.a.http.headers] http-outgoing-1 << content-length: 426
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-1 [ACTIVE(426)] Response received
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 455] Response received HTTP/1.1 408 Request Timeout
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-1 [ACTIVE(426)] Input ready
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 455] Consume content
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 455] Connection can be kept alive indefinitely
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 455] Response processed
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 455] releasing connection
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Releasing connection: [id: http-outgoing-1][route: {}->http://localhost:9001][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection [id: http-outgoing-1][route: {}->http://localhost:9001] can be kept alive indefinitely
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][r:r]: Set timeout 0
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection released: [id: http-outgoing-1][route: {}->http://localhost:9001][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
2025.05.12 12:03:06 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-1 [ACTIVE] [content length: 426; pos: 426; completed: true]
2025.05.12 12:03:06 DEBUG app[][o.e.c.RestClient] request [GET http://localhost:9001/_cluster/health?master_timeout=30s&level=cluster&timeout=30s&wait_for_status=yellow] returned [HTTP/1.1 408 Request Timeout]
2025.05.12 12:03:06 DEBUG app[][o.e.c.RestClient] removed [[host=http://localhost:9001]] from blacklist
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 456] start execution
2025.05.12 12:03:07 DEBUG app[][o.a.h.c.p.RequestAddCookies] CookieSpec selected: default
2025.05.12 12:03:07 DEBUG app[][o.a.h.c.p.RequestAuthCache] Re-using cached 'basic' auth scheme for http://localhost:9001
2025.05.12 12:03:07 DEBUG app[][o.a.h.c.p.RequestAuthCache] No credentials for preemptive authentication
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 456] Request connection for {}->http://localhost:9001
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://localhost:9001][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][r:r]: Set timeout 0
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.PoolingNHttpClientConnectionManager] Connection leased: [id: http-outgoing-1][route: {}->http://localhost:9001][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.InternalHttpAsyncClient] [exchange: 456] Connection allocated: CPoolProxy{http-outgoing-1 [ACTIVE]}
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][r:r]: Set attribute http.nio.exchange-handler
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][rw:r]: Event set [w]
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.InternalIODispatch] http-outgoing-1 [ACTIVE] Request ready
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 456] Attempt 1 to execute request
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 456] Target auth state: UNCHALLENGED
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 456] Proxy auth state: UNCHALLENGED
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][rw:w]: Set timeout 30000
2025.05.12 12:03:07 DEBUG app[][o.a.http.headers] http-outgoing-1 >> GET /_cluster/health?master_timeout=30s&level=cluster&timeout=30s&wait_for_status=yellow HTTP/1.1
2025.05.12 12:03:07 DEBUG app[][o.a.http.headers] http-outgoing-1 >> X-Elastic-Client-Meta: es=7.17.27,jv=17,t=7.17.27,hc=4.1.4,kt=2.0
2025.05.12 12:03:07 DEBUG app[][o.a.http.headers] http-outgoing-1 >> Content-Length: 0
2025.05.12 12:03:07 DEBUG app[][o.a.http.headers] http-outgoing-1 >> Host: localhost:9001
2025.05.12 12:03:07 DEBUG app[][o.a.http.headers] http-outgoing-1 >> Connection: Keep-Alive
2025.05.12 12:03:07 DEBUG app[][o.a.http.headers] http-outgoing-1 >> User-Agent: elasticsearch-java/7.17.27 (Java/17.0.2)
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.ManagedNHttpClientConnectionImpl] http-outgoing-1 127.0.0.1:39612<->127.0.0.1:9001[ACTIVE][rw:w]: Event set [w]
2025.05.12 12:03:07 DEBUG app[][o.a.h.i.n.c.MainClientExec] [exchange: 456] Request completed

Hello @stantheadmin, thanks a lot for the detailed post and log info.

Can I ask you to share a bit more about your Helm setup and values file (stripped of any sensitive data)?

In DEBUG mode the blacklist mechanism is expected, and as you said, nothing seems fundamentally wrong in those logs so far.

The first thing I would like to check is whether you have enabled persistence.
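
For reference, with the official chart that toggle lives in values.yaml and looks roughly like this (just a sketch; the exact keys can vary between chart versions):

# values.yaml (official sonarqube chart) - sketch, check your chart version's values
persistence:
  enabled: true        # when enabled, a PVC is mounted at /opt/sonarqube/data
  storageClass: ""     # e.g. azurefile; empty uses the cluster default
  size: 5Gi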

Hey Jeremy, thanks for replying and reviewing this! We deploy this with Azure DevOps using the Deploy to Kubernetes agent job. It pulls an image from our own registry (for which I have obfuscated the FQDN), as shown in the YAML below. That YAML is pretty straightforward: pod config, a few file mounts, and a bit of networking.

Where could I check whether I have enabled persistence? I assume that would be in sonar.properties or something like that.

Again, thank you so much for replying! I appreciate any help we can get.

# vi pods.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: appdev
  name: sqb
  annotations:
    deployment.kubernetes.io/max-replicas: "1"
spec:
  selector:
    matchLabels:
      app: sqb
  replicas: 1
  template:
    metadata:
      labels:
        app: sqb
    spec:
      containers:
      - name: sonarqube
        image: image.repo.fqdn/sonarqube:2025.2.0.105476
        ports:
        - containerPort: 9006
        volumeMounts:
          - name: sonarconf
            mountPath: /opt/sonarqube/conf
          - name: sonardata
            mountPath: /opt/sonarqube/data
        env:
          - name: sonar.search.javaAdditionalOpts
            value: -Dnode.store.allow_mmap=false
        resources:
          limits:
            cpu: 800m
            memory: "8Gi"
          requests:
            cpu: 400m
            memory: "2Gi"
      dnsPolicy: "Default"
      volumes:
      - name: sonarconf
        persistentVolumeClaim:
          claimName: sonarconf
      - name: sonardata
        persistentVolumeClaim:
          claimName: sonardata
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sonarconf
spec:
  capacity:
    storage: 5Ti
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  azureFile:
    secretName: appdevfileshare
    shareName: sonar-conf
    readOnly: false
  mountOptions:
  - uid=999
  - gid=999
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sonardata
spec:
  capacity:
    storage: 5Ti
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  azureFile:
    secretName: appdevfileshare
    shareName: sonar-data
    readOnly: false
  mountOptions:
  - uid=999
  - gid=999
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarconf
  namespace: appdev
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Ti
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonardata
  namespace: appdev
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Ti
---
apiVersion: v1
kind: Service
metadata:
  namespace: appdev
  name: sqblb
spec:
  type: ClusterIP
  ports: 
  - protocol: TCP
    port: 80
    targetPort: 9006
  selector:
    app: sqb

Since our Helm deployment is a custom one and we're using persistent volume claims/file shares in our cloud provider, I'd say that what gets written to the SonarQube conf and data directories is persistent. I'm not sure whether that is exactly the same as the Helm persistence toggle described here.

I would say it's equivalent, yes. You can try scaling your pods down to 0, deleting the PVC, and then reapplying your YAML.

I still have a few questions/recommendations on the values themselves.

I see containerPort is 9006. Did you change that directly within your “custom” Docker image? By default it is 9000.

Also, having node.store.allow_mmap=false is not recommended for production workloads, and I think it is also not recommended in Docker, but this might have changed; I'll double-check.

Regarding requests/limits, this is purely my opinion, but for Java processes I tend to prefer setting the same amount for request and limit, because Java is quite bad at giving back memory it has allocated; that said, it should not be the issue here.
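
Purely to illustrate that preference (not something you need to change), the memory request and limit could be aligned like this:

# sketch: same memory request and limit for the Java-heavy SonarQube container
        resources:
          limits:
            cpu: 800m
            memory: "8Gi"
          requests:
            cpu: 400m
            memory: "8Gi"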

Persisting /opt/sonarqube/data is not strictly needed: SonarQube only needs the database, and everything else will be recreated and re-indexed at startup. If you don't have a very big SonarQube instance, the performance difference is marginal, and dropping it can simplify things and actually make deployments/upgrades a bit more robust for Elasticsearch.
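
As a rough sketch of one way to do that (keeping conf on its PVC while letting data be ephemeral, so Elasticsearch rebuilds its indexes locally at startup); these are fragments of the container and pod spec respectively:

# sketch: keep conf persistent, make /opt/sonarqube/data ephemeral
        volumeMounts:
          - name: sonarconf
            mountPath: /opt/sonarqube/conf
          - name: sonardata
            mountPath: /opt/sonarqube/data
      volumes:
      - name: sonarconf
        persistentVolumeClaim:
          claimName: sonarconf
      - name: sonardata
        emptyDir: {}        # replaces the CIFS-backed azurefile PVC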

Jeremy, removing the PVC and volume mount did it! Our SonarQube instance is back up and running now.

As for your other questions, I have a mix of answers. Most of what I know is second-hand, as the person who put this in place is long gone.

  1. Yes, the custom Dockerfile uses 9006 instead of the default. I'm not sure why, though.
  2. We added that parameter 8 months ago when SonarQube wouldn't start up and a post on Stack Overflow indicated it would fix the issue (it did), so we left it in there.
  3. Easy enough to increase the request or lower the limit if needed.

Thanks again for your help! It is such a relief to have the service back up and running.

Once we get through our backlog of scans, I can look into removing the node.store.allow_mmap=false parameter.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.