Stuck in a page refresh loop during initial login using admin/admin

Must-share information (formatted with Markdown):

  • which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension)
    sonarqube:latest, mc1arke/sonarqube-with-community-branch-plugin, sonarqube:lts-community, mc1arke/sonarqube-with-community-branch-plugin:latest, and a few other images I can't remember

  • how is SonarQube deployed: zip, Docker, Helm
    Docker

  • what are you trying to achieve
    Log in using the default admin/admin credentials

  • what have you tried so far to achieve this
    See the description and logs below.

Do not share screenshots of logs – share the text itself (bonus points for being well-formatted)!

I deployed SonarQube on Docker and went to the login page, but when I enter admin/admin the page simply refreshes and never takes me to the update-password page. There was one exception where I did reach the update-password page and got to the dashboard, but after trying to log in again with the new password I was stuck in the refresh loop once more. I don't know what I did differently that time, and I haven't been able to reproduce a successful admin/admin login since.

So far I have tried, with no success:

  • clearing cookies and browser history
  • restarting from scratch
  • different image versions
  • a different database
  • resetting the password back to admin in the database using the command from the documentation
  • different browsers and incognito mode
  • different computers
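
For reference, this is roughly how I have been starting the containers when pointing SonarQube at PostgreSQL. The image tags, network/volume names, port, and credentials below are placeholders rather than my exact values:

```bash
# Placeholder sketch of the docker commands used (names, port, and
# credentials are examples, not the exact values from my setup).
docker network create sonarnet

# PostgreSQL for SonarQube
docker run -d --name sonar-db --network sonarnet \
  -e POSTGRES_USER=sonar \
  -e POSTGRES_PASSWORD=sonar \
  -e POSTGRES_DB=sonarqube \
  -v sonar_db_data:/var/lib/postgresql/data \
  postgres:15

# SonarQube pointed at that database
docker run -d --name sonarqube --network sonarnet \
  -p 9000:9000 \
  -e SONAR_JDBC_URL=jdbc:postgresql://sonar-db:5432/sonarqube \
  -e SONAR_JDBC_USERNAME=sonar \
  -e SONAR_JDBC_PASSWORD=sonar \
  -v sonarqube_data:/opt/sonarqube/data \
  -v sonarqube_extensions:/opt/sonarqube/extensions \
  -v sonarqube_logs:/opt/sonarqube/logs \
  sonarqube:lts-community
```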

Here are the logs and developer tools:

es.log:

2024.03.09 21:19:58 INFO  es[][o.e.n.Node] version[7.17.8], pid[1605], build[default/tar/120eabe1c8a0cb2ae87cffc109a5b65d213e9df1/2022-12-02T17:33:09.727072865Z], OS[Linux/6.5.0-1014-aws/amd64], JVM[Eclipse Adoptium/OpenJDK 64-Bit Server VM/17.0.10/17.0.10+7]
2024.03.09 21:19:58 INFO  es[][o.e.n.Node] JVM home [/usr/lib/jvm/temurin-17-jdk-amd64]
2024.03.09 21:19:58 INFO  es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=/opt/sonarqube/logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/opt/sonarqube/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/elasticsearch, -Des.path.conf=/opt/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2024.03.09 21:19:59 INFO  es[][o.e.p.PluginsService] loaded module [analysis-common]
2024.03.09 21:19:59 INFO  es[][o.e.p.PluginsService] loaded module [lang-painless]
2024.03.09 21:19:59 INFO  es[][o.e.p.PluginsService] loaded module [parent-join]
2024.03.09 21:19:59 INFO  es[][o.e.p.PluginsService] loaded module [reindex]
2024.03.09 21:19:59 INFO  es[][o.e.p.PluginsService] loaded module [transport-netty4]
2024.03.09 21:19:59 INFO  es[][o.e.p.PluginsService] no plugins loaded
2024.03.09 21:19:59 INFO  es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/ (/dev/root)]], net usable_space [25.6gb], net total_space [28.8gb], types [ext4]
2024.03.09 21:19:59 INFO  es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2024.03.09 21:19:59 INFO  es[][o.e.n.Node] node name [sonarqube], node ID [BSca3hYoQAO6-GbRgqBgZw], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2024.03.09 21:20:02 INFO  es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2024.03.09 21:20:02 INFO  es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2024.03.09 21:20:02 INFO  es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2024.03.09 21:20:03 INFO  es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2024.03.09 21:20:03 INFO  es[][o.e.n.Node] initialized
2024.03.09 21:20:03 INFO  es[][o.e.n.Node] starting ...
2024.03.09 21:20:03 INFO  es[][o.e.t.TransportService] publish_address {127.0.0.1:35109}, bound_addresses {127.0.0.1:35109}
2024.03.09 21:20:03 INFO  es[][o.e.b.BootstrapChecks] explicitly enforcing bootstrap checks
2024.03.09 21:20:03 INFO  es[][o.e.c.c.Coordinator] setting initial configuration to VotingConfiguration{BSca3hYoQAO6-GbRgqBgZw}
2024.03.09 21:20:03 INFO  es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{BSca3hYoQAO6-GbRgqBgZw}{e2CL9pKaSQy3wq_t4FpabA}{127.0.0.1}{127.0.0.1:35109}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{sonarqube}{BSca3hYoQAO6-GbRgqBgZw}{e2CL9pKaSQy3wq_t4FpabA}{127.0.0.1}{127.0.0.1:35109}{cdfhimrsw}]}
2024.03.09 21:20:04 INFO  es[][o.e.c.c.CoordinationState] cluster UUID set to [O0zPi-7OQb6bthc-UjOhng]
2024.03.09 21:20:04 INFO  es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{BSca3hYoQAO6-GbRgqBgZw}{e2CL9pKaSQy3wq_t4FpabA}{127.0.0.1}{127.0.0.1:35109}{cdfhimrsw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
2024.03.09 21:20:04 INFO  es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2024.03.09 21:20:04 INFO  es[][o.e.n.Node] started
2024.03.09 21:20:04 INFO  es[][o.e.g.GatewayService] recovered [0] indices into cluster_state
2024.03.09 21:20:14 INFO  es[][o.e.c.m.MetadataCreateIndexService] [metadatas] creating index, cause [api], templates [], shards [1]/[0]
2024.03.09 21:20:14 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[metadatas][0]]]).
2024.03.09 21:20:14 INFO  es[][o.e.c.m.MetadataMappingService] [metadatas/uHKL701BQrajdsgHebeT-A] create_mapping [metadata]
2024.03.09 21:20:15 INFO  es[][o.e.c.m.MetadataCreateIndexService] [components] creating index, cause [api], templates [], shards [5]/[0]
2024.03.09 21:20:15 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[components][4]]]).
2024.03.09 21:20:15 INFO  es[][o.e.c.m.MetadataMappingService] [components/HUVnbarETl-0YLXPJLktSA] create_mapping [auth]
2024.03.09 21:20:15 INFO  es[][o.e.c.m.MetadataCreateIndexService] [projectmeasures] creating index, cause [api], templates [], shards [5]/[0]
2024.03.09 21:20:15 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[projectmeasures][4]]]).
2024.03.09 21:20:15 INFO  es[][o.e.c.m.MetadataMappingService] [projectmeasures/8hxCT7kYRpGs1Hwm3Yf8qg] create_mapping [auth]
2024.03.09 21:20:16 INFO  es[][o.e.c.m.MetadataCreateIndexService] [rules] creating index, cause [api], templates [], shards [2]/[0]
2024.03.09 21:20:16 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[rules][0]]]).
2024.03.09 21:20:16 INFO  es[][o.e.c.m.MetadataMappingService] [rules/9MF3Ca2DRZqpPoc_5MeA3A] create_mapping [rule]
2024.03.09 21:20:16 INFO  es[][o.e.c.m.MetadataCreateIndexService] [issues] creating index, cause [api], templates [], shards [5]/[0]
2024.03.09 21:20:16 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[issues][4]]]).
2024.03.09 21:20:16 INFO  es[][o.e.c.m.MetadataMappingService] [issues/MGQDJdVHSxydwBgbEvslRQ] create_mapping [auth]
2024.03.09 21:20:17 INFO  es[][o.e.c.m.MetadataCreateIndexService] [users] creating index, cause [api], templates [], shards [1]/[0]
2024.03.09 21:20:17 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[users][0]]]).
2024.03.09 21:20:17 INFO  es[][o.e.c.m.MetadataMappingService] [users/cYaC9ehgRhW5oKmYHi3UgA] create_mapping [user]
2024.03.09 21:20:17 INFO  es[][o.e.c.m.MetadataCreateIndexService] [views] creating index, cause [api], templates [], shards [5]/[0]
2024.03.09 21:20:17 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[views][4]]]).
2024.03.09 21:20:17 INFO  es[][o.e.c.m.MetadataMappingService] [views/dJGFFONyTgSBVEnY1IYplQ] create_mapping [view]
2024.03.09 21:20:42 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [30s] to [-1]
2024.03.09 21:20:42 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [30s] to [-1]
2024.03.09 21:20:42 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [-1] to [30s]
2024.03.09 21:20:42 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [-1] to [30s]
2024.03.09 21:20:42 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [30s] to [-1]
2024.03.09 21:20:42 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [30s] to [-1]
2024.03.09 21:20:42 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [-1] to [30s]
2024.03.09 21:20:42 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [-1] to [30s]
2024.03.09 22:12:56 INFO  es[][o.e.n.Node] stopping ...
2024.03.09 22:12:56 INFO  es[][o.e.n.Node] stopped
2024.03.09 22:12:56 INFO  es[][o.e.n.Node] closing ...
2024.03.09 22:12:56 INFO  es[][o.e.n.Node] closed
2024.03.09 22:13:37 INFO  es[][o.e.n.Node] version[7.17.8], pid[766], build[default/tar/120eabe1c8a0cb2ae87cffc109a5b65d213e9df1/2022-12-02T17:33:09.727072865Z], OS[Linux/6.5.0-1014-aws/amd64], JVM[Eclipse Adoptium/OpenJDK 64-Bit Server VM/17.0.10/17.0.10+7]
2024.03.09 22:13:37 INFO  es[][o.e.n.Node] JVM home [/usr/lib/jvm/temurin-17-jdk-amd64]
2024.03.09 22:13:37 INFO  es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=/opt/sonarqube/logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/opt/sonarqube/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/elasticsearch, -Des.path.conf=/opt/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2024.03.09 22:13:38 INFO  es[][o.e.p.PluginsService] loaded module [analysis-common]
2024.03.09 22:13:38 INFO  es[][o.e.p.PluginsService] loaded module [lang-painless]
2024.03.09 22:13:38 INFO  es[][o.e.p.PluginsService] loaded module [parent-join]
2024.03.09 22:13:38 INFO  es[][o.e.p.PluginsService] loaded module [reindex]
2024.03.09 22:13:38 INFO  es[][o.e.p.PluginsService] loaded module [transport-netty4]
2024.03.09 22:13:38 INFO  es[][o.e.p.PluginsService] no plugins loaded
2024.03.09 22:13:38 INFO  es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/ (/dev/root)]], net usable_space [25.4gb], net total_space [28.8gb], types [ext4]
2024.03.09 22:13:38 INFO  es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2024.03.09 22:13:39 INFO  es[][o.e.n.Node] node name [sonarqube], node ID [BSca3hYoQAO6-GbRgqBgZw], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2024.03.09 22:13:43 INFO  es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2024.03.09 22:13:43 INFO  es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2024.03.09 22:13:43 INFO  es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2024.03.09 22:13:43 INFO  es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2024.03.09 22:13:43 INFO  es[][o.e.n.Node] initialized
2024.03.09 22:13:43 INFO  es[][o.e.n.Node] starting ...
2024.03.09 22:13:44 INFO  es[][o.e.t.TransportService] publish_address {127.0.0.1:42143}, bound_addresses {127.0.0.1:42143}
2024.03.09 22:13:44 INFO  es[][o.e.c.c.Coordinator] cluster UUID [O0zPi-7OQb6bthc-UjOhng]
2024.03.09 22:13:44 INFO  es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{BSca3hYoQAO6-GbRgqBgZw}{XXJ_1RDGRiutXyWpajP-mw}{127.0.0.1}{127.0.0.1:42143}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 2, version: 43, delta: master node changed {previous [], current [{sonarqube}{BSca3hYoQAO6-GbRgqBgZw}{XXJ_1RDGRiutXyWpajP-mw}{127.0.0.1}{127.0.0.1:42143}{cdfhimrsw}]}
2024.03.09 22:13:44 INFO  es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{BSca3hYoQAO6-GbRgqBgZw}{XXJ_1RDGRiutXyWpajP-mw}{127.0.0.1}{127.0.0.1:42143}{cdfhimrsw}]}, term: 2, version: 43, reason: Publication{term=2, version=43}
2024.03.09 22:13:44 INFO  es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2024.03.09 22:13:44 INFO  es[][o.e.n.Node] started
2024.03.09 22:13:44 INFO  es[][o.e.g.GatewayService] recovered [7] indices into cluster_state
2024.03.09 22:13:46 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]]]).
2024.03.09 22:13:55 INFO  es[][o.e.c.m.MetadataDeleteIndexService] [components/HUVnbarETl-0YLXPJLktSA] deleting index
2024.03.09 22:13:56 INFO  es[][o.e.c.m.MetadataDeleteIndexService] [issues/MGQDJdVHSxydwBgbEvslRQ] deleting index
2024.03.09 22:13:56 INFO  es[][o.e.c.m.MetadataDeleteIndexService] [projectmeasures/8hxCT7kYRpGs1Hwm3Yf8qg] deleting index
2024.03.09 22:13:57 INFO  es[][o.e.c.m.MetadataDeleteIndexService] [rules/9MF3Ca2DRZqpPoc_5MeA3A] deleting index
2024.03.09 22:13:57 INFO  es[][o.e.c.m.MetadataDeleteIndexService] [users/cYaC9ehgRhW5oKmYHi3UgA] deleting index
2024.03.09 22:13:57 INFO  es[][o.e.c.m.MetadataDeleteIndexService] [views/dJGFFONyTgSBVEnY1IYplQ] deleting index
2024.03.09 22:13:57 INFO  es[][o.e.c.m.MetadataCreateIndexService] [components] creating index, cause [api], templates [], shards [5]/[0]
2024.03.09 22:13:57 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[components][4]]]).
2024.03.09 22:13:57 INFO  es[][o.e.c.m.MetadataMappingService] [components/pVkeceH3RFa4gy54A1kSqg] create_mapping [auth]
2024.03.09 22:13:57 INFO  es[][o.e.c.m.MetadataCreateIndexService] [projectmeasures] creating index, cause [api], templates [], shards [5]/[0]
2024.03.09 22:13:58 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[projectmeasures][4]]]).
2024.03.09 22:13:58 INFO  es[][o.e.c.m.MetadataMappingService] [projectmeasures/qLJjEDTkR0eUTH4h6w3Rkg] create_mapping [auth]
2024.03.09 22:13:58 INFO  es[][o.e.c.m.MetadataCreateIndexService] [rules] creating index, cause [api], templates [], shards [2]/[0]
2024.03.09 22:13:58 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[rules][0]]]).
2024.03.09 22:13:58 INFO  es[][o.e.c.m.MetadataMappingService] [rules/2XROtOvCRbeWkkuF-chgRg] create_mapping [rule]
2024.03.09 22:13:58 INFO  es[][o.e.c.m.MetadataCreateIndexService] [issues] creating index, cause [api], templates [], shards [5]/[0]
2024.03.09 22:13:58 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[issues][4]]]).
2024.03.09 22:13:58 INFO  es[][o.e.c.m.MetadataMappingService] [issues/27WkXL3xRj-cBXa14TgtSg] create_mapping [auth]
2024.03.09 22:13:59 INFO  es[][o.e.c.m.MetadataCreateIndexService] [users] creating index, cause [api], templates [], shards [1]/[0]
2024.03.09 22:13:59 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[users][0]]]).
2024.03.09 22:13:59 INFO  es[][o.e.c.m.MetadataMappingService] [users/k8sNMAudQduij3Mu7IM56A] create_mapping [user]
2024.03.09 22:13:59 INFO  es[][o.e.c.m.MetadataCreateIndexService] [views] creating index, cause [api], templates [], shards [5]/[0]
2024.03.09 22:13:59 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[views][4]]]).
2024.03.09 22:13:59 INFO  es[][o.e.c.m.MetadataMappingService] [views/YqayW6kkSSiSkh2HJFqcjA] create_mapping [view]
2024.03.09 22:14:31 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [30s] to [-1]
2024.03.09 22:14:31 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [30s] to [-1]
2024.03.09 22:14:32 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [-1] to [30s]
2024.03.09 22:14:32 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [-1] to [30s]
2024.03.09 22:14:32 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [30s] to [-1]
2024.03.09 22:14:32 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [30s] to [-1]
2024.03.09 22:14:32 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [-1] to [30s]
2024.03.09 22:14:32 INFO  es[][o.e.c.s.IndexScopedSettings] updating [index.refresh_interval] from [-1] to [30s]
2024.03.09 22:18:55 INFO  es[][o.e.n.Node] stopping ...
2024.03.09 22:18:55 INFO  es[][o.e.n.Node] stopped
2024.03.09 22:18:55 INFO  es[][o.e.n.Node] closing ...
2024.03.09 22:18:55 INFO  es[][o.e.n.Node] closed
2024.03.09 22:19:36 INFO  es[][o.e.n.Node] version[7.17.8], pid[769], build[default/tar/120eabe1c8a0cb2ae87cffc109a5b65d213e9df1/2022-12-02T17:33:09.727072865Z], OS[Linux/6.5.0-1014-aws/amd64], JVM[Eclipse Adoptium/OpenJDK 64-Bit Server VM/17.0.10/17.0.10+7]
2024.03.09 22:19:36 INFO  es[][o.e.n.Node] JVM home [/usr/lib/jvm/temurin-17-jdk-amd64]
2024.03.09 22:19:36 INFO  es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=/opt/sonarqube/logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/opt/sonarqube/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/elasticsearch, -Des.path.conf=/opt/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2024.03.09 22:19:37 INFO  es[][o.e.p.PluginsService] loaded module [analysis-common]
2024.03.09 22:19:37 INFO  es[][o.e.p.PluginsService] loaded module [lang-painless]
2024.03.09 22:19:37 INFO  es[][o.e.p.PluginsService] loaded module [parent-join]
2024.03.09 22:19:37 INFO  es[][o.e.p.PluginsService] loaded module [reindex]
2024.03.09 22:19:37 INFO  es[][o.e.p.PluginsService] loaded module [transport-netty4]
2024.03.09 22:19:37 INFO  es[][o.e.p.PluginsService] no plugins loaded
2024.03.09 22:19:37 INFO  es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/ (/dev/root)]], net usable_space [25.4gb], net total_space [28.8gb], types [ext4]
2024.03.09 22:19:37 INFO  es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2024.03.09 22:19:37 INFO  es[][o.e.n.Node] node name [sonarqube], node ID [BSca3hYoQAO6-GbRgqBgZw], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2024.03.09 22:19:41 INFO  es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2024.03.09 22:19:41 INFO  es[][o.e.i.r.RecoverySettings] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
2024.03.09 22:19:41 INFO  es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2024.03.09 22:19:42 INFO  es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2024.03.09 22:19:42 INFO  es[][o.e.n.Node] initialized
2024.03.09 22:19:42 INFO  es[][o.e.n.Node] starting ...
2024.03.09 22:19:42 INFO  es[][o.e.t.TransportService] publish_address {127.0.0.1:34351}, bound_addresses {127.0.0.1:34351}
2024.03.09 22:19:42 INFO  es[][o.e.c.c.Coordinator] cluster UUID [O0zPi-7OQb6bthc-UjOhng]
2024.03.09 22:19:43 INFO  es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{BSca3hYoQAO6-GbRgqBgZw}{qG17Dj9oRJ-2xmHZ7oSH2w}{127.0.0.1}{127.0.0.1:34351}{cdfhimrsw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 3, version: 109, delta: master node changed {previous [], current [{sonarqube}{BSca3hYoQAO6-GbRgqBgZw}{qG17Dj9oRJ-2xmHZ7oSH2w}{127.0.0.1}{127.0.0.1:34351}{cdfhimrsw}]}
2024.03.09 22:19:43 INFO  es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{BSca3hYoQAO6-GbRgqBgZw}{qG17Dj9oRJ-2xmHZ7oSH2w}{127.0.0.1}{127.0.0.1:34351}{cdfhimrsw}]}, term: 3, version: 109, reason: Publication{term=3, version=109}
2024.03.09 22:19:43 INFO  es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2024.03.09 22:19:43 INFO  es[][o.e.n.Node] started
2024.03.09 22:19:43 INFO  es[][o.e.g.GatewayService] recovered [7] indices into cluster_state
2024.03.09 22:19:45 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]]]).

sonar.log:

2024.03.09 21:19:56 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2024.03.09 21:19:56 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:35109]
2024.03.09 21:19:56 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2024.03.09 21:19:56 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2024.03.09 21:20:04 INFO  app[][o.s.a.SchedulerImpl] Process[es] is up
2024.03.09 21:20:04 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[WEB_SERVER] from [/opt/sonarqube]: /usr/lib/jvm/temurin-17-jdk-amd64/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/sun.management=ALL-UNNAMED --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED -Dcom.redhat.fips=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/sonar-application-9.9.0.65466.jar:/opt/sonarqube/lib/jdbc/postgresql/postgresql-42.5.1.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process4366692646814768721properties
2024.03.09 21:20:43 INFO  app[][o.s.a.SchedulerImpl] Process[web] is up
2024.03.09 21:20:43 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[COMPUTE_ENGINE] from [/opt/sonarqube]: /usr/lib/jvm/temurin-17-jdk-amd64/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/sun.management=ALL-UNNAMED --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED -Dcom.redhat.fips=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/sonar-application-9.9.0.65466.jar:/opt/sonarqube/lib/jdbc/postgresql/postgresql-42.5.1.jar org.sonar.ce.app.CeServer /opt/sonarqube/temp/sq-process17043014305850564531properties
2024.03.09 21:20:44 WARN  app[][startup] ####################################################################################################################
2024.03.09 21:20:44 WARN  app[][startup] Default Administrator credentials are still being used. Make sure to change the password or deactivate the account.
2024.03.09 21:20:44 WARN  app[][startup] ####################################################################################################################
2024.03.09 21:20:48 INFO  app[][o.s.a.SchedulerImpl] Process[ce] is up
2024.03.09 21:20:48 INFO  app[][o.s.a.SchedulerImpl] SonarQube is operational
2024.03.09 22:12:56 INFO  app[][o.s.a.SchedulerImpl] Stopping SonarQube
2024.03.09 22:12:56 INFO  app[][o.s.a.SchedulerImpl] Sonarqube has been requested to stop
2024.03.09 22:12:56 INFO  app[][o.s.a.SchedulerImpl] Stopping [Compute Engine] process...
2024.03.09 22:12:56 INFO  app[][o.s.a.SchedulerImpl] Process[Compute Engine] is stopped
2024.03.09 22:12:56 INFO  app[][o.s.a.SchedulerImpl] Stopping [Web Server] process...
2024.03.09 22:12:56 INFO  app[][o.s.a.SchedulerImpl] Stopping [ElasticSearch] process...
2024.03.09 22:12:56 INFO  app[][o.s.a.SchedulerImpl] Process[Web Server] is stopped
2024.03.09 22:12:56 WARN  app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [ElasticSearch]: 143
2024.03.09 22:12:56 INFO  app[][o.s.a.SchedulerImpl] Process[ElasticSearch] is stopped
2024.03.09 22:12:56 INFO  app[][o.s.a.SchedulerImpl] SonarQube is stopped
2024.03.09 22:13:34 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2024.03.09 22:13:34 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:42143]
2024.03.09 22:13:34 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2024.03.09 22:13:34 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2024.03.09 22:13:46 INFO  app[][o.s.a.SchedulerImpl] Process[es] is up
2024.03.09 22:13:46 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[WEB_SERVER] from [/opt/sonarqube]: /usr/lib/jvm/temurin-17-jdk-amd64/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/sun.management=ALL-UNNAMED --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED -Dcom.redhat.fips=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/sonar-application-9.9.0.65466.jar:/opt/sonarqube/lib/jdbc/h2/h2-2.1.214.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process572050563954628283properties
2024.03.09 22:14:32 INFO  app[][o.s.a.SchedulerImpl] Process[web] is up
2024.03.09 22:14:32 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[COMPUTE_ENGINE] from [/opt/sonarqube]: /usr/lib/jvm/temurin-17-jdk-amd64/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/sun.management=ALL-UNNAMED --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED -Dcom.redhat.fips=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/sonar-application-9.9.0.65466.jar:/opt/sonarqube/lib/jdbc/h2/h2-2.1.214.jar org.sonar.ce.app.CeServer /opt/sonarqube/temp/sq-process18256267709818689486properties
2024.03.09 22:14:33 WARN  app[][startup] ####################################################################################################################
2024.03.09 22:14:33 WARN  app[][startup] Default Administrator credentials are still being used. Make sure to change the password or deactivate the account.
2024.03.09 22:14:33 WARN  app[][startup] ####################################################################################################################
2024.03.09 22:14:37 INFO  app[][o.s.a.SchedulerImpl] Process[ce] is up
2024.03.09 22:14:37 INFO  app[][o.s.a.SchedulerImpl] SonarQube is operational
2024.03.09 22:18:54 INFO  app[][o.s.a.SchedulerImpl] Stopping SonarQube
2024.03.09 22:18:54 INFO  app[][o.s.a.SchedulerImpl] Sonarqube has been requested to stop
2024.03.09 22:18:54 INFO  app[][o.s.a.SchedulerImpl] Stopping [Compute Engine] process...
2024.03.09 22:18:54 INFO  app[][o.s.a.SchedulerImpl] Process[Compute Engine] is stopped
2024.03.09 22:18:54 INFO  app[][o.s.a.SchedulerImpl] Stopping [Web Server] process...
2024.03.09 22:18:55 INFO  app[][o.s.a.SchedulerImpl] Process[Web Server] is stopped
2024.03.09 22:18:55 INFO  app[][o.s.a.SchedulerImpl] Stopping [ElasticSearch] process...
2024.03.09 22:18:55 INFO  app[][o.s.a.SchedulerImpl] Process[ElasticSearch] is stopped
2024.03.09 22:18:55 WARN  app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [ElasticSearch]: 143
2024.03.09 22:18:55 INFO  app[][o.s.a.SchedulerImpl] SonarQube is stopped
2024.03.09 22:19:32 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2024.03.09 22:19:32 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:34351]
2024.03.09 22:19:32 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2024.03.09 22:19:33 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2024.03.09 22:19:45 INFO  app[][o.s.a.SchedulerImpl] Process[es] is up
2024.03.09 22:19:45 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[WEB_SERVER] from [/opt/sonarqube]: /usr/lib/jvm/temurin-17-jdk-amd64/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/sun.management=ALL-UNNAMED --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED -Dcom.redhat.fips=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/sonar-application-9.9.0.65466.jar:/opt/sonarqube/lib/jdbc/h2/h2-2.1.214.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process11783521727675828601properties
2024.03.09 22:19:57 INFO  app[][o.s.a.SchedulerImpl] Process[web] is up
2024.03.09 22:19:57 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[COMPUTE_ENGINE] from [/opt/sonarqube]: /usr/lib/jvm/temurin-17-jdk-amd64/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/sun.management=ALL-UNNAMED --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED -Dcom.redhat.fips=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/sonar-application-9.9.0.65466.jar:/opt/sonarqube/lib/jdbc/h2/h2-2.1.214.jar org.sonar.ce.app.CeServer /opt/sonarqube/temp/sq-process10540551568600819230properties
2024.03.09 22:19:57 WARN  app[][startup] ####################################################################################################################
2024.03.09 22:19:57 WARN  app[][startup] Default Administrator credentials are still being used. Make sure to change the password or deactivate the account.
2024.03.09 22:19:57 WARN  app[][startup] ####################################################################################################################
2024.03.09 22:20:01 INFO  app[][o.s.a.SchedulerImpl] Process[ce] is up
2024.03.09 22:20:01 INFO  app[][o.s.a.SchedulerImpl] SonarQube is operational



Hey there.

First things first, can you make sure you're on the latest patch of the SonarQube 9.9 LTS? The logs show you're running v9.9.0, and right now v9.9.4 is the latest patch release.

We can only help with sonarqube:latest and sonarqube:lts-community in this community. :slight_smile:
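
Something along these lines should get you onto the latest LTS patch (the container name, volume names, and port below are only examples; reuse whatever your current setup mounts):

```bash
# Pull the current 9.9 LTS community image.
docker pull sonarqube:lts-community

# Stop and remove the old container; persisted data stays in the mounted volumes
# (or in your external database, if you configured one).
docker stop sonarqube && docker rm sonarqube

# Recreate the container from the updated image, reusing the same volumes.
# Add your SONAR_JDBC_* environment variables here if you use an external database.
docker run -d --name sonarqube -p 9000:9000 \
  -v sonarqube_data:/opt/sonarqube/data \
  -v sonarqube_extensions:/opt/sonarqube/extensions \
  -v sonarqube_logs:/opt/sonarqube/logs \
  sonarqube:lts-community
```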

I tried v9.9.4 as well; it still has the same issue.