Hi Sonar community,
I am upgrading from 9.9 LTS to 2025.1.2 LTS. The upgrade went well when I tested it in the staging environment, but I faced an issue during the prod upgrade: the Elasticsearch bulk index request timed out with the error below.
2025.06.03 07:37:09 ERROR web[o.s.s.es.BulkIndexer] Fail to execute bulk index request: org.elasticsearch.action.bulk.BulkRequest/unset
java.net.SocketTimeoutException: 60,000 milliseconds timeout on connection http-outgoing-2 [ACTIVE
When I compared the DB upgrade logs for staging and prod, I found that the "Drop Elasticsearch indices" step below was missing from the prod upgrade logs.
2025.06.05 08:41:22 INFO web[o.s.s.p.d.m.s.MassUpdate] 0 rows processed (0 items/sec)
2025.06.05 08:41:22 INFO web[o.s.s.e.MigrationEsClientImpl] Drop Elasticsearch indices [projectmeasures]
Is there a relation between this step missing from the prod upgrade and the Elasticsearch timeout? Both of my hosts are on the same network, so I am wondering why the timeout only happened for the production instance. Does this need any Elasticsearch timeout configuration, or is it purely DB related? Please let me know.
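A quick way to narrow this down is to query the embedded Elasticsearch's cluster health API while the migration runs. Below is a minimal sketch in Python; it assumes a default standalone setup (embedded ES listening on 127.0.0.1:9001, the default sonar.search.port, with no authentication), so adjust the host/port if you changed them in sonar.properties:

```python
# Minimal health probe for the embedded Elasticsearch (assumed defaults:
# 127.0.0.1:9001, no auth). Adjust to match your sonar.properties.
import json
import urllib.request

ES_HEALTH_URL = "http://127.0.0.1:9001/_cluster/health?pretty"

try:
    with urllib.request.urlopen(ES_HEALTH_URL, timeout=15) as resp:
        health = json.load(resp)
    print(f"status={health['status']} nodes={health['number_of_nodes']} "
          f"unassigned_shards={health['unassigned_shards']}")
except OSError as exc:
    # A connection refused / timeout here points at the ES side or the host,
    # not at the DB migration step.
    print(f"Elasticsearch not reachable: {exc}")
```

If this fails or stalls at the same time SonarQube reports the bulk timeout, the problem is on the Elasticsearch/host side rather than a missing migration step.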
Whenever I upgrade (I've done it countless times), I don't configure anything in Elasticsearch. It might be network latency, as previously mentioned by our friend here, G Ann.
Hi Ann,
Thanks for your reply. I have never done any Elasticsearch configuration in previous upgrades either, so I was puzzled when I hit this issue. As you mentioned, network latency could be a reason, and I will certainly look into that. I am attaching the web.log for your review. sonarqube-LTS-2025-log.txt (56.6 KB)
2025.06.03 07:41:53 INFO es[][o.e.h.n.s.HealthNodeTaskExecutor] Node [{sonarqube}{VNTheoQLQY6cGShhlOhtCg}] is selected as the current health node.
2025.06.03 07:41:53 INFO es[][o.e.l.ClusterStateLicenseService] license [8364c092-c359-43fa-a4b6-43becc32c646] mode [basic] - valid
2025.06.03 07:41:54 INFO es[][o.e.c.m.MetadataCreateIndexService] [metadatas] creating index, cause [api], templates [], shards [1]/[0]
2025.06.03 07:42:46 INFO es[][o.e.c.r.a.DiskThresholdMonitor] skipping monitor as a check is already in progress
2025.06.03 07:42:54 INFO es[][o.e.n.Node] stopping ...
2025.06.03 07:42:54 INFO es[][o.e.c.f.AbstractFileWatchingService] shutting down watcher thread
2025.06.03 07:42:54 INFO es[][o.e.c.f.AbstractFileWatchingService] watcher service stopped
2025.06.03 07:42:54 INFO es[][o.e.n.Node] stopped
2025.06.03 07:42:54 INFO es[][o.e.n.Node] closing ...
And then the stack traces start, with a Caused by clause of
So the question is why Elasticsearch starts up and then stops. How are you running SonarQube? Is it from the zip? Docker? Helm? (There’s a reason we ask these questions in the topic template.) I suspect something external to SonarQube is sending a shutdown signal shortly after startup.
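One way to confirm that pattern from the logs alone is to measure how long the ES node actually stays up between "started" and "stopping ...". A minimal sketch, assuming the timestamp format from the excerpts above; the log path is hypothetical, so point it at the es.log of your install:

```python
# Minimal sketch: scan es.log for node start/stop events to see how long
# Elasticsearch stays up before something shuts it down.
# The path is hypothetical; the timestamp format matches the excerpts above.
from datetime import datetime
from pathlib import Path

ES_LOG = Path("/data/sonarqube/logs/es.log")  # adjust to your install
TS_FORMAT = "%Y.%m.%d %H:%M:%S"

events = []
for line in ES_LOG.read_text(errors="replace").splitlines():
    if "o.e.n.Node] started" in line or "o.e.n.Node] stopping" in line:
        timestamp = datetime.strptime(line[:19], TS_FORMAT)
        events.append((timestamp, "started" if "] started" in line else "stopping"))

for (t1, kind1), (t2, kind2) in zip(events, events[1:]):
    if kind1 == "started" and kind2 == "stopping":
        print(f"ES ran for {(t2 - t1).total_seconds():.0f}s before stopping at {t2}")
```

If Elasticsearch consistently lives for only a minute or so before "stopping", that supports the idea of something external shutting it down shortly after startup.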
Hi Ann,
We are running SonarQube from a zip installation. What I did was grab the zip from the download center, copy all of the existing config from the current version into the sonar.properties of the new version, and then start SonarQube. If you suspect an external issue is making Elasticsearch fail, let me investigate the VM and update here. I appreciate your help with this. Next time I will keep the support topic format intact.
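As a side note, the upgrade guide generally recommends re-applying your customizations to the new sonar.properties rather than copying the old file wholesale, so a key-level comparison of the two files can catch anything that was dropped. A minimal sketch; the 9.9.2 path matches the one in the logs later in this thread, and the 2025.1.2 path is hypothetical:

```python
# Minimal sketch: compare active (uncommented) settings between the old and
# new sonar.properties to spot anything dropped during the zip upgrade.
# Both paths are assumptions; point them at your actual installs.
from pathlib import Path

def load_props(path):
    settings = {}
    for raw in Path(path).read_text(errors="replace").splitlines():
        line = raw.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings

old_props = load_props("/data/sonarqube/sonarqube-9.9.2.77730/conf/sonar.properties")
new_props = load_props("/data/sonarqube/sonarqube-2025.1.2/conf/sonar.properties")

for key in sorted(set(old_props) - set(new_props)):
    print(f"set in old install but not in new one: {key}={old_props[key]}")
```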
@ganncamp,
I was watching the current Elasticsearch logs and can see the entries below. Does the node always expect a 5s threshold for the health check? This is from the service I rolled back to (v9.9.2 LTS). Please evaluate and let me know. Meanwhile, I didn't see any network latency or other process-related issues impacting SonarQube on the version it is currently running. Also, I believe that with the 2025 LTS, Elasticsearch creates a folder structure like /data/es8/nodes.
Logs below.
2025.06.16 22:06:07 WARN es[][o.e.m.f.FsHealthService] health check of [/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0] took [6603ms] which is above the warn threshold of [5s]
2025.06.16 22:10:44 WARN es[][o.e.m.f.FsHealthService] health check of [/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0] took [36217ms] which is above the warn threshold of [5s]
2025.06.16 22:23:12 WARN es[][o.e.m.f.FsHealthService] health check of [/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0] took [14607ms] which is above the warn threshold of [5s]
2025.06.16 22:25:26 WARN es[][o.e.m.f.FsHealthService] health check of [/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0] took [13806ms] which is above the warn threshold of [5s]
2025.06.16 22:27:43 WARN es[][o.e.m.f.FsHealthService] health check of [/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0] took [17208ms] which is above the warn threshold of [5s]
2025.06.16 22:29:49 WARN es[][o.e.m.f.FsHealthService] health check of [/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0] took [6403ms] which is above the warn threshold of [5s]
2025.06.16 22:31:58 WARN es[][o.e.m.f.FsHealthService] health check of [/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0] took [9205ms] which is above the warn threshold of [5s]
2025.06.16 22:46:17 WARN es[][o.e.m.f.FsHealthService] health check of [/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0] took [7804ms] which is above the warn threshold of [5s]
2025.06.16 22:58:30 WARN es[][o.e.m.f.FsHealthService] health check of [/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0] took [8004ms] which is above the warn threshold of [5s]
2025.06.16 23:06:45 WARN es[][o.e.m.f.FsHealthService] health check of [/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0] took [8204ms] which is above the warn threshold of [5s]
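Those FsHealthService warnings point at disk rather than network: Elasticsearch periodically writes and fsyncs a small file on each data path and logs a WARN when that takes longer than the 5s threshold shown in your logs. A rough way to reproduce that measurement is sketched below; it assumes the data path from the log excerpts and should be run as the same OS user that runs SonarQube:

```python
# Rough sketch: time a small write + fsync on the ES data path, roughly what
# FsHealthService measures. The path comes from the log excerpts above;
# adjust it to your install and run as the SonarQube OS user.
import os
import time

DATA_PATH = "/data/sonarqube/sonarqube-9.9.2.77730/data/es7/nodes/0"
probe_file = os.path.join(DATA_PATH, "latency_probe.tmp")

start = time.monotonic()
fd = os.open(probe_file, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
try:
    os.write(fd, b"x" * 512)   # tiny payload; we are timing the fsync, not throughput
    os.fsync(fd)               # force the write to stable storage
finally:
    os.close(fd)
    os.remove(probe_file)

print(f"write+fsync took {(time.monotonic() - start) * 1000:.1f} ms")
```

If this regularly takes hundreds of milliseconds or more, slow storage on that VM would also explain the bulk index timeouts seen during the upgrade, independently of the missing migration step.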