- Versions: SonarQube 7.1, PostgreSQL 9.3
- Goal: upgrade SonarQube from 7.1 to 7.4 (or later)
- See description below
Hello all,
I am attempting to upgrade my SonarQube instance from 7.1 to 7.4 or newer. After installing the new version, I am directed to localhost/setup to migrate the database. Before the migration, I execute VACUUM FULL on the sonarqube database, then click "Upgrade" on the setup page. From that point on, the logs give me very little information about what is going on: Elasticsearch just keeps restarting over and over, with no indication of why. I suspect my tables are somehow corrupted, but I have no prior knowledge of what has been done to this database. Before the upgrade, everything works, and I can restore from a pg_dumpall taken beforehand, so I can always return the server to its original working configuration.
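For reference, these are roughly the commands I run before the migration (the database name, user, and output path reflect my setup and may differ on yours):

pg_dumpall -U postgres > /var/backups/sonarqube_pre_upgrade.sql   # full cluster dump; this is what I restore from
psql -U postgres -d sonarqube -c "VACUUM FULL;"                   # rewrites all tables in the sonarqube db to reclaim space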
I have also tried PostgreSQL 9.6, but I hit the same issue: ES just keeps looping.
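For completeness, the database connection settings in my sonar.properties look like this (username and database name are from my setup; password redacted):

sonar.jdbc.username=sonarqube
sonar.jdbc.password=*****
sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube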
Any feedback is welcome!
sonar.log:
2019.08.14 13:30:35 DEBUG app[][o.e.t.ConnectionManager] connected to node [{sonarqube}{lWboEdW8TdSxOXzTB-kI6Q}{XRbwX63rRuyfxxjXNOZeTw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}]
2019.08.14 13:30:36 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2019.08.14 13:30:36 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/opt/sonarqube-7.9.1]: /usr/lib/jvm/java-11-openjdk-11.0.1.13-3.el7_6.x86_64/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube-7.9.1/temp --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/common/*:/opt/sonarqube-7.9.1/lib/jdbc/postgresql/postgresql-42.2.5.jar org.sonar.server.app.WebServer /opt/sonarqube-7.9.1/temp/sq-process8565573925086191741properties
2019.08.14 13:30:46 DEBUG app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [web]: 0
2019.08.14 13:30:46 INFO app[][o.s.a.SchedulerImpl] Process[web] is stopped
2019.08.14 13:30:46 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 143
2019.08.14 13:30:46 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2019.08.14 13:30:46 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
2019.08.14 13:30:46 DEBUG app[][o.s.a.SchedulerImpl] Stopping [ce]...
2019.08.14 13:30:46 DEBUG app[][o.s.a.SchedulerImpl] Stopping [web]...
2019.08.14 13:30:46 DEBUG app[][o.s.a.SchedulerImpl] Stopping [es]...
<-- Wrapper Stopped
es.log:
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{lWboEdW8TdSxOXzTB-kI6Q}{amcUC63FTGO2Ta5AYRk0Jw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])]: took [8ms] done applying updated cluster state (version: 1, uuid: pJHMSXrIQ_ysVMG9m_5kxw)
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.MasterService] processing [zen-disco-elected-as-master ([0] nodes joined)]: took [39ms] done publishing updated cluster state (version: 1, uuid: pJHMSXrIQ_ysVMG9m_5kxw)
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.MasterService] processing [update snapshot state after node removal]: execute
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.MasterService] processing [update snapshot state after node removal]: took [0s] no change in cluster state
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.MasterService] processing [local-gateway-elected-state]: execute
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [2], source [local-gateway-elected-state]
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [2]
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{lWboEdW8TdSxOXzTB-kI6Q}{amcUC63FTGO2Ta5AYRk0Jw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [2] source [local-gateway-elected-state]])]: execute
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [2], source [apply cluster state (from master [master {sonarqube}{lWboEdW8TdSxOXzTB-kI6Q}{amcUC63FTGO2Ta5AYRk0Jw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [2] source [local-gateway-elected-state]])]
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 2
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 2
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 2
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{lWboEdW8TdSxOXzTB-kI6Q}{amcUC63FTGO2Ta5AYRk0Jw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [2] source [local-gateway-elected-state]])]: took [33ms] done applying updated cluster state (version: 2, uuid: c2rCmZJeRlS89UXl4TMW4Q)
2019.08.14 13:32:54 INFO es[][o.e.g.GatewayService] recovered [0] indices into cluster_state
2019.08.14 13:32:54 DEBUG es[][o.e.c.s.MasterService] processing [local-gateway-elected-state]: took [36ms] done publishing updated cluster state (version: 2, uuid: c2rCmZJeRlS89UXl4TMW4Q)
2019.08.14 13:32:55 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxCapacityPerThread: disabled
2019.08.14 13:32:55 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxSharedCapacityFactor: disabled
2019.08.14 13:32:55 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.linkCapacity: disabled
2019.08.14 13:32:55 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.ratio: disabled
2019.08.14 13:32:56 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkAccessible: true
2019.08.14 13:32:56 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkBounds: true
2019.08.14 13:32:56 DEBUG es[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@3beef308
2019.08.14 13:33:05 INFO es[][o.e.n.Node] stopping ...
2019.08.14 13:33:05 INFO es[][o.e.n.Node] stopped
2019.08.14 13:33:05 INFO es[][o.e.n.Node] closing ...
2019.08.14 13:33:05 INFO es[][o.e.n.Node] closed