SonarQube 7.7 Developer upgrade not starting

After updating from the 7.5 Developer edition to the 7.7 Developer edition, SonarQube is refusing to start, giving the error message below in the log. Has anyone else come across this yet?

2019.03.21 16:56:37 ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
	at org.elasticsearch.cluster.block.ClusterBlocks.indexBlockedException(ClusterBlocks.java:183)
	at org.elasticsearch.action.support.replication.TransportReplicationAction.blockExceptions(TransportReplicationAction.java:255)
	at org.elasticsearch.action.support.replication.TransportReplicationAction.access$500(TransportReplicationAction.java:100)
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:780)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:172)
	at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:100)
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167)
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139)
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81)
	at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.doRun(TransportBulkAction.java:420)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(TransportBulkAction.java:533)
	at org.elasticsearch.action.bulk.TransportBulkAction.executeIngestAndBulk(TransportBulkAction.java:271)
	at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:222)
	at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:90)
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167)
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139)
	at org.elasticsearch.action.bulk.TransportSingleItemBulkWriteAction.doExecute(TransportSingleItemBulkWriteAction.java:69)
	at org.elasticsearch.action.bulk.TransportSingleItemBulkWriteAction.doExecute(TransportSingleItemBulkWriteAction.java:44)
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167)
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139)
	at org.elasticsearch.action.support.replication.TransportReplicationAction$OperationTransportHandler.messageReceived(TransportReplicationAction.java:284)
	at org.elasticsearch.action.support.replication.TransportReplicationAction$OperationTransportHandler.messageReceived(TransportReplicationAction.java:276)
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66)
	at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1289)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:140)
	at org.elasticsearch.transport.TcpTransport.handleRequest(TcpTransport.java:1247)
	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1111)
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:914)
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:53)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
	at java.lang.Thread.run(Unknown Source)
2019.03.21 16:56:37 INFO  web[][o.s.p.StopWatcher] Stopping process
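
For context: the FORBIDDEN/12/index read-only / allow delete (api) block is raised by Elasticsearch's flood-stage disk watermark, which by default marks indices read-only once the disk holding them is roughly 95% full. A quick way to check the likely culprit, assuming a default Linux install under /opt/sonarqube (substitute your own SonarQube home):

$ df -h /opt/sonarqube/data        # free space on the volume holding the indices
$ du -sh /opt/sonarqube/data/es6   # size of the embedded Elasticsearch data in 7.x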

Having re-read the release description and spotted the note about disk space, I upped my available space from ~40GB to ~100GB and the problem went away…


Space in which directory? I have several mount points, all in /opt/sonarqube, and I’ve increased them all to about 200G free. Still having the same problem, however. Is this based on the space in / or in one of the /opt/sonarqube directories?

After increasing the free space, you need to delete the SQ_HOME/data/es6 folder and start SonarQube again (because all the indexes were marked as read-only).
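
For anyone following along, a minimal sketch of that procedure on Linux, assuming SonarQube lives under /opt/sonarqube (adjust the path and the platform directory to your install):

$ /opt/sonarqube/bin/linux-x86-64/sonar.sh stop
$ rm -rf /opt/sonarqube/data/es6    # the read-only indices; they are rebuilt automatically on startup
$ /opt/sonarqube/bin/linux-x86-64/sonar.sh start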


Brilliant, thank you. That solved the problem. All back up and running again.


Thanks, I had this problem too.
First step: increase the hard disk space (to 20GB in my case).
Second step: delete the sonarqube/data/es6 folder.

And now it works.

Worked for me as well.

How can one change the Elasticsearch space?

I receive the following in my console with 7.7

$ StartSonar.bat
wrapper | --> Wrapper Started as Console
wrapper | Launching a JVM…
jvm 1 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
jvm 1 | Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
jvm 1 |
jvm 1 | 2019.07.30 11:00:40 INFO app[o.s.a.AppFileSystem] Cleaning or creating temp directory C:\Users\vijshett\Downloads\sonarqube-7.7\sonarqube-7.7\temp
jvm 1 | 2019.07.30 11:00:40 INFO app[o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
jvm 1 | 2019.07.30 11:00:40 INFO app[o.s.a.p.ProcessLauncherImpl] Launch process[[key=‘es’, ipcIndex=1, logFilenamePrefix=es]] from [C:\Users\vijshett\Downloads\sonarqube-7.7\sonarqube-7.7\elasticsearch]: C:\Hybris\sapjvm_8\jre\bin\java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=C:\Users\vijshett\Downloads\sonarqube-7.7\sonarqube-7.7\temp\es6 -XX:ErrorFile=C:\Users\vijshett\Downloads\sonarqube-7.7\sonarqube-7.7\logs\es_hs_err_pid%p.log -Xms512m -Xmx512m -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.path.home=C:\Users\vijshett\Downloads\sonarqube-7.7\sonarqube-7.7\elasticsearch -Des.path.conf=C:\Users\vijshett\Downloads\sonarqube-7.7\sonarqube-7.7\temp\conf\es -cp lib/* org.elasticsearch.bootstrap.Elasticsearch
jvm 1 | 2019.07.30 11:00:40 INFO app[o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
jvm 1 | 2019.07.30 11:00:40 INFO app[o.e.p.PluginsService] no modules loaded
jvm 1 | 2019.07.30 11:00:40 INFO app[o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
jvm 1 | 2019.07.30 11:00:43 INFO app[o.s.a.SchedulerImpl] Process[es] is up
jvm 1 | 2019.07.30 11:00:43 INFO app[o.s.a.p.ProcessLauncherImpl] Launch process[[key=‘web’, ipcIndex=2, logFilenamePrefix=web]] from [C:\Users\vijshett\Downloads\sonarqube-7.7\sonarqube-7.7]: C:\Hybris\sapjvm_8\jre\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=C:\Users\vijshett\Downloads\sonarqube-7.7\sonarqube-7.7\temp -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -cp ./lib/common/*;C:\Users\vijshett\Downloads\sonarqube-7.7\sonarqube-7.7\lib\jdbc\h2\h2-1.3.176.jar org.sonar.server.app.WebServer C:\Users\vijshett\Downloads\sonarqube-7.7\sonarqube-7.7\temp\sq-process5154821518586334139properties
jvm 1 | 2019.07.30 11:00:53 WARN app[o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 1
jvm 1 | 2019.07.30 11:00:53 INFO app[o.s.a.SchedulerImpl] Process [es] is stopped
jvm 1 | 2019.07.30 11:00:53 INFO app[o.s.a.SchedulerImpl] Process [web] is stopped
jvm 1 | 2019.07.30 11:00:53 INFO app[o.s.a.SchedulerImpl] SonarQube is stopped
wrapper | <-- Wrapper Stopped
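
If the question is about how much memory the embedded Elasticsearch gets (rather than disk space), that is controlled by the sonar.search.javaOpts property in conf/sonar.properties; a sketch with purely illustrative values:

# <SONARQUBE_HOME>/conf/sonar.properties
# The 7.x default is -Xmx512m -Xms512m, as seen in the launch command above.
sonar.search.javaOpts=-Xms1G -Xmx1G -XX:+HeapDumpOnOutOfMemoryError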

Resolved the issue by killing the processes that were holding the ports and restarting the system.
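
For reference, the default ports are 9000 (web) and 9001 (Elasticsearch, as shown in the log above). On Windows, a quick way to find and stop whatever is holding them; the <pid> placeholder is whatever netstat reports on your machine:

$ netstat -ano | findstr :9001
$ taskkill /PID <pid> /F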

Thanks for this. This worked for us.

However, I found we needed to clear both the old and new locations. On our server we had decided to install SonarQube 7.7 under a different home directory, so we needed to delete the old SonarQube 6.7 folder as well.

Removed:
SQ_6.7_HOME/data/es5
SQ_7.7_HOME/data/es6

This works for me too after deleting es6.
However, does anyone know how much space is needed for es6? In my case, my data folder has 10GB total and only 19% used, with 8GB free, and SonarQube still failed to come up.

It does not make sense to me.

Does anyone know the requirement?

Thanks!

Dian

More info.

I even tried increasing my data folder to 30GB, but without deleting es6 it simply won't come up.
There is no way it is a space issue; it is as if es6 got locked up.

Dian

Hi Dian,

You don’t say what version you’re using. Recent versions of SonarQube unlock the ES indices when they start up. Some earlier versions don’t. Your choices are:

  • delete the data folder & let the indices be rebuilt
  • upgrade to the latest version
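
If you're not sure which exact version you're running, the server reports it over the web API; a quick check, assuming the default local URL:

$ curl http://localhost:9000/api/server/version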

 
HTH,
Ann