SonarQube 7.3 hangs on first startup

I’m trying to follow the “get started in 2 minutes” guide, and I can’t get it to run. I’ve downloaded and started it, but it never starts listening on port 9000. I’m running Ubuntu 18.04 64-bit with:

$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-0ubuntu0.18.04.1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)

I set sonar.log.level=TRACE in conf/sonar.properties to see what’s happening. I get the output below, and then it just stops. Any help?
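For completeness, that log-level change is the only edit I’ve made to conf/sonar.properties:

sonar.log.level=TRACE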

$ bin/linux-x86-64/sonar.sh console
Running SonarQube...
wrapper  | --> Wrapper Started as Console
wrapper  | Launching a JVM...
jvm 1    | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
jvm 1    |   Copyright 1999-2006 Tanuki Software, Inc.  All Rights Reserved.
jvm 1    | 
jvm 1    | 2018.08.28 10:22:04 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/ryan/dev/tools/sonarqube-7.3/temp
jvm 1    | 2018.08.28 10:22:04 TRACE app[][o.s.a.NodeLifecycle] tryToMoveTo from INIT to STARTING => true
jvm 1    | 2018.08.28 10:22:04 TRACE app[][o.s.a.p.Lifecycle] tryToMoveTo es from INIT to STARTING => true
jvm 1    | 2018.08.28 10:22:04 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
jvm 1    | 2018.08.28 10:22:04 INFO  app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/ryan/dev/tools/sonarqube-7.3/elasticsearch]: /home/ryan/dev/tools/sonarqube-7.3/elasticsearch/bin/elasticsearch -Epath.conf=/home/ryan/dev/tools/sonarqube-7.3/temp/conf/es
jvm 1    | 2018.08.28 10:22:04 TRACE app[][o.s.a.p.Lifecycle] tryToMoveTo es from STARTING to STARTED => true
jvm 1    | 2018.08.28 10:22:04 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
jvm 1    | 2018.08.28 10:22:05 TRACE app[][o.e.p.PluginsService] plugin loaded from classpath [- Plugin information:
jvm 1    | Name: org.elasticsearch.transport.Netty4Plugin
jvm 1    | Description: classpath plugin
jvm 1    | Version: NA
jvm 1    | Native Controller: false
jvm 1    |  * Classname: org.elasticsearch.transport.Netty4Plugin]
jvm 1    | 2018.08.28 10:22:05 INFO  app[][o.e.p.PluginsService] no modules loaded
jvm 1    | 2018.08.28 10:22:05 INFO  app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [8], keep alive [5m]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [listener], size [2], queue size [unbounded]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [index], size [4], queue size [200]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [2], keep alive [5m]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [2], keep alive [5m]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [search], size [7], queue size [1k]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [2], keep alive [5m]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [8], keep alive [5m]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [5], keep alive [5m]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [get], size [4], queue size [1k]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [bulk], size [4], queue size [200]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [2], keep alive [5m]
jvm 1    | 2018.08.28 10:22:05 DEBUG app[][o.e.c.n.IfConfig] configuration:
jvm 1    | 
jvm 1    | lo
jvm 1    |         inet 127.0.0.1 netmask:255.0.0.0 scope:host
jvm 1    |         UP LOOPBACK mtu:65536 index:1
jvm 1    | 
jvm 1    | eno1
jvm 1    |         inet 10.66.4.45 netmask:255.255.252.0 broadcast:10.66.7.255 scope:site
jvm 1    |         inet6 fe80::f414:faa2:2581:f9d4 prefixlen:64 scope:link
jvm 1    |         hardware F8:CA:B8:01:CF:F7
jvm 1    |         UP MULTICAST mtu:1500 index:2
jvm 1    | 
jvm 1    | wlp2s0
jvm 1    |         inet 172.16.29.110 netmask:255.255.240.0 broadcast:172.16.31.255 scope:site
jvm 1    |         inet6 fe80::401a:61b3:c661:5010 prefixlen:64 scope:link
jvm 1    |         hardware DC:53:60:6E:CF:53
jvm 1    |         UP MULTICAST mtu:1500 index:3
jvm 1    | 
jvm 1    | docker0
jvm 1    |         inet 172.17.0.1 netmask:255.255.0.0 broadcast:0.0.0.0 scope:site
jvm 1    |         hardware 02:42:72:70:05:CA
jvm 1    |         MULTICAST mtu:1500 index:4
jvm 1    | 
jvm 1    | vboxnet0
jvm 1    |         inet 192.168.2.1 netmask:255.255.255.0 broadcast:192.168.2.255 scope:site
jvm 1    |         hardware 0A:00:27:00:00:00
jvm 1    |         UP MULTICAST mtu:1500 index:6
jvm 1    | 

Hi,

You may find this documentation helpful: https://docs.sonarqube.org/display/SONAR/Troubleshooting

Also, be aware that you can’t run SonarQube as root. If you try, you’ll find that it gets halfway up and then shuts right back down.

Ann

Thanks for the pointer, Ann! I’m running as my own user, not root. I followed the instructions on the “getting started” page almost exactly: 1) download the zip, 2) extract it to a directory, 3) run <extracted location>/bin/<os dir>/sonar.sh.
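Concretely, that was roughly this on my machine (paths taken from the logs above):

$ unzip sonarqube-7.3.zip -d ~/dev/tools
$ ~/dev/tools/sonarqube-7.3/bin/linux-x86-64/sonar.sh console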

I did check the logs, but I neglected to mention that I only have a README.txt and sonar.log in the logs dir. The output there seems identical to the console output I pasted in my OP. Funny thing, though: I had left the process running, and when I came back to it, I found that it had stalled for about 28 minutes, then kept going for a bit before a bunch of exceptions started showing up. It seems to be the same ConnectException over and over, as if something (Elasticsearch?) tried to start for 28 minutes and failed. Here are the delayed logs, starting from the last log event that I originally posted:

2018.08.28 10:22:05 DEBUG app[][o.e.c.n.IfConfig] configuration:

lo
        inet 127.0.0.1 netmask:255.0.0.0 scope:host
        UP LOOPBACK mtu:65536 index:1

eno1
        inet 10.66.4.45 netmask:255.255.252.0 broadcast:10.66.7.255 scope:site
        inet6 fe80::f414:faa2:2581:f9d4 prefixlen:64 scope:link
        hardware F8:CA:B8:01:CF:F7
        UP MULTICAST mtu:1500 index:2

wlp2s0
        inet 172.16.29.110 netmask:255.255.240.0 broadcast:172.16.31.255 scope:site
        inet6 fe80::401a:61b3:c661:5010 prefixlen:64 scope:link
        hardware DC:53:60:6E:CF:53
        UP MULTICAST mtu:1500 index:3

docker0
        inet 172.17.0.1 netmask:255.255.0.0 broadcast:0.0.0.0 scope:site
        hardware 02:42:72:70:05:CA
        MULTICAST mtu:1500 index:4

vboxnet0
        inet 192.168.2.1 netmask:255.255.255.0 broadcast:192.168.2.255 scope:site
        hardware 0A:00:27:00:00:00
        UP MULTICAST mtu:1500 index:6

2018.08.28 10:50:01 TRACE app[][o.e.i.b.HierarchyCircuitBreakerService] parent circuit breaker with settings [parent,type=PARENT,limit=20919091/19.9mb,overhead=1.0]
2018.08.28 10:50:01 TRACE app[][o.e.i.b.request] creating ChildCircuitBreaker with settings [request,type=MEMORY,limit=17930649/17mb,overhead=1.0]
2018.08.28 10:50:01 TRACE app[][o.e.i.b.fielddata] creating ChildCircuitBreaker with settings [fielddata,type=MEMORY,limit=17930649/17mb,overhead=1.03]
2018.08.28 10:50:01 TRACE app[][o.e.i.b.in_flight_requests] creating ChildCircuitBreaker with settings [in_flight_requests,type=MEMORY,limit=29884416/28.5mb,overhead=1.0]
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Module execution: 55ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] TypeListeners creation: 1ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Scopes creation: 1ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Converters creation: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Binding creation: 5ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Private environment creation: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Injector construction: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Binding initialization: 2ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Binding indexing: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Collecting injection requests: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Binding validation: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Static validation: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Instance member validation: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Provider verification: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Static member injection: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Instance injection: 0ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.i.i.Stopwatch] Preloading singletons: 1ms
2018.08.28 10:50:01 DEBUG app[][o.e.c.t.TransportClientNodesService] node_sampler_interval[5s]
2018.08.28 10:50:01 DEBUG app[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 8
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: false
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent0] Java version: 8
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.theUnsafe: available
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.copyMemory: available
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.Buffer.address: available
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent0] direct buffer constructor: available
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.Bits.unaligned: available, true
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent0] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): available
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent] sun.misc.Unsafe: available
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: false
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: 29884416 bytes
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.CleanerJava6] java.nio.ByteBuffer.cleaner(): available
2018.08.28 10:50:01 DEBUG app[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: false
2018.08.28 10:50:01 DEBUG app[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2018.08.28 10:50:01 DEBUG app[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available
2018.08.28 10:50:02 TRACE app[][i.n.c.n.NioEventLoop] instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@606407d2
2018.08.28 10:50:02 TRACE app[][i.n.c.n.NioEventLoop] instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@7f0506ef
2018.08.28 10:50:02 TRACE app[][i.n.c.n.NioEventLoop] instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@3f9fd44a
2018.08.28 10:50:02 TRACE app[][i.n.c.n.NioEventLoop] instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@4a4d8293
2018.08.28 10:50:02 TRACE app[][i.n.c.n.NioEventLoop] instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@7d507606
2018.08.28 10:50:02 TRACE app[][i.n.c.n.NioEventLoop] instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@2ab5353d
2018.08.28 10:50:02 TRACE app[][i.n.c.n.NioEventLoop] instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@6b4999a5
2018.08.28 10:50:02 TRACE app[][i.n.c.n.NioEventLoop] instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@4db6d745
2018.08.28 10:50:02 DEBUG app[][o.e.c.t.TransportClientNodesService] adding address [{#transport#-1}{Xf1nOcLtR-a3tZvcjcJK9A}{127.0.0.1}{127.0.0.1:9001}]
2018.08.28 10:50:02 DEBUG app[][i.n.c.DefaultChannelId] -Dio.netty.processId: 1723 (auto-detected)
2018.08.28 10:50:02 DEBUG app[][i.netty.util.NetUtil] -Djava.net.preferIPv4Stack: false
2018.08.28 10:50:02 DEBUG app[][i.netty.util.NetUtil] -Djava.net.preferIPv6Addresses: false
2018.08.28 10:50:02 DEBUG app[][i.netty.util.NetUtil] Loopback interface: lo (lo, 127.0.0.1)
2018.08.28 10:50:02 DEBUG app[][i.netty.util.NetUtil] /proc/sys/net/core/somaxconn: 128
2018.08.28 10:50:02 DEBUG app[][i.n.c.DefaultChannelId] -Dio.netty.machineId: dc:53:60:ff:fe:6e:cf:53 (auto-detected)
2018.08.28 10:50:02 DEBUG app[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2018.08.28 10:50:02 DEBUG app[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.maxRecords: 4
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 0
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 0
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2018.08.28 10:50:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2018.08.28 10:50:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2018.08.28 10:50:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 65536
2018.08.28 10:50:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2018.08.28 10:50:02 TRACE app[][o.e.t.n.ESLoggingHandler] [id: 0x8e797d2d] REGISTERED
2018.08.28 10:50:02 TRACE app[][o.e.t.n.ESLoggingHandler] [id: 0x8e797d2d] CONNECT: /127.0.0.1:9001
2018.08.28 10:50:02 TRACE app[][o.e.t.n.ESLoggingHandler] [id: 0x8e797d2d] CLOSE
2018.08.28 10:50:02 TRACE app[][o.e.t.n.ESLoggingHandler] [id: 0x8e797d2d] UNREGISTERED
2018.08.28 10:50:02 DEBUG app[][o.e.c.t.TransportClientNodesService] failed to connect to node [{#transport#-1}{Xf1nOcLtR-a3tZvcjcJK9A}{127.0.0.1}{127.0.0.1:9001}], ignoring...
org.elasticsearch.transport.ConnectTransportException: [][127.0.0.1:9001] connect_timeout[30s]
	at org.elasticsearch.transport.netty4.Netty4Transport.connectToChannels(Netty4Transport.java:362)
	at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:570)
	at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:117)
	at org.elasticsearch.transport.TransportService.openConnection(TransportService.java:351)
	at org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:407)
	at org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:357)
	at org.elasticsearch.client.transport.TransportClientNodesService.addTransportAddresses(TransportClientNodesService.java:198)
	at org.elasticsearch.client.transport.TransportClient.addTransportAddress(TransportClient.java:319)
	at org.sonar.application.es.EsConnectorImpl$MinimalTransportClient.<init>(EsConnectorImpl.java:109)
	at org.sonar.application.es.EsConnectorImpl.buildTransportClient(EsConnectorImpl.java:90)
	at org.sonar.application.es.EsConnectorImpl.getTransportClient(EsConnectorImpl.java:75)
	at org.sonar.application.es.EsConnectorImpl.getClusterHealthStatus(EsConnectorImpl.java:62)
	at org.sonar.application.process.EsProcessMonitor.checkStatus(EsProcessMonitor.java:90)
	at org.sonar.application.process.EsProcessMonitor.checkOperational(EsProcessMonitor.java:75)
	at org.sonar.application.process.EsProcessMonitor.isOperational(EsProcessMonitor.java:60)
	at org.sonar.application.process.SQProcess.refreshState(SQProcess.java:161)
	at org.sonar.application.process.SQProcess$EventWatcher.run(SQProcess.java:220)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:9001
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:352)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
	... 10 common frames omitted
2018.08.28 10:50:02 DEBUG app[][o.s.a.e.EsConnectorImpl] Connected to Elasticsearch node: [127.0.0.1:9001]
2018.08.28 10:50:06 TRACE app[][o.e.t.n.ESLoggingHandler] [id: 0xd6a30b44] REGISTERED
2018.08.28 10:50:06 TRACE app[][o.e.t.n.ESLoggingHandler] [id: 0xd6a30b44] CONNECT: /127.0.0.1:9001
2018.08.28 10:50:06 TRACE app[][o.e.t.n.ESLoggingHandler] [id: 0xd6a30b44] CLOSE
2018.08.28 10:50:06 TRACE app[][o.e.t.n.ESLoggingHandler] [id: 0xd6a30b44] UNREGISTERED
2018.08.28 10:50:06 DEBUG app[][o.e.c.t.TransportClientNodesService] failed to connect to node [{#transport#-1}{Xf1nOcLtR-a3tZvcjcJK9A}{127.0.0.1}{127.0.0.1:9001}], ignoring...
org.elasticsearch.transport.ConnectTransportException: [][127.0.0.1:9001] connect_timeout[30s]
	at org.elasticsearch.transport.netty4.Netty4Transport.connectToChannels(Netty4Transport.java:362)
	at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:570)
	at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:117)
[...]

Hi,

You should double-check that the process has 777 (rwx) on the logs directory (yes, I know it’s running as you & you exploded the zip, so it should. I’m just crossing my t’s) and that it has access to all the ports it needs (documented and configurable in $SONARQUBE_HOME/conf/sonar.properties). Offhand, it looks like Elasticsearch isn’t cranking up properly. (It runs on port 9001.)
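For reference, the port defaults ship commented out in that file and should look something like this; uncomment and change them if something else already owns those ports:

#sonar.web.port=9000
#sonar.search.port=9001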

Ann

The logs directory is 755 right now. I’ll try setting it to 777, but since the whole thing runs as my user, it should be able to write there fine. I think the problem is that the other processes just aren’t getting started.

I can also bind port 9001 just fine as myself and send data back and forth via netcat, so I don’t think that’s the issue, either.

$ nc -lp 9001
foo
bar

Does anything in SonarQube’s startup rely on connecting out to the public internet? I’m at work now, and it’s possible something’s getting blocked.

Same behavior after chmod 777 logs. It gets to the list of network interfaces and then hangs. Is it possible something about my interface configuration is confusing it? I have both Docker and VirtualBox installed, so they have their own host-side interfaces set up, in addition to wireless, wired, and localhost, as you can see in the log output.

It must be something network-related. I tried again at home, and it came right up.
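
If the pile of interfaces at work really is what trips up Elasticsearch, the one thing I still mean to try there is pinning the search process to loopback in conf/sonar.properties. Just a guess on my part, but the property is documented in that file:

sonar.search.host=127.0.0.1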

Quick thought on this thread: for initial troubleshooting I would actually advise against enabling TRACE logging. It turns on highly verbose output, which can cause more confusion (especially during the startup sequence) than it helps in nailing down a precise error.

Keeping the logs at INFO or DEBUG, and simply reviewing all 3 log files, should help narrow down the one ERROR entry that is fatal.
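For example, something along these lines usually surfaces the fatal entry quickly (assuming the default logs location):

$ grep -n ERROR $SONARQUBE_HOME/logs/*.log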

At the same time, if there has indeed been a run under the wrong user (e.g. privileged root), it’s also worth starting over from a fresh extraction, to avoid any collateral damage from incorrectly set permissions.

I am using 7.5 with PostgreSQL or MySQL, and the same problem occurs. The logs directory permissions are correct, but the problem is not resolved:

drwxrwxrwx 2 sonarqube sonarqube 4.0K Jan 4 08:14 logs