I’m running the SonarQube Docker image, tag 7.6-community, deployed with the SonarQube Helm chart.
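For context, this is roughly how I’m installing the chart (Helm 2 syntax; the release name and value overrides below are approximations of my setup, not an exact record):
$ helm install stable/sonarqube --name sonar --set image.tag=7.6-community --set persistence.enabled=true
When the pod comes up, I see the following in the debug logs: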
$ kubectl logs sonar-sonarqube-6ff4554ff4-sm8gs
2019.02.09 18:06:00 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2019.02.09 18:06:00 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:35241
2019.02.09 18:06:00 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch -Epath.conf=/opt/sonarqube/temp/conf/es
2019.02.09 18:06:00 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2019.02.09 18:06:01 INFO app[][o.e.p.PluginsService] no modules loaded
2019.02.09 18:06:01 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [32], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [listener], size [8], queue size [unbounded]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [index], size [16], queue size [200]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [8], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [search], size [25], queue size [1k]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [32], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [get], size [16], queue size [1k]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [bulk], size [16], queue size [200]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.c.n.IfConfig] configuration:
lo
inet 127.0.0.1 netmask:255.0.0.0 scope:host
UP LOOPBACK mtu:65536 index:1
eth0
inet 172.30.93.146 netmask:255.255.255.255 broadcast:0.0.0.0 scope:site
hardware 0A:8A:8C:1F:C2:3B
UP MULTICAST mtu:1480 index:4
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Module execution: 70ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] TypeListeners creation: 2ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Scopes creation: 2ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Converters creation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding creation: 5ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Private environment creation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Injector construction: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding initialization: 4ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding indexing: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Collecting injection requests: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding validation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Static validation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Instance member validation: 1ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Provider verification: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Static member injection: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Instance injection: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Preloading singletons: 1ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.t.TransportClientNodesService] node_sampler_interval[5s]
2019.02.09 18:06:02 DEBUG app[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 32
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: false
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] Java version: 8
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.theUnsafe: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.copyMemory: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.Buffer.address: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] direct buffer constructor: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.Bits.unaligned: available, true
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] sun.misc.Unsafe: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: false
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: 15010365440 bytes
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.CleanerJava6] java.nio.ByteBuffer.cleaner(): available
2019.02.09 18:06:02 DEBUG app[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: false
2019.02.09 18:06:02 DEBUG app[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available
2019.02.09 18:06:02 DEBUG app[][o.e.c.t.TransportClientNodesService] adding address [{#transport#-1}{KPBTCvJaSZKWXud28q259Q}{127.0.0.1}{127.0.0.1:35241}]
2019.02.09 18:06:02 DEBUG app[][i.n.c.DefaultChannelId] -Dio.netty.processId: 9 (auto-detected)
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] -Djava.net.preferIPv4Stack: false
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] -Djava.net.preferIPv6Addresses: false
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] Loopback interface: lo (lo, 127.0.0.1)
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] /proc/sys/net/core/somaxconn: 128
2019.02.09 18:06:02 DEBUG app[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 0a:8a:8c:ff:fe:1f:c2:3b (auto-detected)
2019.02.09 18:06:02 DEBUG app[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2019.02.09 18:06:02 DEBUG app[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.maxRecords: 4
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 32
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 32
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2019.02.09 18:06:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2019.02.09 18:06:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 65536
2019.02.09 18:06:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2019.02.09 18:06:02 DEBUG app[][o.e.c.t.TransportClientNodesService] failed to connect to node [{#transport#-1}{KPBTCvJaSZKWXud28q259Q}{127.0.0.1}{127.0.0.1:35241}], ignoring...
org.elasticsearch.transport.ConnectTransportException: [][127.0.0.1:35241] connect_timeout[30s]
at org.elasticsearch.transport.netty4.Netty4Transport.connectToChannels(Netty4Transport.java:362)
at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:570)
at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:117)
at org.elasticsearch.transport.TransportService.openConnection(TransportService.java:351)
at org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:407)
at org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:357)
at org.elasticsearch.client.transport.TransportClientNodesService.addTransportAddresses(TransportClientNodesService.java:198)
at org.elasticsearch.client.transport.TransportClient.addTransportAddress(TransportClient.java:319)
at org.sonar.application.es.EsConnectorImpl$MinimalTransportClient.<init>(EsConnectorImpl.java:109)
at org.sonar.application.es.EsConnectorImpl.buildTransportClient(EsConnectorImpl.java:90)
at org.sonar.application.es.EsConnectorImpl.getTransportClient(EsConnectorImpl.java:75)
at org.sonar.application.es.EsConnectorImpl.getClusterHealthStatus(EsConnectorImpl.java:62)
at org.sonar.application.process.EsProcessMonitor.checkStatus(EsProcessMonitor.java:90)
at org.sonar.application.process.EsProcessMonitor.checkOperational(EsProcessMonitor.java:75)
at org.sonar.application.process.EsProcessMonitor.isOperational(EsProcessMonitor.java:60)
at org.sonar.application.process.SQProcess.refreshState(SQProcess.java:161)
at org.sonar.application.process.SQProcess$EventWatcher.run(SQProcess.java:220)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:35241
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:352)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
... 10 common frames omitted
2019.02.09 18:06:02 DEBUG app[][o.s.a.e.EsConnectorImpl] Connected to Elasticsearch node: [127.0.0.1:35241]
2019.02.09 18:06:02 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 1
2019.02.09 18:06:02 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2019.02.09 18:06:02 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
I’ve tried various ports for Elasticsearch and always see the same thing. I’ve even set sonar.search.port=0 in sonar.properties so that SonarQube picks a free port on its own (snippet below), but I still see this issue.
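For reference, this is the only search-related property I’ve overridden (an illustrative excerpt of the sonar.properties the chart ends up mounting):
# conf/sonar.properties (excerpt)
sonar.search.port=0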
I keep seeing advice about not running Elasticsearch as root in relation to these error messages, but I’m using the official Helm chart and Docker image, and as far as I can tell everything is running as the sonarqube user:
$ kubectl exec -it sonar-sonarqube-6ff4554ff4-4f7p2 bash
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ whoami
sonarqube
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ pwd
/opt/sonarqube
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ ls -lah
total 52K
drwxr-xr-x 1 sonarqube sonarqube 4.0K Jan 22 09:17 .
drwxr-xr-x 1 root root 4.0K Feb 6 17:41 ..
-rw-r--r-- 1 sonarqube sonarqube 7.5K Jan 22 09:11 COPYING
drwxr-xr-x 1 sonarqube sonarqube 4.0K Feb 6 17:41 bin
drwxrwsrwx 3 root sonarqube 4.0K Feb 9 17:50 conf
drwxr-xr-x 2 nobody 4294967294 4.0K Feb 9 17:22 data
drwxr-xr-x 7 sonarqube sonarqube 4.0K Jan 22 09:12 elasticsearch
drwxr-xr-x 1 sonarqube sonarqube 4.0K Jan 22 09:12 extensions
drwxr-xr-x 6 sonarqube sonarqube 4.0K Jan 22 09:17 lib
drwxr-xr-x 2 sonarqube sonarqube 4.0K Jan 22 09:12 logs
drwxr-xr-x 2 sonarqube sonarqube 4.0K Jan 22 09:12 temp
drwxr-xr-x 9 sonarqube sonarqube 4.0K Jan 22 09:18 web
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
sonarqu+ 1 0.1 0.0 4292 768 ? Ss 17:51 0:00 /bin/sh /usr/local/copy_plugins.sh
sonarqu+ 9 0.0 0.0 19972 3848 pts/0 Ss 17:51 0:00 bash
sonarqu+ 18 214 0.3 21173684 226328 ? Sl 17:51 0:04 java -jar lib/sonar-application-7.6.jar -Dsonar.log.console=true -Dsonar.jdbc.username=sonarUser -Dsonar.jdbc.password=sonarPass -Dsonar.jdbc.url=jdbc:postgresql://sonar-po
sonarqu+ 53 136 1.7 6216504 1150692 ? Sl 17:51 0:02 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server
sonarqu+ 150 0.0 0.0 38388 3188 pts/0 R+ 17:51 0:00 ps aux
I’ve double-checked the pod resources, which are the following:
limits:
  cpu: 500m
  memory: 2Gi
requests:
  cpu: 500m
  memory: 2Gi
These are all within the limits of my Kubernetes cluster. I only see this problem when persistence is enabled (values sketched below); with persistence set to false I’ve been able to get SonarQube running, but I’m at a loss here.
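For completeness, the persistence section of my values looks roughly like this (value names per the chart; my actual storage class and size are omitted):
persistence:
  enabled: true
  # storageClass and size set per my cluster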
I could really use some help figuring out what is going on with Elasticsearch. I’ve also opened this issue against the Helm chart for SonarQube.