Elasticsearch connection refused: process exited with exit value [es]: 1

I'm deploying SonarQube via its Helm chart, using the Docker image tag 7.6-community.
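For context, the deploy is roughly a Helm 2 install along these lines; the chart reference and flags are illustrative assumptions on my part, but the release name sonar matches the pod names below:

$ helm install --name sonar stable/sonarqube --set image.tag=7.6-community

Once the pod starts, I see the following in the debug logs: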

$ kubectl logs sonar-sonarqube-6ff4554ff4-sm8gs
2019.02.09 18:06:00 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2019.02.09 18:06:00 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:35241
2019.02.09 18:06:00 INFO  app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch -Epath.conf=/opt/sonarqube/temp/conf/es
2019.02.09 18:06:00 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2019.02.09 18:06:01 INFO  app[][o.e.p.PluginsService] no modules loaded
2019.02.09 18:06:01 INFO  app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [32], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [listener], size [8], queue size [unbounded]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [index], size [16], queue size [200]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [8], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [search], size [25], queue size [1k]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [32], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [get], size [16], queue size [1k]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [bulk], size [16], queue size [200]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.c.n.IfConfig] configuration:

lo
        inet 127.0.0.1 netmask:255.0.0.0 scope:host
        UP LOOPBACK mtu:65536 index:1

eth0
        inet 172.30.93.146 netmask:255.255.255.255 broadcast:0.0.0.0 scope:site
        hardware 0A:8A:8C:1F:C2:3B
        UP MULTICAST mtu:1480 index:4

2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Module execution: 70ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] TypeListeners creation: 2ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Scopes creation: 2ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Converters creation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding creation: 5ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Private environment creation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Injector construction: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding initialization: 4ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding indexing: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Collecting injection requests: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding validation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Static validation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Instance member validation: 1ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Provider verification: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Static member injection: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Instance injection: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Preloading singletons: 1ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.t.TransportClientNodesService] node_sampler_interval[5s]
2019.02.09 18:06:02 DEBUG app[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 32
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: false
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] Java version: 8
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.theUnsafe: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.copyMemory: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.Buffer.address: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] direct buffer constructor: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.Bits.unaligned: available, true
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] sun.misc.Unsafe: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: false
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: 15010365440 bytes
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.CleanerJava6] java.nio.ByteBuffer.cleaner(): available
2019.02.09 18:06:02 DEBUG app[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: false
2019.02.09 18:06:02 DEBUG app[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available
2019.02.09 18:06:02 DEBUG app[][o.e.c.t.TransportClientNodesService] adding address [{#transport#-1}{KPBTCvJaSZKWXud28q259Q}{127.0.0.1}{127.0.0.1:35241}]
2019.02.09 18:06:02 DEBUG app[][i.n.c.DefaultChannelId] -Dio.netty.processId: 9 (auto-detected)
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] -Djava.net.preferIPv4Stack: false
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] -Djava.net.preferIPv6Addresses: false
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] Loopback interface: lo (lo, 127.0.0.1)
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] /proc/sys/net/core/somaxconn: 128
2019.02.09 18:06:02 DEBUG app[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 0a:8a:8c:ff:fe:1f:c2:3b (auto-detected)
2019.02.09 18:06:02 DEBUG app[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2019.02.09 18:06:02 DEBUG app[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.maxRecords: 4
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 32
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 32
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2019.02.09 18:06:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2019.02.09 18:06:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 65536
2019.02.09 18:06:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2019.02.09 18:06:02 DEBUG app[][o.e.c.t.TransportClientNodesService] failed to connect to node [{#transport#-1}{KPBTCvJaSZKWXud28q259Q}{127.0.0.1}{127.0.0.1:35241}], ignoring...
org.elasticsearch.transport.ConnectTransportException: [][127.0.0.1:35241] connect_timeout[30s]
	at org.elasticsearch.transport.netty4.Netty4Transport.connectToChannels(Netty4Transport.java:362)
	at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:570)
	at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:117)
	at org.elasticsearch.transport.TransportService.openConnection(TransportService.java:351)
	at org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:407)
	at org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:357)
	at org.elasticsearch.client.transport.TransportClientNodesService.addTransportAddresses(TransportClientNodesService.java:198)
	at org.elasticsearch.client.transport.TransportClient.addTransportAddress(TransportClient.java:319)
	at org.sonar.application.es.EsConnectorImpl$MinimalTransportClient.<init>(EsConnectorImpl.java:109)
	at org.sonar.application.es.EsConnectorImpl.buildTransportClient(EsConnectorImpl.java:90)
	at org.sonar.application.es.EsConnectorImpl.getTransportClient(EsConnectorImpl.java:75)
	at org.sonar.application.es.EsConnectorImpl.getClusterHealthStatus(EsConnectorImpl.java:62)
	at org.sonar.application.process.EsProcessMonitor.checkStatus(EsProcessMonitor.java:90)
	at org.sonar.application.process.EsProcessMonitor.checkOperational(EsProcessMonitor.java:75)
	at org.sonar.application.process.EsProcessMonitor.isOperational(EsProcessMonitor.java:60)
	at org.sonar.application.process.SQProcess.refreshState(SQProcess.java:161)
	at org.sonar.application.process.SQProcess$EventWatcher.run(SQProcess.java:220)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:35241
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:352)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
	... 10 common frames omitted
2019.02.09 18:06:02 DEBUG app[][o.s.a.e.EsConnectorImpl] Connected to Elasticsearch node: [127.0.0.1:35241]
2019.02.09 18:06:02 WARN  app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 1
2019.02.09 18:06:02 INFO  app[][o.s.a.SchedulerImpl] Process [es] is stopped
2019.02.09 18:06:02 INFO  app[][o.s.a.SchedulerImpl] SonarQube is stopped

I’ve tried various ports for Elasticsearch and always see the same thing. I’ve even set sonar.search.port=0 in sonar.properties so that a free port is chosen (see the snippet after the shell output below), but the issue persists. Most of what I find about these error messages relates to not running as root, but I’m using the Helm chart and Docker image, and from what I can tell everything is running as the sonarqube user:

$ kubectl exec -it sonar-sonarqube-6ff4554ff4-4f7p2 bash
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ whoami
sonarqube
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ pwd
/opt/sonarqube
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ ls -lah
total 52K
drwxr-xr-x 1 sonarqube sonarqube  4.0K Jan 22 09:17 .
drwxr-xr-x 1 root      root       4.0K Feb  6 17:41 ..
-rw-r--r-- 1 sonarqube sonarqube  7.5K Jan 22 09:11 COPYING
drwxr-xr-x 1 sonarqube sonarqube  4.0K Feb  6 17:41 bin
drwxrwsrwx 3 root      sonarqube  4.0K Feb  9 17:50 conf
drwxr-xr-x 2 nobody    4294967294 4.0K Feb  9 17:22 data
drwxr-xr-x 7 sonarqube sonarqube  4.0K Jan 22 09:12 elasticsearch
drwxr-xr-x 1 sonarqube sonarqube  4.0K Jan 22 09:12 extensions
drwxr-xr-x 6 sonarqube sonarqube  4.0K Jan 22 09:17 lib
drwxr-xr-x 2 sonarqube sonarqube  4.0K Jan 22 09:12 logs
drwxr-xr-x 2 sonarqube sonarqube  4.0K Jan 22 09:12 temp
drwxr-xr-x 9 sonarqube sonarqube  4.0K Jan 22 09:18 web
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ ps aux
USER        PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
sonarqu+      1  0.1  0.0   4292   768 ?        Ss   17:51   0:00 /bin/sh /usr/local/copy_plugins.sh
sonarqu+      9  0.0  0.0  19972  3848 pts/0    Ss   17:51   0:00 bash
sonarqu+     18  214  0.3 21173684 226328 ?     Sl   17:51   0:04 java -jar lib/sonar-application-7.6.jar -Dsonar.log.console=true -Dsonar.jdbc.username=sonarUser -Dsonar.jdbc.password=sonarPass -Dsonar.jdbc.url=jdbc:postgresql://sonar-po
sonarqu+     53  136  1.7 6216504 1150692 ?     Sl   17:51   0:02 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server
sonarqu+    150  0.0  0.0  38388  3188 pts/0    R+   17:51   0:00 ps aux
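
To be explicit about the port setting mentioned above, this is the line I mean in conf/sonar.properties (0 tells SonarQube to pick any free port):

sonar.search.port=0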

I’ve double-checked the resource requests and limits, which are as follows:

limits:
  cpu: 500m
  memory: 2Gi
requests:
  cpu: 500m
  memory: 2Gi

These are well within the capacity of my Kubernetes cluster. I really only see this problem when persistence is enabled; I’ve been able to get SonarQube running before when persistence is set to false. I’m at a loss here, and I could really use some help figuring out what is going on with Elasticsearch.
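
For context, the persistence block in my values.yaml is along these lines (the storageClass and size shown here are placeholders, not my actual values):

persistence:
  enabled: true
  storageClass: ""   # placeholder: cluster default
  size: 10Gi         # placeholder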

I’ve also opened this issue against the Helm Chart for SonarQube.

Hi.

I don’t know anything about Helm, but… can you list the plugins in your SQ installation (/opt/sonarqube/extensions/plugins)?

I’ve seen this exact same error when trying to start SonarQube with an incompatible plugin. Considering that SQ 7.6 broke some plugins, maybe you have an incompatible plugin in your Docker volume (I’m not sure if that is the correct term, but I think you’ll understand). :wink:

If I am right, running with persistence set to false doesn’t cause a problem because you would be using the embedded plugins (which are compatible, of course).
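
If you want to check and, if needed, clean this up, something along the lines of the commands below should work (a rough sketch reusing the kubectl exec approach from your post; <pod> is whatever kubectl get pods shows, and removing jars from a persisted volume is at your own risk):

$ kubectl exec -it <pod> -- ls -lah /opt/sonarqube/extensions/plugins
$ kubectl exec -it <pod> -- rm /opt/sonarqube/extensions/plugins/<incompatible-plugin>.jar
$ kubectl delete pod <pod>   # restart so the process comes back up without the bad plugin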

I was using the following list of plugins:

# plugins:
#   install:
#     - "https://github.com/SonarSource/sonar-ldap/releases/download/2.2-RC3/sonar-ldap-plugin-2.2.0.601.jar"
#     - "https://github.com/SonarSource/sonar-python/releases/download/1.8-RC1/sonar-python-plugin-1.8.0.1480.jar"
#     - "https://github.com/SonarSource/SonarTS/releases/download/v1.9.0/sonar-typescript-plugin-1.9.0.3766.jar"
#     - "https://github.com/SonarSource/SonarJS/releases/download/5.0.0.6962/sonar-javascript-plugin-5.0.0.6962.jar"
#     - "https://github.com/SonarSource/sonar-xml/releases/download/1.4.3.1017/sonar-xml-plugin-1.4.3.1017.jar"
#     - "https://github.com/kogitant/sonar-slack-notifier-plugin/releases/download/2.1.2/cks-slack-notifier-2.1.2.jar"

For what it’s worth, I switched to 7.4-community, which is the version packaged with the chart, and removed all of the plugin installations from the Helm chart’s values.yaml. I re-deployed the chart and still see the same behavior. Listing the contents of the plugin directory shows these are in there:

Contents of directory '/opt/sonarqube/extensions/plugins'
total 62M
drwxr-xr-x 1 sonarqube sonarqube  4.0K Feb 10 19:30 .
drwxr-xr-x 1 sonarqube sonarqube  4.0K Oct 29 09:19 ..
-rw-r--r-- 1 sonarqube sonarqube   224 Oct 29 09:19 README.txt
-rw-r--r-- 1 sonarqube sonarqube  1.7M Oct 29 09:19 sonar-csharp-plugin-7.7.0.7192.jar
-rw-r--r-- 1 sonarqube sonarqube  5.8M Oct 29 09:19 sonar-css-plugin-1.0.2.611.jar
-rw-r--r-- 1 sonarqube sonarqube  1.5M Oct 29 09:19 sonar-flex-plugin-2.4.0.1222.jar
-rw-r--r-- 1 sonarqube sonarqube  3.8M Oct 29 09:19 sonar-go-plugin-1.1.0.1612.jar
-rw-r--r-- 1 sonarqube sonarqube   15K Oct 29 09:19 sonar-jacoco-plugin-1.0.1.143.jar
-rw-r--r-- 1 sonarqube sonarqube  8.8M Oct 29 09:19 sonar-java-plugin-5.8.0.15699.jar
-rw-r--r-- 1 sonarqube sonarqube  6.3M Oct 29 09:19 sonar-javascript-plugin-5.0.0.6962.jar
-rw-r--r-- 1 sonarqube sonarqube  5.4M Oct 29 09:19 sonar-kotlin-plugin-1.2.1.2009.jar
-rw-r--r-- 1 sonarqube sonarqube  294K Oct 29 09:19 sonar-ldap-plugin-2.2.0.608.jar
-rw-r--r-- 1 sonarqube sonarqube  2.6M Oct 29 09:19 sonar-php-plugin-2.14.0.3569.jar
-rw-r--r-- 1 sonarqube sonarqube  1.5M Oct 29 09:19 sonar-python-plugin-1.10.0.2131.jar
-rw-r--r-- 1 sonarqube sonarqube  9.7M Oct 29 09:19 sonar-ruby-plugin-1.2.1.2009.jar
-rw-r--r-- 1 sonarqube sonarqube  2.6M Oct 29 09:19 sonar-scm-git-plugin-1.6.0.1349.jar
-rw-r--r-- 1 sonarqube sonarqube  6.9M Oct 29 09:19 sonar-scm-svn-plugin-1.9.0.1295.jar
-rw-r--r-- 1 sonarqube sonarqube  2.1M Oct 29 09:19 sonar-typescript-plugin-1.8.0.3332.jar
-rw-r--r-- 1 sonarqube sonarqube  1.2M Oct 29 09:19 sonar-vbnet-plugin-7.7.0.7192.jar
-rw-r--r-- 1 sonarqube sonarqube  1.8M Oct 29 09:19 sonar-xml-plugin-1.5.1.1452.jar
drwxr-xr-x 2 nobody    4294967294 4.0K Feb 10 19:21 tmp

Hi @eddie-at-ibm, you can set deployment.spec.template.spec.securityContext.runAsUser: 999 to fix it.

like this:

helm values.yaml

# Set security context for sonarqube pod
securityContext:
  fsGroup: 999
  runAsUser: 999
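
Then re-deploy, e.g. (release and chart names assumed from the pod names above):

$ helm upgrade sonar stable/sonarqube -f values.yaml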