Container restarts after 2min

Hello, I have a similar problem with the startup probe. My container runs for about two minutes and then restarts, with no errors found in the logs. SonarQube is up and serves requests fine during those two minutes.

  Normal   Pulled     87s                kubelet, k8s-worker-green-toolsuat-5  Successfully pulled image "sonarqube:8.9.8-community" in 625.631223ms
  Normal   Created    87s                kubelet, k8s-worker-green-toolsuat-5  Created container sonarqube
  Normal   Started    86s                kubelet, k8s-worker-green-toolsuat-5  Started container sonarqube
  Warning  Unhealthy  43s (x2 over 53s)  kubelet, k8s-worker-green-toolsuat-5  Startup probe failed: Get "http://10.200.6.247:9000/api/system/status": dial tcp 10.200.6.247:9000: connect: connection refused
  Warning  Unhealthy  32s                kubelet, k8s-worker-green-toolsuat-5  Startup probe failed: Get "http://10.200.6.247:9000/api/system/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  18s                kubelet, k8s-worker-green-toolsuat-5  Readiness probe failed:

Here is my application log:

2022.06.30 02:08:20 INFO  web[][o.s.s.q.ProjectsInWarningDaemon] Counting number of projects in warning is not started as there are no projects in this situation.
2022.06.30 02:08:20 INFO  web[][o.s.s.p.p.PlatformLevelStartup] Running Community Edition
2022.06.30 02:08:20 INFO  web[][o.s.s.p.Platform] WebServer is operational
2022.06.30 02:08:20 INFO  app[][o.s.a.SchedulerImpl] Process[web] is up
2022.06.30 02:08:20 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[[key='ce', ipcIndex=3, logFilenamePrefix=ce]] from [/opt/sonarqube]: /opt/java/openjdk/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=
2022.06.30 02:08:23 INFO  ce[][o.s.p.ProcessEntryPoint] Starting ce
2022.06.30 02:08:23 INFO  ce[][o.s.ce.app.CeServer] Compute Engine starting up...
2022.06.30 02:08:24 INFO  ce[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [http://localhost:9001]
2022.06.30 02:08:26 INFO  ce[][o.sonar.db.Database] Create JDBC data source for jdbc:oracle:thin:@cmoordbsvrp1301:1521/SONARDEV
2022.06.30 02:08:32 INFO  ce[][o.s.s.p.ServerFileSystemImpl] SonarQube home: /opt/sonarqube
2022.06.30 02:08:34 INFO  ce[][o.s.c.c.CePluginRepository] Load plugins
2022.06.30 02:08:36 INFO  ce[][o.s.c.c.ComputeEngineContainerImpl] Running Community edition
2022.06.30 02:08:37 INFO  ce[][o.s.ce.app.CeServer] Compute Engine is operational
2022.06.30 02:08:37 INFO  app[][o.s.a.SchedulerImpl] Process[ce] is up
2022.06.30 02:08:37 INFO  app[][o.s.a.SchedulerImpl] SonarQube is up

I would appreciate any insight on how to troubleshoot this.

Hi,

Welcome to the community!

I’ve moved your post to a new topic because… it’s a different topic.

You’ve provided a snippet of your application log. Have you checked the rest of your server logs?

I would expect to see something about the shutdown (which is missing from your application log snippet…) in them. Are you sure this shutdown isn’t being triggered externally? Based on the logs you have provided, everything looks good on the SonarQube side.
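If something outside SonarQube is stopping the container, Kubernetes usually records how the previous instance terminated. A couple of commands along these lines could help (the pod name and namespace are placeholders):

```shell
# Show the pod's events and the exit code/reason of the last terminated
# container instance ("Last State" section).
kubectl describe pod <sonarqube-pod> -n <namespace>

# Logs from the previous (restarted) container instance, if any survive.
kubectl logs <sonarqube-pod> -n <namespace> --previous --tail=50
```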

Ann

Hi Ann,

Below is some additional information from the logs for one of the timeframes.

sonar.log

2022.06.30 14:09:36 INFO  app[][o.s.a.SchedulerImpl] SonarQube is up
2022.06.30 14:11:41 INFO  app[][o.s.a.SchedulerImpl] Stopping SonarQube
2022.06.30 14:11:42 INFO  app[][o.s.a.SchedulerImpl] Process[ce] is stopped
2022.06.30 14:11:44 INFO  app[][o.s.a.SchedulerImpl] Process[web] is stopped
2022.06.30 14:11:45 INFO  app[][o.s.a.SchedulerImpl] Process[es] is stopped
2022.06.30 14:11:45 WARN  app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 143
2022.06.30 14:11:45 INFO  app[][o.s.a.SchedulerImpl] SonarQube is stopped
2022.06.30 14:11:50 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2022.06.30 14:11:50 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:32883]
2022.06.30 14:11:50 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2022.06.30 14:11:50 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2022.06.30 14:12:27 INFO  app[][o.s.a.SchedulerImpl] Process[es] is up
2022.06.30 14:12:27 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/opt/sonarqube]: /opt/java/openjdk/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/sonar-application-8.9.8.54436.jar:/opt/sonarqube/extensions/jdbc-driver/oracle/ojdbc8.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process2585512473554554590properties
2022.06.30 14:13:19 INFO  app[][o.s.a.SchedulerImpl] Process[web] is up
2022.06.30 14:13:19 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[[key='ce', ipcIndex=3, logFilenamePrefix=ce]] from [/opt/sonarqube]: /opt/java/openjdk/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/sonar-application-8.9.8.54436.jar:/opt/sonarqube/extensions/jdbc-driver/oracle/ojdbc8.jar org.sonar.ce.app.CeServer /opt/sonarqube/temp/sq-process10487739619929053066properties
2022.06.30 14:13:36 INFO  app[][o.s.a.SchedulerImpl] Process[ce] is up
2022.06.30 14:13:36 INFO  app[][o.s.a.SchedulerImpl] SonarQube is up
2022.06.30 14:15:41 INFO  app[][o.s.a.SchedulerImpl] Stopping SonarQube
2022.06.30 14:15:43 INFO  app[][o.s.a.SchedulerImpl] Process[ce] is stopped
2022.06.30 14:15:45 INFO  app[][o.s.a.SchedulerImpl] Process[web] is stopped
2022.06.30 14:15:45 INFO  app[][o.s.a.SchedulerImpl] Process[es] is stopped
2022.06.30 14:15:45 INFO  app[][o.s.a.SchedulerImpl] SonarQube is stopped
2022.06.30 14:15:45 WARN  app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 143

es.log

2022.06.30 14:12:27 INFO  es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[components][4], [components][1], [components][3]]]).
2022.06.30 14:15:45 INFO  es[][o.e.n.Node] stopping ...
2022.06.30 14:15:45 INFO  es[][o.e.n.Node] stopped
2022.06.30 14:15:45 INFO  es[][o.e.n.Node] closing ...
2022.06.30 14:15:45 INFO  es[][o.e.n.Node] closed
2022.06.30 14:21:03 INFO  es[][o.e.n.Node] version[7.16.2], pid[38], build[default/tar/2b937c44140b6559905130a8650c64dbd0879cfb/2021-12-18T19:42:46.604893745Z], OS[Linux/4.15.0-142-generic/amd64], JVM[Eclipse Adoptium/OpenJDK 64-Bit Server VM/11.0.13/11.0.13+8]

ce.log

2022.06.30 14:13:36 INFO  ce[][o.s.ce.app.CeServer] Compute Engine is operational
2022.06.30 14:15:41 INFO  ce[][o.s.p.ProcessEntryPoint] Gracefully stopping process
2022.06.30 14:15:41 INFO  ce[][o.s.ce.app.CeServer] Compute Engine is stopping...
2022.06.30 14:15:41 INFO  ce[][o.s.c.t.CeProcessingSchedulerImpl] Gracefully stopping workers...
2022.06.30 14:15:41 INFO  ce[][o.s.ce.app.CeServer] Compute Engine is stopped
2022.06.30 14:22:24 INFO  ce[][o.s.p.ProcessEntryPoint] Starting ce

access.log

10.200.6.247 - - [30/Jun/2022:14:11:19 +0000] "GET /api/system/status HTTP/1.1" 200 76 "-" "Wget" "AYG08d7FJjtsp0bnAAAI"
10.200.6.1 - - [30/Jun/2022:14:11:41 +0000] "GET /api/system/liveness HTTP/1.1" 401 - "-" "kube-probe/1.20" "AYG08d7FJjtsp0bnAAAJ"
10.200.6.1 - - [30/Jun/2022:14:12:47 +0000] "GET /api/system/status HTTP/1.1" 200 82 "-" "kube-probe/1.20" "-"
10.200.6.1 - - [30/Jun/2022:14:12:47 +0000] "GET /api/system/status HTTP/1.1" 200 82 "-" "kube-probe/1.20" "-"
10.200.6.1 - - [30/Jun/2022:14:12:54 +0000] "GET /api/system/status HTTP/1.1" 200 82 "-" "kube-probe/1.20" "-"
10.200.6.1 - - [30/Jun/2022:14:13:11 +0000] "GET /api/system/liveness HTTP/1.1" 404 57 "-" "kube-probe/1.20" "-"
10.200.6.247 - - [30/Jun/2022:14:13:19 +0000] "GET /api/system/status HTTP/1.1" 200 76 "-" "Wget" "AYG09YiESBIBExrJAAAA"
10.200.6.1 - - [30/Jun/2022:14:13:41 +0000] "GET /api/system/liveness HTTP/1.1" 401 - "-" "kube-probe/1.20" "AYG09YiESBIBExrJAAAB"
10.200.6.247 - - [30/Jun/2022:14:13:49 +0000] "GET /api/system/status HTTP/1.1" 200 76 "-" "Wget" "AYG09YiESBIBExrJAAAC"
10.200.6.1 - - [30/Jun/2022:14:14:11 +0000] "GET /api/system/liveness HTTP/1.1" 401 - "-" "kube-probe/1.20" "AYG09YiESBIBExrJAAAD"
10.200.6.247 - - [30/Jun/2022:14:14:19 +0000] "GET /api/system/status HTTP/1.1" 200 76 "-" "Wget" "AYG09YiESBIBExrJAAAE"
10.200.6.1 - - [30/Jun/2022:14:14:41 +0000] "GET /api/system/liveness HTTP/1.1" 401 - "-" "kube-probe/1.20" "AYG09YiESBIBExrJAAAF"
10.200.6.247 - - [30/Jun/2022:14:14:49 +0000] "GET /api/system/status HTTP/1.1" 200 76 "-" "Wget" "AYG09YiESBIBExrJAAAG"
10.200.6.1 - - [30/Jun/2022:14:15:11 +0000] "GET /api/system/liveness HTTP/1.1" 401 - "-" "kube-probe/1.20" "AYG09YiESBIBExrJAAAH"
10.200.6.247 - - [30/Jun/2022:14:15:19 +0000] "GET /api/system/status HTTP/1.1" 200 76 "-" "Wget" "AYG09YiESBIBExrJAAAI"
10.200.6.1 - - [30/Jun/2022:14:15:41 +0000] "GET /api/system/liveness HTTP/1.1" 401 - "-" "kube-probe/1.20" "AYG09YiESBIBExrJAAAJ"
10.200.6.1 - - [30/Jun/2022:14:21:49 +0000] "GET /api/system/status HTTP/1.1" 200 82 "-" "kube-probe/1.20" "-"
10.200.6.1 - - [30/Jun/2022:14:21:49 +0000] "GET /api/system/status HTTP/1.1" 200 82 "-" "kube-probe/1.20" "-"

web.log

2022.06.30 14:13:19 INFO  web[][o.s.s.p.Platform] WebServer is operational
2022.06.30 14:15:43 INFO  web[][o.s.p.ProcessEntryPoint] Gracefully stopping process
2022.06.30 14:15:43 INFO  web[][o.s.s.n.NotificationDaemon] Notification service stopped
2022.06.30 14:15:43 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n java.base@11.0.13/java.lang.Object.wait(Native Method)\n java.base@11.0.13/java.lang.Object.wait(Unknown Source)\n java.base@11.0.13/java.util.TimerThread.mainLoop(Unknown Source)\n java.base@11.0.13/java.util.TimerThread.run(Unknown Source)
2022.06.30 14:15:43 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n java.base@11.0.13/java.lang.Object.wait(Native Method)\n app//oracle.jdbc.driver.BlockSource$ThreadedCachingBlockSource$BlockReleaser.run(BlockSource.java:329)
2022.06.30 14:15:43 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [InterruptTimer] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n java.base@11.0.13/java.lang.Object.wait(Native Method)\n java.base@11.0.13/java.lang.Object.wait(Unknown Source)\n java.base@11.0.13/java.util.TimerThread.mainLoop(Unknown Source)\n java.base@11.0.13/java.util.TimerThread.run(Unknown Source)
2022.06.30 14:15:43 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Thread-5] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n java.base@11.0.13/java.net.SocketInputStream.socketRead0(Native Method)\n java.base@11.0.13/java.net.SocketInputStream.socketRead(Unknown Source)\n java.base@11.0.13/java.net.SocketInputStream.read(Unknown Source)\n java.base@11.0.13/java.net.SocketInputStream.read(Unknown Source)\n java.base@11.0.13/java.io.BufferedInputStream.fill(Unknown Source)\n java.base@11.0.13/java.io.BufferedInputStream.read1(Unknown Source)\n java.base@11.0.13/java.io.BufferedInputStream.read(Unknown Source)\n java.naming@11.0.13/com.sun.jndi.ldap.Connection.run(Unknown Source)\n java.base@11.0.13/java.lang.Thread.run(Unknown Source)
2022.06.30 14:15:43 INFO  web[][o.s.s.app.WebServer] WebServer stopped
2022.06.30 14:21:32 INFO  web[][o.s.p.ProcessEntryPoint] Starting web

I couldn’t find any obvious errors. I’d appreciate it if you could spot something for me to investigate further.

Thanks,
Sunil

Hi Sunil,

With nothing in your logs, there’s nothing to investigate on the SonarQube side. This is very likely coming from outside SonarQube.

Ann

Do you know what might be returning a 401 here?

Events:
  Type     Reason     Age                   From                                  Message
  ----     ------     ----                  ----                                  -------
  Warning  Unhealthy  43m (x202 over 14h)   kubelet, k8s-worker-green-toolsuat-5  Startup probe failed: Get "http://10.200.6.247:9000/api/system/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  BackOff    13m (x1510 over 13h)  kubelet, k8s-worker-green-toolsuat-5  Back-off restarting failed container
  Warning  Unhealthy  8m8s (x172 over 14h)  kubelet, k8s-worker-green-toolsuat-5  Startup probe failed: Get "http://10.200.6.247:9000/api/system/status": dial tcp 10.200.6.247:9000: connect: connection refused
  Warning  Unhealthy  3m1s (x655 over 14h)  kubelet, k8s-worker-green-toolsuat-5  Liveness probe failed: HTTP probe failed with statuscode: 401

I don’t see any authentication in the probe definitions; they are simple calls made from within the container.

    Liveness:   http-get http://:http/api/system/liveness delay=60s timeout=1s period=30s #success=1 #failure=6
    Readiness:  exec [sh -c #!/bin/bash
# A Sonarqube container is considered ready if the status is UP, DB_MIGRATION_NEEDED or DB_MIGRATION_RUNNING
# status about migration are added to prevent the node to be kill while sonarqube is upgrading the database.
host="$(hostname -i || echo '127.0.0.1')"
if wget --proxy off -qO- http://${host}:9000/api/system/status | grep -q -e '"status":"UP"' -e '"status":"DB_MIGRATION_NEEDED"' -e '"status":"DB_MIGRATION_RUNNING"'; then
  exit 0
fi
exit 1
] delay=60s timeout=1s period=30s #success=1 #failure=6
    Startup:  http-get http://:http/api/system/status delay=30s timeout=1s period=10s #success=1 #failure=24
    Environment Variables from:
      sonarqube-sonarqube-jdbc-config  ConfigMap  Optional: false
    Environment:
      SONAR_WEB_JAVAOPTS:
      SONAR_CE_JAVAOPTS:
      SONAR_JDBC_PASSWORD:       <set to the key 'jdbc-password' in secret 'sonarqube-sonarqube'>                                 Optional: false
      SONAR_WEB_SYSTEMPASSCODE:  <set to the key 'SONAR_WEB_SYSTEMPASSCODE' in secret 'sonarqube-sonarqube-monitoring-passcode'>  Optional: false
    Mounts:
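As a side note, the matching logic in that readiness script can be sanity-checked in isolation, outside the pod. Here is a minimal sketch using an illustrative `/api/system/status` payload (the JSON values here are made up, not taken from the server):

```shell
# Sample payload shaped like /api/system/status; values are illustrative.
status_json='{"id":"ABC123","version":"8.9.8","status":"UP"}'

# Same grep pattern the readiness probe script uses.
if echo "$status_json" | grep -q \
    -e '"status":"UP"' \
    -e '"status":"DB_MIGRATION_NEEDED"' \
    -e '"status":"DB_MIGRATION_RUNNING"'; then
  echo "ready"
else
  echo "not ready"
fi
```

A payload with `"status":"DOWN"` falls through to the `not ready` branch, so the grep itself behaves as intended.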

I’m not sure what is making the request that gets the 401, so I don’t know where to fix it. Any insight on this?

Thanks,
Sunil

Hello @sgurudu,

It seems the liveness probe (http-get http://:http/api/system/liveness) of the SonarQube container is returning HTTP code 401, which eventually causes the kubelet to restart the container.

You can check the result of this call manually from within the container:
kubectl exec -it <name_of_sq_pod> -- bash
wget localhost:9000/sonarqube/api/system/liveness --header="X-Sonar-Passcode: <monitoring_pass_code>" -O- --server-response

Could you also specify the SonarQube edition and version you are using?

Thank you

Hi Leo,

I did verify that the probe works fine from inside the container, both with the monitoring passcode and without it.

    window.serverStatus = 'UP';
    window.instance = 'SonarQube';

As I mentioned earlier, there are absolutely no issues during the two minutes it serves. I tried it on two different k8s clusters and hit the same issue, which is the surprising part.

SonarQube version: 8.9.8, Community Edition.
Helm chart version: latest from the SonarSource chart repository.

Thanks,
Sunil

In the official documentation, it is stated that “If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy.”

In your case, I suspect that because the liveness probe keeps failing with a 401, the kubelet kills the container, which would explain why there are no error logs on the SonarQube side.

The weird thing is that you are getting HTTP code 401 on api/system/liveness (it should be 403 if the monitoring passcode were wrong) and that the call works from within the container.

Maybe check whether you have a proxy (such as istio-proxy) set up in your cluster that could cause the probe to fail.
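For example, listing the containers in the pod would show whether a sidecar has been injected (the pod name is a placeholder, as before):

```shell
# List the container names in the SonarQube pod; an injected sidecar such
# as "istio-proxy" would appear alongside "sonarqube".
kubectl get pod <name_of_sq_pod> -o jsonpath='{.spec.containers[*].name}'
```

If a sidecar proxy is present, probe traffic from the kubelet may be intercepted by it rather than reaching SonarQube directly, which could account for the differing responses inside and outside the container.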