Compute Engine Failed to Start

The Compute Engine failed to start, and I can see the error below.

sonar.log:

2019.09.10 04:15:44 INFO  web[][o.s.s.q.ProjectsInWarningDaemon] Counting number of projects in warning is not started as there are no projects in this situation.
2019.09.10 04:15:44 INFO  web[][o.s.s.p.p.PlatformLevelStartup] Running Community Edition
2019.09.10 04:15:44 INFO  web[][o.s.s.p.Platform] WebServer is operational
2019.09.10 04:20:29 INFO  web[][o.s.p.ProcessEntryPoint] Hard stopping process
2019.09.10 04:20:29 INFO  web[][o.s.s.n.NotificationDaemon] Notification service stopped
2019.09.10 04:20:30 WARN  web[][o.s.p.ProcessEntryPoint$HardStopperThread] Can not stop in 1000ms
2019.09.10 04:20:30 WARN  web[][o.s.s.a.EmbeddedTomcat] Failed to stop web server
org.apache.catalina.LifecycleException: Failed to stop component [StandardServer[-1]]
        at org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:238)
        at org.apache.catalina.startup.Tomcat.stop(Tomcat.java:437)
        at org.sonar.server.app.EmbeddedTomcat.terminate(EmbeddedTomcat.java:104)
        at org.sonar.server.app.WebServer.hardStop(WebServer.java:83)
        at org.sonar.process.ProcessEntryPoint$HardStopperThread.lambda$new$0(ProcessEntryPoint.java:219)

ce.log

2019.09.10 04:15:46 INFO  ce[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2019.09.10 04:15:46 INFO  ce[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2019.09.10 04:15:48 INFO  ce[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2019.09.10 04:15:48 INFO  ce[][o.sonar.db.Database] Create JDBC data source for jdbc:oracle:thin:@fhc-sonar-qa.fmr.com:1521/hcrtcq1.fmr.com
2019.09.10 04:20:28 ERROR ce[][o.s.ce.app.CeServer] Compute Engine startup failed
java.lang.IllegalStateException: Fail to connect to database
        at org.sonar.db.DefaultDatabase.start(DefaultDatabase.java:90)
        at org.sonar.core.platform.StartableCloseableSafeLifecyleStrategy.start(StartableCloseableSafeLifecyleStrategy.java:40)
        at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84)
        at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169)
        at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132)
        at org.picocontainer.behaviors.Stored.start(Stored.java:110)
        at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1016)
        at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1009)
        at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:767)
        at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:135)
        at org.sonar.ce.container.ComputeEngineContainerImpl.startLevel1(ComputeEngineContainerImpl.java:210)
        at org.sonar.ce.container.ComputeEngineContainerImpl.start(ComputeEngineContainerImpl.java:182)

Could you please help me figure out how to fix this issue?

Hi,

Have you checked your web.log? You’ll probably find the details of the database connection problem there.

Ann

I got a response on Stack Overflow.

I will quote the post here:

What may be happening here is a Java entropy issue.
Entropy is the randomness collected or generated by an operating system or application for use in cryptography or other purposes that require random data.
The kernel gathers entropy and stores it in an entropy pool, making the random data available to operating system processes and applications through the special files /dev/random and /dev/urandom.

Reading from /dev/random drains the entropy pool by the requested number of bits/bytes, providing the high degree of randomness often desired in cryptographic operations. If the entropy pool is completely drained and sufficient entropy is not available, a read on /dev/random blocks until additional entropy is gathered. Because of this, applications reading from /dev/random may block for an unpredictable period of time.
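On Linux you can see how much entropy the kernel currently estimates it has; a quick sketch reading the kernel's counter (the /proc path is Linux-specific, and on kernels before 5.6 a low value here is what made /dev/random reads block):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class EntropyCheck {
    public static void main(String[] args) throws Exception {
        // Linux exposes the current entropy pool estimate (in bits) here;
        // low values on older kernels mean reads from /dev/random may block.
        Path p = Path.of("/proc/sys/kernel/random/entropy_avail");
        int bits = Integer.parseInt(Files.readString(p).trim());
        System.out.println("entropy_avail bits: " + bits);
    }
}
```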

In contrast to the above, reading from /dev/urandom does not block. Reading from /dev/urandom also drains the entropy pool, but when short of sufficient entropy it does not block; instead it reuses bits from the partially drained pool. This is said to be susceptible to cryptanalytic attacks. It is a theoretical possibility, but it is why reading from /dev/urandom is discouraged for gathering randomness in cryptographic operations.

The java.security.SecureRandom class, by default, reads from /dev/random and can therefore block for an unpredictable period of time. If the read operation does not return within the required time, the Oracle server times out the client (the JDBC driver, in this case) and drops the communication by closing the socket on its end. When the client tries to resume communication after returning from the blocking call, it encounters an IO exception. This problem can occur randomly on any platform, especially where entropy is gathered from hardware noise.
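The distinction can be seen in SecureRandom itself: nextBytes() draws from the configured PRNG (non-blocking with the usual NativePRNG on Linux), while generateSeed() pulls from the blocking seed source. A minimal sketch:

```java
import java.security.SecureRandom;

public class SecureRandomDemo {
    public static void main(String[] args) {
        // The seed source is controlled by securerandom.source in
        // $JAVA_HOME/conf/security/java.security and can be overridden
        // per JVM with -Djava.security.egd=file:///dev/urandom
        SecureRandom sr = new SecureRandom();
        System.out.println("algorithm: " + sr.getAlgorithm());

        byte[] buf = new byte[16];
        sr.nextBytes(buf); // does not block with the default NativePRNG
        System.out.println("generated " + buf.length + " random bytes");
        // sr.generateSeed(16) is the call that may block while the
        // kernel entropy pool refills on older kernels.
    }
}
```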

The solution to this problem is to override the default behaviour of the java.security.SecureRandom class to use a non-blocking read from /dev/urandom instead of the blocking read from /dev/random. This can be done by adding the system property -Djava.security.egd=file:///dev/urandom to the JVM. Though this is a good solution for applications like JDBC drivers, it is discouraged for applications that perform core cryptographic operations such as cryptographic key generation.

So, could you try passing -Djava.security.egd=file:///dev/urandom to the three JVMs of SonarQube (Compute Engine, Web, Elasticsearch) by modifying the sonar.properties file and restarting SonarQube?
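In sonar.properties, that would look something like the fragment below. The property names follow the stock SonarQube configuration template; check them against the commented-out defaults in your version's sonar.properties before applying:

```properties
# Use the non-blocking entropy source for all three SonarQube JVMs
sonar.web.javaAdditionalOpts=-Djava.security.egd=file:///dev/urandom
sonar.ce.javaAdditionalOpts=-Djava.security.egd=file:///dev/urandom
sonar.search.javaAdditionalOpts=-Djava.security.egd=file:///dev/urandom
```

Using the *AdditionalOpts properties rather than replacing the main javaOpts keeps the default memory settings for each process intact.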

I added it to the properties file and it is working now.