SonarScanner: Unable to load component class org.sonar.scanner.bootstrap.ScannerPluginInstaller

SonarScanner is failing in our Bitbucket pipeline in production.

Attached is the log file of our pipeline run:

pipelineLog-{0000a8d5-517a-4552-8056-87d3f27493d0}.txt (19.0 KB)

Hi,

Did you read these logs? Did you notice these lines:

18:22:01.633 DEBUG: GET 500 https://sonarqube.powerdigital.io/api/settings/values.protobuf | time=529ms
...
Caused by: org.sonarqube.ws.client.HttpException: Error 500 on https://sonarqube.powerdigital.io/api/settings/values.protobuf : <!doctype html><html lang="en"><head><title>HTTP Status 500 – Internal Server Error</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 500 – Internal Server Error</h1></body></html>

Did you check your server logs?
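Assuming a default zip install, they're all under $SONARQUBE_HOME/logs: sonar.log (main process), web.log (web server), ce.log (compute engine), and es.log (Elasticsearch).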

 
Ann

I saw that error. We run the scanner in our Bitbucket pipeline, so the call is being made by the Sonar plugin inside Bitbucket Pipelines. I don't think we'd be able to see the server logs for that, because the failure seems to happen in the plugin's call. I wonder if something is wrong with the Bitbucket pipeline process itself.

Hi,

You need to get to the bottom of this. The bootstrap call is the first one the scanner makes during analysis, to… bootstrap the analysis. If this call fails, then the scanner can’t talk to the server and there’s no point in continuing.

So either there’s a problem on the server, which you’ll need the server logs to diagnose, or there’s some intermediary on your network that needs to be dealt with.
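
If you want to rule Bitbucket out entirely, you can replay that same call yourself from any machine that can reach the server. A rough sketch in Python, where MY_TOKEN is a placeholder for a user token (generated under My Account > Security):

import requests

# Replay the scanner's bootstrap call outside the pipeline.
resp = requests.get(
    "https://sonarqube.powerdigital.io/api/settings/values.protobuf",
    auth=("MY_TOKEN", ""),  # SonarQube takes the token as the username
    timeout=30,
)
print(resp.status_code)  # a 500 here too means the problem is server-side

If that returns 500 from outside the pipeline as well, Bitbucket is off the hook.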

 
HTH,
Ann

The other issue is that no one on our team can log in to our SonarQube dashboard at https://sonarqube.powerdigital.io/. We were all able to log in before, so I'm wondering if something happened with our license or account to cause this.

Hi,

The first call in analysis returns a 500 error, and no one can log in. This is not about your license. Again, check your server logs.

(All an expired license will do is keep you from processing an analysis report, server-side. Everything else, especially the things you've talked about here, will work.)

 
Ann

I was able to get server log access.

So I read the latest es.log file, and I see the following errors:

2023.01.10 19:24:19 WARN es[][o.e.c.r.a.DiskThresholdMonitor] flood stage disk watermark [95%] exceeded on [8g_fTTzWRz2xUOKtGGX6xA][sonarqube][/opt/sonarqube/sonarqube-9.1.0.47736/data/es7/nodes/0] free: 0b[0%], all indices on this node will be marked read-only
2023.01.10 19:24:49 WARN es[][o.e.c.r.a.DiskThresholdMonitor] flood stage disk watermark [95%] exceeded on [8g_fTTzWRz2xUOKtGGX6xA][sonarqube][/opt/sonarqube/sonarqube-9.1.0.47736/data/es7/nodes/0] free: 0b[0%], all indices on this node will be marked read-only
2023.01.10 19:25:05 ERROR es[][o.e.m.f.FsHealthService] health check of [/opt/sonarqube/sonarqube-9.1.0.47736/data/es7/nodes/0] failed
java.io.IOException: No space left on device
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:?]
    at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62) ~[?:?]
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113) ~[?:?]
    at sun.nio.ch.IOUtil.write(IOUtil.java:79) ~[?:?]
    at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280) ~[?:?]
    at java.nio.channels.Channels.writeFullyImpl(Channels.java:74) ~[?:?]
    at java.nio.channels.Channels.writeFully(Channels.java:97) ~[?:?]
    at java.nio.channels.Channels$1.write(Channels.java:172) ~[?:?]
    at java.io.OutputStream.write(OutputStream.java:122) ~[?:?]
    at org.elasticsearch.monitor.fs.FsHealthService$FsHealthMonitor.monitorFSHealth(FsHealthService.java:161) [elasticsearch-7.14.1.jar:7.14.1]
    at org.elasticsearch.monitor.fs.FsHealthService$FsHealthMonitor.run(FsHealthService.java:135) [elasticsearch-7.14.1.jar:7.14.1]
    at org.elasticsearch.threadpool.Scheduler$ReschedulingRunnable.doRun(Scheduler.java:203) [elasticsearch-7.14.1.jar:7.14.1]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732) [elasticsearch-7.14.1.jar:7.14.1]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.14.1.jar:7.14.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
2023.01.10 19:25:19 WARN es[][o.e.c.r.a.DiskThresholdMonitor] flood stage disk watermark [95%] exceeded on [8g_fTTzWRz2xUOKtGGX6xA][sonarqube][/opt/sonarqube/sonarqube-9.1.0.47736/data/es7/nodes/0] free: 0b[0%], all indices on this node will be marked read-only
2023.01.10 19:25:49 WARN es[][o.e.c.r.a.DiskThresholdMonitor] flood stage disk watermark [95%] exceeded on [8g_fTTzWRz2xUOKtGGX6xA][sonarqube][/opt/sonarqube/sonarqube-9.1.0.47736/data/es7/nodes/0] free: 0b[0%], all indices on this node will be marked read-only
2023.01.10 19:26:19 WARN es[][o.e.c.r.a.DiskThresholdMonitor] flood stage disk watermark [95%] exceeded on [8g_fTTzWRz2xUOKtGGX6xA][sonarqube][/opt/sonarqube/sonarqube-9.1.0.47736/data/es7/nodes/0] ...
...
2023.01.10 23:59:19 WARN es[][o.e.c.r.a.DiskThresholdMonitor] flood stage disk watermark [95%] exceeded on [8g_fTTzWRz2xUOKtGGX6xA][sonarqube][/opt/sonarqube/sonarqube-9.1.0.47736/data/es7/nodes/0] free: 1008.8mb[0.6%], all indices on this node will be marked read-only
2023.01.10 23:59:49 WARN es[][o.e.c.r.a.DiskThresholdMonitor] flood stage disk watermark [95%] exceeded on [8g_fTTzWRz2xUOKtGGX6xA][sonarqube][/opt/sonarqube/sonarqube-9.1.0.47736/data/es7/nodes/0] free: 1008.7mb[0.6%], all indices on this node will be marked read-only

So maybe we need to increase the disk size on the server?

Hi,

Something like that. Or delete some stuff.
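
Once you free up space below the watermarks, Elasticsearch (you're on 7.14.1, per the stack trace) should lift the read-only block on the indices by itself. If you want to see what's actually free on that volume, here's a minimal sketch, assuming Python 3 is available on the server:

import shutil

# Check free space on the volume holding the Elasticsearch data.
# The path comes from your es.log lines above.
total, used, free = shutil.disk_usage("/opt/sonarqube/sonarqube-9.1.0.47736/data/es7")
print(f"free: {free / 2**30:.1f} GiB of {total / 2**30:.1f} GiB ({free / total:.1%})")

You want used space comfortably below the 95% flood stage before restarting anything.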

 
Ann

I'm also getting this error in the web.log file:

2023.01.11 11:35:55 WARN web[][o.a.c.d.BasicDataSource] An internal object pool swallowed an Exception.
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:303)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
    at org.postgresql.Driver.makeConnection(Driver.java:465)
    at org.postgresql.Driver.connect(Driver.java:264)
    at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:55)
    at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:355)
    at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:889)
    at org.apache.commons.pool2.impl.GenericObjectPool.ensureIdle(GenericObjectPool.java:968)
    at org.apache.commons.pool2.impl.GenericObjectPool.ensureMinIdle(GenericObjectPool.java:946)
    at org.apache.commons.pool2.impl.BaseGenericObjectPool$Evictor.run(BaseGenericObjectPool.java:1148)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
    at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
    at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
    at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
    at org.postgresql.core.PGStream.<init>(PGStream.java:95)
    at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
    ... 16 common frames omitted

I'm not sure if this is a database connection issue on our end (I'm assuming so?).

I didn't set any of this up, so I'm really just guessing at the moment.

Hi,

Yes, that looks related to the DB. Get your filespace issue cleaned up and see if this recurs.
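
For what it's worth, "Connection refused" means nothing answered on localhost:5432 at all, which usually means PostgreSQL itself is down, and a completely full disk can do that too. Once you've cleaned up, here's a minimal sketch to check from the SonarQube host, assuming Python 3:

import socket

# Check whether anything is listening on the port web.log complains about.
try:
    with socket.create_connection(("localhost", 5432), timeout=5):
        print("something is accepting connections on localhost:5432")
except OSError as exc:
    # "Connection refused" here matches the web.log stack trace:
    # nothing is listening, so PostgreSQL is probably not running.
    print(f"cannot connect: {exc}")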

 
Ann

So our principal engineer restarted PostgreSQL after cleaning up some space on the server. Now we can log in to the SonarQube dashboard, which is great!

We still need to test the pipeline, but this may have been the cause of the issue!