Analysis not completing for PR

Template for a good bug report, formatted with Markdown:

  • versions used (SonarQube, Scanner, Plugin, and any relevant extension)
    SonarQube 8.1 Developer Edition, Scanner 4.2.0.1873-linux, C/C++ community plugin

  • error observed (wrap logs/code in triple quotes ``` for proper formatting)

Analysis never completes


Seems to only be on a particular branch but I’m not sure where the issue is.

Normally takes about 5s

  • steps to reproduce

Run analysis on a particular branch

  • potential workaround

Works fine with branch analysis but not PR analysis
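
For context, the distinction being drawn between the two modes would look roughly like this with the standard sonar-scanner CLI on Developer Edition; the PR key and branch names below are placeholders, not values from the report:

```
# Branch analysis - reported as working
sonar-scanner -Dsonar.branch.name=my-feature-branch

# PR analysis of the same branch - the one that never completes
sonar-scanner \
  -Dsonar.pullrequest.key=42 \
  -Dsonar.pullrequest.branch=my-feature-branch \
  -Dsonar.pullrequest.base=master
```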

Is there any warning or error in the server logs? Are other PRs in the same project working as expected?

ce.log has:

2020.01.28 09:36:29 WARN ce[][okhttp3.OkHttpClient] A connection to https://gitlab.fphcare.com/ was leaked. Did you forget to close a response body? To see where this was allocated, set the OkHttpClient logger level to FINE: Logger.getLogger(OkHttpClient.class.getName()).setLevel(Level.FINE);

but I’m not sure if that’s just because GitLab times out after 300s?

es.log has:

2020.01.23 12:31:07 INFO es[][o.e.g.GatewayService] recovered [7] indices into cluster_state
2020.01.23 12:31:08 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]] ...]).
2020.01.23 13:31:40 WARN es[][o.e.d.i.q.BoolQueryBuilder] Should clauses in the filter context will no longer automatically set the minimum should match to 1 in the next major version. You should group them in a [filter] clause or explicitly set [minimum_should_match] to 1 to restore this behavior in the next major version.
sonar.log doesn’t seem to have any errors
web.log has a stack trace:

2020.01.28 09:37:05 ERROR web[AW/PmPv4YbK7aDkMAAWw][o.s.s.p.UpdateCenterClient] Fail to connect to update center
org.sonar.api.utils.SonarException: Fail to download: https://update.sonarsource.org/update-center.properties (no proxy)

We’ve just been evaluating, so we don’t really have lots of PRs running, but it was definitely working for a previous branch. I’ll try to make another PR.

I tried posting some log messages but the system marked it as spam for review :pensive:

Thanks - the logs are now posted.
Unfortunately they weren’t that useful. Please let us know how other PRs go.
Also, I see that you had let it run for 5 minutes when the screenshot was taken. Please see if anything happens when you let it run for longer.

So yeah, if I create a branch and PR with effectively no changes from master, it works fine:

So I suspect it’s something to do with the code changes in the branch, but I’m just not sure how to diagnose where the problem is. Should I increase the verbosity of logging somewhere?

Ok. Yes, you can try increasing the verbosity of the CE by modifying the property sonar.log.level.ce in conf/sonar.properties. As I said before, I would also try letting it run longer to see if it ends with an error.
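
A minimal sketch of that change (the property name comes from the reply above; DEBUG is an assumption about the level you want, and SonarQube needs a restart to pick it up):

```
# conf/sonar.properties - raise Compute Engine log verbosity
# (revert to INFO once done, since DEBUG/TRACE logs grow quickly)
sonar.log.level.ce=DEBUG
```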

How long should I wait? It usually takes 5 seconds and I’ve left it for more than an hour…

It seems to be something to do with the PR decoration - I tried setting the proxy field in the config, and when I accidentally did this wrong (I put http:// in front of the hostname) the analysis succeeded (though it obviously didn’t do the PR decoration).
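
Assuming “the config” here means the standard outbound-proxy keys in conf/sonar.properties, the working form would look roughly like the sketch below; host and port are placeholders:

```
# conf/sonar.properties - outbound proxy used for calls such as PR decoration
# Correct form: a bare hostname, no scheme
http.proxyHost=proxy.example.com
http.proxyPort=3128

# Accidentally writing http.proxyHost=http://proxy.example.com (as described
# above) let the analysis finish, but PR decoration was skipped.
```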
Normally it seems to show this over and over again:

2020.01.28 10:52:54 DEBUG ce[AW_o9JEgUsMqR9juSiAw][jdk.event.security]  TLSHandshake: gitlab.fphcare.com:3128, TLSv1.3, TLS_AES_256_GCM_SHA384, -1720027755
2020.01.28 10:52:54 DEBUG ce[AW_o9JEgUsMqR9juSiAw][o.i.http2.Http2] >> CONNECTION 505249202a20485454502f322e300d0a0d0a534d0d0a0d0a
2020.01.28 10:52:54 DEBUG ce[AW_o9JEgUsMqR9juSiAw][o.i.http2.Http2] >> 0x00000000     6 SETTINGS
2020.01.28 10:52:54 DEBUG ce[AW_o9JEgUsMqR9juSiAw][o.i.http2.Http2] >> 0x00000000     4 WINDOW_UPDATE
2020.01.28 10:52:54 DEBUG ce[AW_o9JEgUsMqR9juSiAw][o.i.http2.Http2] >> 0x00000003  4417 HEADERS       END_HEADERS
2020.01.28 10:52:54 DEBUG ce[AW_o9JEgUsMqR9juSiAw][o.i.http2.Http2] >> 0x00000003     0 DATA          END_STREAM
2020.01.28 10:52:54 DEBUG ce[][o.i.http2.Http2] << 0x00000000    18 SETTINGS
2020.01.28 10:52:54 DEBUG ce[][o.i.http2.Http2] << 0x00000000     4 WINDOW_UPDATE
2020.01.28 10:52:54 DEBUG ce[][o.i.http2.Http2] >> 0x00000000     0 SETTINGS      ACK
2020.01.28 10:52:54 DEBUG ce[][o.i.http2.Http2] << 0x00000000     0 SETTINGS      ACK
2020.01.28 10:52:54 DEBUG ce[][o.i.http2.Http2] << 0x00000000     8 GOAWAY
2020.01.28 10:52:54 DEBUG ce[][o.i.http2.Http2] >> 0x00000003     4 RST_STREAM

This actually looks similar to “GitLab CI loop of empty posts for PR decoration when quality gate fails”, and maybe that explains why the other PR worked: it had a passing quality gate, whereas the one that loops forever has a failing quality gate.

I agree that it looks like the PR decoration is blocking.
It would be great if we could get a stack trace of the CE process to see what exactly is happening.
If you have a JDK installed (instead of just a JRE), you can do the following (a short command sketch follows the list):

  • run <JAVA_HOME>/bin/jps and find the pid of CeServer
  • run <JAVA_HOME>/bin/jstack <pid> to get the stack dump.
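
A rough sketch of those two steps, assuming a Linux shell and that JAVA_HOME points at the JDK running SonarQube (the output file name is just an example):

```
# 1. List Java processes and find the pid of CeServer
$JAVA_HOME/bin/jps -l

# 2. Dump the CE thread stacks to a file (replace <pid> with the pid from jps)
$JAVA_HOME/bin/jstack <pid> > ce-threaddump.txt
```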

Thanks for the details of how to get a stack trace. Unfortunately this seems to have gone away - I’m not sure if something changed with our GitLab server or network, or whether it was to do with the fact that we upgraded from a trial license to a paid license. I’ll keep an eye on it and get a stack trace if it happens again.

Ok. Thanks for the update.