We have SonarQube 8.8 Community Edition running on a Kubernetes cluster via the community helm chart. We recently upgraded from image tag `8.1-community-beta` to `8.8-community`. The UI was extremely slow after the upgrade, so we’ve since deleted hundreds of branches that hadn’t been deleted automatically and run `VACUUM FULL` on our Postgres instance. This cleaned up a few hundred gigabytes of data and the UI now seems to be working just fine.
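(For anyone doing similar cleanup, space usage per table can be checked with a generic Postgres query like the one below, run against the SonarQube database; nothing here is SonarQube-specific:)

```sql
-- Largest tables in the current database, including indexes and TOAST.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```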
Unfortunately, analysis is still slow. We’re using the `latest`-tagged sonarqube-scanner image, which takes about 8 minutes to run on the project; that’s fine and hasn’t changed, but the server-side analysis by the Compute Engine has gone from ~5 minutes to ~30 minutes. We’ve disabled all third-party plugins, disabled duplication detection (via a `**` exclusion), and set the following JVM properties:
```
sonar.search.javaOpts: "-Xmx4G -Xms4G"
sonar.ce.javaOpts: "-Xmx4G -Xms4G"
```
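(For reference, a sketch of one way to set these through the helm chart values, assuming the chart forwards environment variables to the container; the official 8.x images map `SONAR_CE_JAVAOPTS` and `SONAR_SEARCH_JAVAOPTS` onto the corresponding properties, but check your chart version for the exact key name:)

```yaml
# Illustrative values.yaml excerpt, not our exact configuration.
# The key may be `env`, `extraEnv`, or similar depending on chart version.
env:
  - name: SONAR_CE_JAVAOPTS
    value: "-Xmx4G -Xms4G"
  - name: SONAR_SEARCH_JAVAOPTS
    value: "-Xmx4G -Xms4G"
```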
The following is a lightly-parsed version of the Compute Engine logs from the server:

| Step | Result |
| --- | --- |
| Load analysis metadata | status=SUCCESS, time=26ms |
| Initialize | status=SUCCESS, time=0ms |
| Build tree of components | components=11558, status=SUCCESS, time=40891ms |
| Validate project | status=SUCCESS, time=17ms |
| Load quality profiles | status=SUCCESS, time=308ms |
| Load Quality gate | status=SUCCESS, time=16ms |
| Load new code period | status=SUCCESS, time=17ms |
| Detect file moves | reportFiles=9180, dbFiles=9180, addedFiles=0, status=SUCCESS, time=424ms |
| Load duplications | duplications=0, status=SUCCESS, time=15793ms |
| Compute cross project duplications | status=SUCCESS, time=0ms |
| Compute size measures | status=SUCCESS, time=40314ms |
| Compute new coverage | status=SUCCESS, time=163552ms |
| Compute coverage measures | status=SUCCESS, time=13832ms |
| Compute comment measures | status=SUCCESS, time=34ms |
| Copy custom measures | status=SUCCESS, time=9ms |
| Compute duplication measures | status=SUCCESS, time=28ms |
| Compute size measures on new code | status=SUCCESS, time=38ms |
| Compute language distribution | status=SUCCESS, time=27ms |
| Compute test measures | status=SUCCESS, time=19ms |
| Compute complexity measures | status=SUCCESS, time=48ms |
| Load measure computers | status=SUCCESS, time=0ms |
| Compute Quality Profile status | status=SUCCESS, time=37ms |
| Execute component visitors | status=SUCCESS, time=189988ms |
| Checks executed after computation of measures | status=SUCCESS, time=0ms |
| Compute Quality Gate measures | status=SUCCESS, time=0ms |
| Compute Quality profile measures | status=SUCCESS, time=10ms |
| Generate Quality profile events | status=SUCCESS, time=82ms |
| Generate Quality gate events | status=SUCCESS, time=5ms |
| Check upgrade possibility for not analyzed code files. | status=SUCCESS, time=0ms |
| Persist components | status=SUCCESS, time=329ms |
| Persist analysis | status=SUCCESS, time=21ms |
| Persist analysis properties | status=SUCCESS, time=38ms |
| Persist measures | inserts=90, status=SUCCESS, time=65ms |
| Persist live measures | insertsOrUpdates=529036, status=SUCCESS, time=192873ms |
| Persist duplication data | insertsOrUpdates=0, status=SUCCESS, time=4ms |
| Persist new ad hoc Rules | status=SUCCESS, time=0ms |
| Persist issues | cacheSize=34 KB, inserts=8, updates=59, merged=0, status=SUCCESS, time=464ms |
| Persist project links | status=SUCCESS, time=0ms |
| Persist events | status=SUCCESS, time=58ms |
| Persist sources | status=SUCCESS, time=245065ms |
| Persist cross project duplications | status=SUCCESS, time=0ms |
| Enable analysis | status=SUCCESS, time=53ms |
| Update last usage date of quality profiles | status=SUCCESS, time=62ms |
| Purge db | status=SUCCESS, time=352ms |
| Index analysis | status=SUCCESS, time=8192ms |
| Update need issue sync for branch | status=SUCCESS, time=10ms |
| Send issue notifications | status=SUCCESS, time=10ms |
| Publish task results | status=SUCCESS, time=0ms |
| Trigger refresh of Portfolios and Applications | status=SUCCESS, time=0ms |
| Webhooks | globalWebhooks=0, projectWebhooks=0, status=SUCCESS, time=8ms |
| Pull Request Decoration | status=SUCCESS, time=0ms |
The total duration in this case was 1993457ms (33 minutes).
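(In case it’s useful to anyone reproducing this, the per-step timings can be pulled out of `ce.log` with a quick script along these lines; the line format is assumed to match the standard CE step logging, minus timestamps:)

```python
import re

# Matches CE step lines such as:
#   Persist sources | status=SUCCESS | time=245065ms
# (real ce.log lines carry timestamp/thread prefixes; strip those first)
STEP_RE = re.compile(r"(?P<step>[\w .]+?) \| .*?time=(?P<ms>\d+)ms")

def step_timings(log_text):
    """Return (step, milliseconds) pairs sorted slowest-first."""
    timings = []
    for line in log_text.splitlines():
        m = STEP_RE.search(line)
        if m:
            timings.append((m.group("step").strip(), int(m.group("ms"))))
    return sorted(timings, key=lambda t: t[1], reverse=True)

sample = """\
Persist sources | status=SUCCESS | time=245065ms
Persist live measures | insertsOrUpdates=529036 | status=SUCCESS | time=192873ms
Load analysis metadata | status=SUCCESS | time=26ms
"""

for step, ms in step_timings(sample):
    print(f"{step}: {ms}ms")
```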
The warnings produced by this scan were:

- Dependencies/libraries were not provided for analysis of SOURCE files. The 'sonar.java.libraries' property is empty. Verify your configuration, as you might end up with less precise results.
- Unable to import 1 RuboCop report file(s). Please check that property 'sonar.ruby.rubocop.reportPaths' is correctly configured and the analysis logs for more details.
- Missing blame information for 4402 files. This may lead to some features not working correctly. Please check the analysis logs.
None of these, to my limited knowledge, sound like they’d make the scan slower.
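(For completeness, the first two warnings are normally addressed with scanner-side properties like the ones below; the paths are illustrative placeholders, not our real configuration. The missing-blame warning is usually a shallow-clone symptom that a full-history fetch in CI clears up:)

```properties
# sonar-project.properties (illustrative paths only)
# Give the Java analyzer the compiled dependencies:
sonar.java.libraries=build/libs/**/*.jar
# Point the Ruby analyzer at the RuboCop JSON report:
sonar.ruby.rubocop.reportPaths=rubocop-report.json
```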
In terms of the slowest entries:

| Step | Time (ms) |
| --- | ---: |
| Persist sources | 245065 |
| Persist live measures | 192873 |
| Execute component visitors | 189988 |
| Compute new coverage | 163552 |
| Build tree of components | 40891 |
| Compute size measures | 40314 |
| Load duplications | 15793 |
| Compute coverage measures | 13832 |
| Index analysis | 8192 |
Observations:
The slow “Persist sources” and “Persist live measures” steps might suggest a database (Postgres) issue; is there any guidance here? Our Postgres server feels small, but it was not stressed at all during the analysis:
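(If it helps with diagnosis, this is roughly how we’d check whether the persistence steps are DB-bound, assuming the `pg_stat_statements` extension is enabled; the query is generic Postgres, not SonarQube-specific:)

```sql
-- Top statements by cumulative execution time since stats were reset.
-- Requires pg_stat_statements in shared_preload_libraries plus
-- CREATE EXTENSION pg_stat_statements. Column names shown are for
-- Postgres 13+; on older versions use total_time / mean_time instead.
SELECT substring(query, 1, 60) AS query_start,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 1) AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```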
In this analysis we did not provide Sonar with any code coverage information, so why did “Compute new coverage” take almost three minutes?
Is there a way to speed up the “Execute component visitors” step? It does not look like this is multi-threaded; if it is, can I increase the thread count? Our server CPU usage was very low (note the graph includes the web UI, Compute Engine, and Elasticsearch), so I do not think there were any bottlenecks or garbage-collection issues:
Any advice or suggestions for where to look next would be greatly appreciated. Sonar has proved extremely valuable, but we want to make sure it meets our needs before considering purchasing the hosted option. It’s also worth saying that, to my knowledge, nothing around the filesystem has changed, so I don’t think the bottleneck is there.