I know it's been a long time since this thread was opened, but I'm facing the issue now. Our builds run in unique Docker instances, so it is not possible to preserve the plugins across builds.
@ganncamp, can you provide any clarification on the network flow used to download plugins?
Our setup:
SonarQube 9.2
Deployed as a StatefulSet on GKE (using the SonarQube-provided Helm chart, no modifications)
Serving traffic through a GCP HTTPS LB (60-second per-request timeout)
We frequently see builds fail at the 60-second mark. Are all plugins being downloaded via a single long-running request?
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 1:06.773s
INFO: Final Memory: 4M/20M
INFO: ------------------------------------------------------------------------
ERROR: Error during SonarScanner execution
java.lang.IllegalStateException: Fail to download plugin [ruby] into /root/.sonar/_tmp/fileCache17047521401284787132.tmp
The plugin which fails changes every time, so it's somewhat random. Also, we don't use Ruby, nor most of the plugins that are failing. Is it possible to prevent certain plugins from being downloaded to the scanner?
SonarQube official Helm chart, no modifications to the chart
Web JVM Xmx: 3072m
CE JVM Xmx: 3072m
K8s resources: 4 vCPU, 7Gi memory
Serving traffic through GCP HTTPS LB
initially a 60-second per-request timeout; builds failed after 60 seconds
increased the timeout to 180 seconds; builds now fail at 180 seconds
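For reference, the backend-service timeout change described above can be applied with something like the following (this is a sketch, not a fix for the root cause; "sonarqube-backend" is a placeholder name, not taken from this setup):

```shell
# Raise the per-request (backend service) timeout on a GCP HTTPS LB.
# "sonarqube-backend" is a placeholder; list the real names with:
#   gcloud compute backend-services list
gcloud compute backend-services update sonarqube-backend \
    --global \
    --timeout=180
```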
Following a recommendation from another thread, I increased the LB timeout to 180s per request, and increased Web and CE memory and CPU. I also enabled LB logging and Prometheus collection for SonarQube. Now builds are failing at the 180s mark. I don't see any correlation with CPU/memory load at the times when requests time out.
GET | 5.82 MB | 180.1s | ScannerNpm/2.8.1 | https://sonarqube.xxxx.net/api/plugins/download?plugin=go&acceptCompressions=pack200
GET | 2.33 MB | 180.2s | ScannerGradle/2.8-SNAPSHOT/6.9.1 | https://sonarqube.xxxx.net/api/plugins/download?plugin=securitypythonfrontend
GET | 5.82 MB | 180.1s | ScannerGradle/3.0-SNAPSHOT/6.5 | https://sonarqube.xxxx.net/api/plugins/download?plugin=go
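For anyone triaging similar LB logs, a small sketch to pick out the requests that hit the timeout (the field order is assumed from the excerpt above; the sample lines and hostname are illustrative):

```shell
# Flag plugin downloads whose duration reached the LB timeout (180s here).
# Assumed field layout: method | size | duration | agent | url
awk -F'|' '{
    d = $3; sub(/s/, "", d)              # strip the trailing "s" from the duration
    if (d + 0 >= 180) print "timed out:" $5
}' <<'EOF'
GET | 5.82 MB | 180.1s | ScannerNpm/2.8.1 | https://sonarqube.example.net/api/plugins/download?plugin=go
GET | 2.33 MB | 12.3s | ScannerGradle/3.0 | https://sonarqube.example.net/api/plugins/download?plugin=java
EOF
```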
Now, I suspect it is related to Tomcat threads or max connections. I have all of the Tomcat metrics from Prometheus, but I can't find any metric that spikes or correlates with the failed requests.
Still curious about finding the root cause of this issue. But to mitigate it, I thought about enabling CDN caching of plugins. However, I see that SonarQube responds with headers that prevent caching.
Would it be possible to configure the server to allow CDN caching at the network layer, to reduce the number of plugin requests? Specifically, overriding the Cache-Control header to something like Cache-Control: max-age=3600.
It would be the server operator's responsibility to control cache freshness and to invalidate the cache when upgrading the server.
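If the server itself can't be configured this way, one workaround is to override the headers at a caching reverse proxy placed in front of SonarQube. A sketch of the idea with nginx (not an official recommendation; the upstream name, paths, and TTL are placeholders):

```nginx
# Sketch only: cache plugin downloads at a reverse proxy in front of SonarQube,
# ignoring the server's no-cache headers. Upstream name, cache path, and TTL
# are placeholders, not SonarQube defaults.
proxy_cache_path /var/cache/nginx/sonar-plugins keys_zone=sonar_plugins:10m max_size=1g;

server {
    listen 443 ssl;
    server_name sonarqube.example.net;

    # Cache only the plugin download endpoint; everything else passes through.
    location /api/plugins/download {
        proxy_pass           http://sonarqube-backend;
        proxy_cache          sonar_plugins;
        proxy_ignore_headers Cache-Control Expires;   # override the server's caching headers
        proxy_cache_valid    200 1h;                  # equivalent to max-age=3600
        add_header           Cache-Control "max-age=3600";
    }

    location / {
        proxy_pass http://sonarqube-backend;
    }
}
```

The same idea applies to a CDN that supports overriding origin cache headers; as noted above, the cache then has to be purged manually when upgrading the server.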
Version 9.3 (build 51899) on an EC2 instance.
AWS Load Balancer Controller
Using this GitHub Actions workflow file to trigger analysis on the Sonar host:
name: Hello World-GITHUBRUNNER
on:
  push:
    branches:
      - main # or the name
jobs:
  build:
    name: Hello World-GITHUBRUNNER
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0  # Shallow clones should be disabled for a better relevancy of analysis
      - name: Set up JDK 11
        uses: actions/setup-java@v1
        with:
          java-version: 11
      - name: Cache SonarQube packages
        uses: actions/cache@v1
        with:
          path: ~/.sonar/cache
          key: ${{ runner.os }}-sonar
          restore-keys: ${{ runner.os }}-sonar
      - name: Cache Maven packages
        uses: actions/cache@v1
        with:
          path: ~/.m2
          key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
          restore-keys: ${{ runner.os }}-m2
      - name: Build and analyze
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # Needed to get PR information, if any
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: SONAR HOSTURL
        run: mvn -X -B verify org.sonarsource.scanner.maven:sonar-maven-plugin:sonar -Dsonar.web.http.keepAliveTimeout=3600 -Dsonar.projectKey=github-runner-repo
I've increased the timeout on the AWS Load Balancer to 3600 in the AWS Console and passed the -Dsonar.web.http.keepAliveTimeout=3600 argument in the command, but the action still fails with a plugin download error.
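One thing worth checking, since the ALB here is managed by the AWS Load Balancer Controller: a timeout changed in the console can be reverted when the controller reconciles the load balancer. A sketch of setting it declaratively on the Ingress instead (the Ingress name, host, service name, and port are placeholders, not taken from this setup):

```yaml
# Sketch: set the ALB idle timeout via the AWS Load Balancer Controller
# annotation, so the controller does not reconcile it back to the default.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonarqube
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=3600
spec:
  rules:
    - host: sonarqube.example.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sonarqube
                port:
                  number: 9000
```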