We have been using SonarCloud for around 10 months now with no issues. We use the SonarCloud GitHub Action in our CI/CD pipeline. On each commit pushed to an open PR, we run this step:
- name: SonarCloud Scan
  if: ${{ github.actor != 'dependabot[bot]' }}
  uses: sonarsource/sonarcloud-github-action@master
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
I compared the logs for the PRs I'm referring to against Action runs from just last week to make sure there wasn't a version change. Both PRs (meaning the successful Action from last week and the failing Action from today) are using the same version: SonarScanner 4.8.0.2856.
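Since the step references @master, the action itself can change underneath us even when the scanner version string looks the same. One thing I'm considering, just to rule that out, is pinning the action to a release tag (the tag below is only an example; I'd check the action's releases page for the right one):

- name: SonarCloud Scan
  if: ${{ github.actor != 'dependabot[bot]' }}
  # Pinned tag instead of @master so upstream action changes can be ruled out.
  # v1.9 is a placeholder; use whatever release tag the repo actually publishes.
  uses: sonarsource/sonarcloud-github-action@v1.9
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}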
However, the difference is that all of a sudden, after 10 months, it is trying to scan every single file, and after some time it simply runs out of memory. I'll post the logs (I'm only including a few lines so I can redact file names I don't want public, but you can trust from the file counts that it has been scanning every file):
INFO: Sensor JaCoCo XML Report Importer [jacoco] (done) | time=2ms
INFO: Sensor JavaScript/TypeScript analysis [javascript]
INFO: 279 source files to be analyzed
INFO: 2/279 files analyzed, current file: /github/workspace/lambdas/redacted
INFO: 3/279 files analyzed, current file: /github/workspace/lambdas/redacted
INFO: 4/279 files analyzed, current file: /github/workspace/lambdas/redacted
...
INFO: 102/279 files analyzed, current file: /github/workspace/lambdas/redacted
INFO: 102/279 files analyzed, current file: /github/workspace/lambdas/redacted
ERROR:
ERROR: <--- Last few GCs --->
ERROR:
ERROR: [63:0x7fc7972252d0] 878745 ms: Mark-sweep (reduce) 2043.9 (2083.5) -> 2043.5 (2084.0) MB, 214.4 / 0.0 ms (+ 1213.2 ms in 198 steps since start of marking, biggest step 24.7 ms, walltime since start of marking 1776 ms) (average mu = 0.312, current mu = [63:redacted] 881153 ms: Mark-sweep (reduce) 2044.6 (2084.0) -> 2043.4 (2084.3) MB, 2404.9 / 0.0 ms (average mu = 0.158, current mu = 0.001) allocation failure; scavenge might not succeed
ERROR:
ERROR:
ERROR: <--- JS stacktrace --->
ERROR:
ERROR: FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
Notice the "102/279" counts above; that is what I'm referring to when I say it's trying to scan every file. Here's a sample of a successful log where it doesn't scan every file like it does above:
Sensor JaCoCo XML Report Importer [jacoco]
INFO: 'sonar.coverage.jacoco.xmlReportPaths' is not defined. Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml
INFO: No report imported, no coverage information will be imported by JaCoCo XML Report Importer
INFO: Sensor JaCoCo XML Report Importer [jacoco] (done) | time=3ms
INFO: Sensor JavaScript analysis [javascript]
INFO: Creating TypeScript program
INFO: TypeScript configuration file /github/workspace/.scannerwork/.sonartmp/redacted.tmp
INFO: 273 source files to be analyzed
INFO: Creating TypeScript program (done) | time=6881ms
INFO: Starting analysis with current program
INFO: 5/273 files analyzed, current file: /github/workspace/lambdas/redacted
INFO: Analyzed 273 file(s) with current program
INFO: 273/273 source files have been analyzed
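Comparing the two logs, the successful run says "Creating TypeScript program" and then analyzes all 273 files in one pass, while the failing run falls back to analyzing files one by one, which is what eventually blows the heap. If the analyzer has somehow stopped finding our tsconfig, explicitly pointing it at one might restore the program-based path. A sketch of what I might try (the property name sonar.typescript.tsconfigPaths comes from the SonarJS docs; the path is just a guess at where a config would live):

- name: SonarCloud Scan
  if: ${{ github.actor != 'dependabot[bot]' }}
  uses: sonarsource/sonarcloud-github-action@master
  with:
    # Hypothetical path; point this at the repo's actual tsconfig file(s).
    args: >
      -Dsonar.typescript.tsconfigPaths=tsconfig.json
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}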
I haven't changed anything since I created this step in our CI/CD 10 months ago. I'm just curious what could have happened and what I can do to resolve it. I'm confused about what the issue is, or even where to start figuring out how to fix it. Google hasn't turned up anything from the last week or month in the results I looked at, so I don't think it's something widespread. I can't imagine what could have changed, though, since we've been using this for so long with no issues.
I have tried letting it just run. I thought maybe a cache got wiped, or something happened that cleared previous data somehow, so running it once might make future Actions fast again. But when I tried this, the second Action took just as long and showed the same behavior.
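In the meantime, the only stopgap I can think of is giving the analyzer's Node process more heap so the per-file scan can at least finish. A sketch, assuming the SonarJS property sonar.javascript.node.maxspace (value in MB) is still the right knob:

- name: SonarCloud Scan
  if: ${{ github.actor != 'dependabot[bot]' }}
  uses: sonarsource/sonarcloud-github-action@master
  with:
    # 4096 MB is an arbitrary bump over the ~2 GB heap limit visible in the GC log.
    args: >
      -Dsonar.javascript.node.maxspace=4096
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

That would only treat the symptom, though; it wouldn't explain why the scanner switched from program-based analysis to scanning every file individually.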