Sonar-scanner killed when analyzing large .sql file with 100k+ LOC

Hi Everyone!

I have the same issue as mentioned here; I still get the following error despite increasing the memory.

18:52:55.014 INFO: Sensor PL/SQL Sensor [plsql]

18:52:55.042 WARN: The Data Dictionary is not configured for PLSQL analyzer which prevents rule(s) S3641, S3921, S3618, S3651 to raise issues. See https://sonarcloud.io/documentation/analysis/languages/plsql/

18:52:55.049 INFO: 1 source files to be analyzed

18:52:56.091 DEBUG: 'wp-theme/trullio.sql' generated metadata with charset 'UTF-8'

18:53:05.049 INFO: 0/1 files analyzed so far, currently analyzing: /opt/atlassian/pipelines/agent/build/wp-theme/trullio.sql

18:53:15.049 INFO: 0/1 files analyzed so far, currently analyzing: /opt/atlassian/pipelines/agent/build/wp-theme/trullio.sql

18:53:25.050 INFO: 0/1 files analyzed so far, currently analyzing: /opt/atlassian/pipelines/agent/build/wp-theme/trullio.sql

18:53:35.050 INFO: 0/1 files analyzed so far, currently analyzing: /opt/atlassian/pipelines/agent/build/wp-theme/trullio.sql

18:53:45.050 INFO: 0/1 files analyzed so far, currently analyzing: /opt/atlassian/pipelines/agent/build/wp-theme/trullio.sql

18:53:55.050 INFO: 0/1 files analyzed so far, currently analyzing: /opt/atlassian/pipelines/agent/build/wp-theme/trullio.sql

18:54:05.050 INFO: 0/1 files analyzed so far, currently analyzing: /opt/atlassian/pipelines/agent/build/wp-theme/trullio.sql

/usr/bin/run-scanner.sh: line 24: 11 Killed sonar-scanner "${ALL_ARGS[@]}" 2>&1

12 Done | tee "${SCANNER_REPORT}"

✖ SonarCloud analysis failed.

This is my Bitbucket pipeline:

image: node:14.5.0
clone:
  depth: full

definitions:

  services:
    docker:
      memory: 2048

  steps:
    - step: &build-test-sonarcloud
        size: 2x
        caches:
          - node
        script:
          # build and test
          - pipe: sonarsource/sonarcloud-scan:1.2.0
            variables:
              DEBUG: 'true'
              SONAR_SCANNER_OPTS: -Xmx2048m
              SONAR_TOKEN: ${SONAR_TOKEN}
              EXTRA_ARGS: -Dsonar.host.url=https://sonarcloud.io
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.4
            variables:
              DEBUG: 'true'
              SONAR_TOKEN: ${SONAR_TOKEN}
              EXTRA_ARGS: -Dsonar.host.url=https://sonarcloud.io


pipelines:
   default:
     - step: *build-test-sonarcloud
     - step: *check-quality-gate-sonarcloud

Kindly provide some insights!

@janos This is my pipeline. I increased the memory to 3GB and it still fails with the same error. Is there any way to increase it further? I think 4GB is the default maximum you can go up to.

image: node:14.5.0
clone:
  depth: full

definitions:

  services:
    docker:
      memory: 3072

  steps:
    - step: &build-test-sonarcloud
        size: 2x
        caches:
          - node
        script:
          # build and test
          - pipe: sonarsource/sonarcloud-scan:1.2.0
            variables:
              DEBUG: 'true'
              SONAR_SCANNER_OPTS: -Xmx3072m
              SONAR_TOKEN: ${SONAR_TOKEN}
              EXTRA_ARGS: -Dsonar.host.url=https://sonarcloud.io
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.4
            variables:
              DEBUG: 'true'
              SONAR_SCANNER_OPTS: -Xmx1024m
              SONAR_TOKEN: ${SONAR_TOKEN}
              EXTRA_ARGS: -Dsonar.host.url=https://sonarcloud.io


pipelines:
   default:
     - step: *build-test-sonarcloud
     - step: *check-quality-gate-sonarcloud

I think you can still go higher, up to 7G.
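For reference, here is a minimal sketch of how that could look in the pipeline. It assumes a 2x step and that the Bitbucket limit for service-container memory on 2x steps is around 7128 MB (which is presumably where the "7G" above comes from; check the Bitbucket Pipelines documentation for the exact ceiling), and it keeps the scanner's JVM heap below the container memory so the kernel does not kill the process:

definitions:

  services:
    docker:
      memory: 7128            # assumed near the documented ceiling for 2x steps

  steps:
    - step: &build-test-sonarcloud
        size: 2x              # doubles the memory available to the step and its services
        script:
          - pipe: sonarsource/sonarcloud-scan:1.2.0
            variables:
              SONAR_SCANNER_OPTS: -Xmx6144m   # leave headroom below the 7128 MB container limit
              SONAR_TOKEN: ${SONAR_TOKEN}
              EXTRA_ARGS: -Dsonar.host.url=https://sonarcloud.io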

How many files do you have in this project, approximately, and what languages are in it? If you have a lot of files, it can be understandable that a lot of memory is required. The memory needs also depend on the languages analyzed. Based on basic statistics about the number of files per language, I can ask the language analyzer teams whether the memory needs look suspicious.

Hi @janos, the analysis fails on a .sql file (trullio.sql in the error logs above) that has 100,000+ LOC. When I exclude that particular file from the analysis, the scanner passes without any memory crashes.

Do you see any way to overcome this?

The languages used in this project are JavaScript, CSS, PHP, TypeScript

This is my pipeline:

clone:
  depth: full

definitions:

  services:
    docker:
      memory: 3072

  steps:
    - step: &build-test-sonarcloud
        size: 2x
        caches:
          - node
        script:
          - pipe: sonarsource/sonarcloud-scan:1.2.0
            variables:
              DEBUG: 'true'
              SONAR_SCANNER_OPTS: -Xmx7168m
              SONAR_TOKEN: ${SONAR_TOKEN}
              EXTRA_ARGS: -Dsonar.host.url=https://sonarcloud.io #-Dsonar.exclusions=**/trullio.sql
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.4
            variables:
              DEBUG: 'true'
              SONAR_SCANNER_OPTS: -Xmx1024m
              SONAR_TOKEN: ${SONAR_TOKEN}
              EXTRA_ARGS: -Dsonar.host.url=https://sonarcloud.io

pipelines:
   default:
     - step: *build-test-sonarcloud
     - step: *check-quality-gate-sonarcloud

Hi,

Do you really want to analyze that file?

  • Is that file part of the project you maintain, or is it part of a dependency of your project? We usually advise excluding third-party code (a minimal exclusion sketch follows this list).
  • I may have found the file on GitHub. It's a SQL file which contains mainly data (INSERT statements) and almost no logic. It's quite large (24.3 MB), and it targets MySQL, for which we don't have an analyzer. Our PL/SQL analyzer may be able to parse it, but it will almost certainly not find anything interesting in this kind of file.
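For completeness, a minimal sketch of what the exclusion could look like, assuming the path wp-theme/trullio.sql seen in the logs (the same pattern that is commented out in the pipeline above). Either pass it through the pipe's EXTRA_ARGS:

          - pipe: sonarsource/sonarcloud-scan:1.2.0
            variables:
              SONAR_TOKEN: ${SONAR_TOKEN}
              # exclude the large data-only SQL file from analysis
              EXTRA_ARGS: -Dsonar.host.url=https://sonarcloud.io -Dsonar.exclusions=**/trullio.sql

or set the same property in a sonar-project.properties file at the repository root:

sonar.exclusions=**/trullio.sql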

Hi @pynicolas,

Thank you for your insights; they were very helpful. We have decided not to analyze this file. Kindly close this ticket!


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.