NextJS, SonarCloud and Bitbucket scans crashing


We are running a NextJS site made up of many microservices laid out in a tree:

    apps/app2 (etc etc)
    libs/otherstuff (etc etc)

Each directory has its own tsconfig.json and src directory.

  • ALM + CI used: Bitbucket Cloud
  • Scanner command:

        - step: &sonarScanLibsShared
            name: Sonar Scan Libs - Shared
            size: 2x
            caches:
              - sonar
            script:
              - pipe: sonarsource/sonarcloud-scan:1.4.0
                variables:
                  SONAR_SCANNER_OPTS: -Xmx256m
                  EXTRA_ARGS: '-Dsonar.projectBaseDir=libs/shared -Dsonar.javascript.node.maxspace=7168'

The ‘active’ part of our configuration file is:

    sonar.test.inclusions=**/*.spec.{js,jsx,ts,tsx},**/*.test.{js,jsx,ts,tsx},**/*.{story,stories}.{js,jsx,ts,tsx}
  • Languages of the repository - NextJS / React
  • Error observed:

When running the scanner, a single file repeats in the log and then a crash follows most of the time (but not always), e.g.:

INFO: 124/203 files analyzed, current file: /opt/atlassian/pipelines/agent/build/libs/marketing-site/util-api/src/lib/marketing-site-util-api.spec.ts
INFO: 124/203 files analyzed, current file: /opt/atlassian/pipelines/agent/build/libs/marketing-site/util-api/src/lib/marketing-site-util-api.spec.ts
ERROR: eslint-bridge Node.js process is unresponsive. This is most likely caused by process running out of memory. Consider setting sonar.javascript.node.maxspace to higher value (e.g. 4096).
ERROR: Failure during analysis, Node.js command to start eslint-bridge was: node --max-old-space-size=7168 /opt/atlassian/pipelines/agent/build/libs/marketing-site/.scannerwork/.sonartmp/eslint-bridge-bundle/package/bin/server 45083 /opt/atlassian/pipelines/agent/build/libs/marketing-site/.scannerwork true false /opt/atlassian/pipelines/agent/build/libs/marketing-site/.scannerwork/.sonartmp/eslint-bridge-bundle/package/custom-rules980193960290417194/package
java.lang.IllegalStateException: eslint-bridge is unresponsive
	at org.sonar.plugins.javascript.eslint.EslintBridgeServerImpl.request(

But the file it “trips” on is not always the same between runs.

This looks like a memory issue, but the codebase isn’t that big: the whole apps directory is 114 files and the whole libs directory is about 1,000. They’re all small files; the only one over 0.5 MB is package-lock.json.
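(If it helps anyone compare notes: a throwaway pipeline step along these lines is how we’d audit what the scanner actually sees — the paths and the 0.5 MB threshold are just examples, adjust to your layout:)

    - step:
        name: Audit scanner input (hypothetical helper step)
        script:
          # Count candidate source files under apps/ and libs/
          - find apps libs -type f \( -name '*.ts' -o -name '*.tsx' \) | wc -l
          # List anything over 0.5 MB that might bloat the analysis
          - find apps libs -type f -size +512k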

We’ve tried targeting sections via the monorepo feature (so the command above is just for libs/marketing-site, for example).

We’ve also given the process the maximum memory Bitbucket will allow (7 GB for the Node space, with 1 GB shared between the scanner and the general container) - but still no dice.
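(For reference, our understanding of the budget on a 2x step — the 8192 MB total is Bitbucket Cloud’s limit for double-size steps; the split is just our configuration:)

    # 2x step => 8192 MB for the whole build container
    # sonar.javascript.node.maxspace=7168  -> heap for the eslint-bridge Node.js process
    # remaining ~1024 MB                   -> scanner JVM (-Xmx), OS and everything else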

Any ideas what we can do to fix it?

I am experiencing exactly the same issue on a NextJS project using Bitbucket Pipelines.

bitbucket-pipelines.yml (snippet)

    - step: &code_quality_scan
        size: 2x
        # clone:
        #   depth: full    # SonarCloud scanner needs the full history to assign issues properly
        name: Analyse code quality with SonarCloud
        # caches:
        #   - sonar
        services:
          - docker
        script:
          - pipe: sonarsource/sonarcloud-scan:1.4.0
            variables:
              SONAR_SCANNER_OPTS: -Xmx7168m
              EXTRA_ARGS: '-Dsonar.sources=. -Dsonar.inclusions=**/*.tsx -Dsonar.javascript.node.maxspace=7168'
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.6

    definitions:
      services:
        docker:
          memory: 7168

SonarCloud output (snippet)

INFO: 125/352 files analyzed, current file: /opt/atlassian/pipelines/agent/build/features/x/components/x-view.tsx
INFO: 125/352 files analyzed, current file: /opt/atlassian/pipelines/agent/build/features/x/components/x-view.tsx
INFO: 125/352 files analyzed, current file: /opt/atlassian/pipelines/agent/build/features/x/components/x-view.tsx
INFO: Time spent writing ucfgs 18456ms
ERROR: Failure during analysis, Node.js command to start eslint-bridge was: node --max-old-space-size=7168 /opt/atlassian/pipelines/agent/build/.scannerwork/.sonartmp/eslint-bridge-bundle/package/bin/server 37333 /opt/atlassian/pipelines/agent/build/.scannerwork true false /opt/atlassian/pipelines/agent/build/.scannerwork/.sonartmp/eslint-bridge-bundle/package/custom-rules18209903129993414625/package
java.lang.IllegalStateException: eslint-bridge server is not answering
	at org.sonar.plugins.javascript.eslint.AnalysisWithProgram.analyze(
	at org.sonar.plugins.javascript.eslint.AnalysisWithProgram.analyzeProgram(
	at org.sonar.plugins.javascript.eslint.AnalysisWithProgram.analyzeFiles(


Welcome to the community, both of you!

Can one of you share the file the analysis stalls on?



Hey, sorry, I’m not able to share the source for my project as it is for work, but I can tell you that it stalls on completely different files each time, as the original poster also mentioned. The files it has stalled on aren’t excessive in terms of lines of code; we’re talking 100–200 lines. Hope that helps, but sorry I can’t provide much more here.

This is correct: I’ve seen 5 to 10 different files. It’s as if an external resource (RAM?) is running out, and the file in question is simply whatever is being read when it happens. As it happens, the file it choked on in the example I posted was actually a placeholder function: it was 7 lines long, and all 7 were commented out.


Sorry, I misread the OP. I thought it was always stopping on the same file.

So this sounds like a question of resources on your BB build agent.

This might help:


Hi Ann,

Thanks for the response - unfortunately, as you can see from my original post and Adam’s follow-up, we’ve tried that option at both the top and bottom of the range (256 and 7168).

The other option I’ve tried is sonar.javascript.node.maxspace=7168 (with various values from 1 GB to 7 GB) - and although it seems to help us get further, we still crash out regularly with the ESLint error.
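For anyone else following along, our current understanding of the two knobs (happy to be corrected): SONAR_SCANNER_OPTS -Xmx sizes the scanner’s JVM heap, while sonar.javascript.node.maxspace sizes the heap of the eslint-bridge Node.js process, so the two compete for the same container memory. Values below are purely illustrative:

          - pipe: sonarsource/sonarcloud-scan:1.4.0
            variables:
              SONAR_SCANNER_OPTS: -Xmx1024m                          # scanner JVM heap
              EXTRA_ARGS: '-Dsonar.javascript.node.maxspace=4096'    # eslint-bridge (Node.js) heap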

Splitting the scan up into subdirectories (using the monorepo option) helps a bit, in that at least 50% of our codebase now scans cleanly, but we are still hitting issues with some directories blowing the memory limit. I’m wondering if the software is leaking memory, or whether we just need to disable this particular check, as it seems intensely greedy.
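If disabling things becomes the only way out, the least drastic option we can think of is excluding the files that keep blowing up, via the standard sonar.exclusions property — the patterns below are purely illustrative:

    EXTRA_ARGS: '-Dsonar.projectBaseDir=libs/marketing-site -Dsonar.exclusions=**/generated/**,**/*.stories.tsx'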


I’ve flagged this for more expert attention.



Hey Ann, I’ve reached the end of my trial, so I’ve had to stop my testing. It would be great if I could get a fresh trial once we’ve established a resolution - hope that is possible! All the best, Adam

Hi @morph42,

Thanks for reporting this issue; I will try to help troubleshoot it.

How did it go with the monorepo approach? Did you see the same failure? I would expect it to work better for bigger projects.

Could you please post the full debug logs? (You can also send them privately if you are concerned about exposing details of your project.)


Hi Tibor,

Thanks for the response… the monorepo approach half-fixed it, in that some directories now scan cleanly, but others are still crashing.

Can you explain how to generate a debug log (vs the normal logging, which I’ve already quoted)? And how would I send it to you privately?