Sonar scanner fails to create temp files in GitLab

I’m using the following job configuration:

sonarqube:
  stage: deploy
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: ['']
  variables:
    SONAR_USER_HOME: $CI_PROJECT_DIR/.sonar
    GIT_DEPTH: 0
  script:
    - sonar-scanner -Dsonar.qualitygate.wait=true -Dsonar.branch.name=$CI_COMMIT_REF_NAME
  allow_failure: true
  only:
    - master

Here’s the outcome:

$ sonar-scanner -Dsonar.qualitygate.wait=true -Dsonar.branch.name=$CI_COMMIT_REF_NAME
INFO: Scanner configuration file: /opt/sonar-scanner/conf/sonar-scanner.properties
INFO: Project root configuration file: /builds/xxx/sonar-project.properties
INFO: SonarScanner 4.3.0.2102
INFO: Java 11.0.3 AdoptOpenJDK (64-bit)
INFO: Linux 3.10.0-1062.1.1.el7.x86_64 amd64
INFO: User cache: /builds/xxx/.sonar/cache
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 3.681s
INFO: Final Memory: 3M/17M
INFO: ------------------------------------------------------------------------
ERROR: Error during SonarScanner execution
org.sonarsource.scanner.api.internal.ScannerException: Unable to execute SonarScanner analysis
	at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:85)
	at java.base/java.security.AccessController.doPrivileged(Native Method)
	at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:74)
	at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:70)
	at org.sonarsource.scanner.api.EmbeddedScanner.doStart(EmbeddedScanner.java:185)
	at org.sonarsource.scanner.api.EmbeddedScanner.start(EmbeddedScanner.java:123)
	at org.sonarsource.scanner.cli.Main.execute(Main.java:73)
	at org.sonarsource.scanner.cli.Main.main(Main.java:61)
Caused by: java.lang.IllegalStateException: Fail to create temp file in /builds/xxx/.sonar/cache/_tmp
	at org.sonarsource.scanner.api.internal.cache.FileCache.newTempFile(FileCache.java:138)
	at org.sonarsource.scanner.api.internal.cache.FileCache.get(FileCache.java:83)
	at org.sonarsource.scanner.api.internal.JarDownloader.lambda$getScannerEngineFiles$0(JarDownloader.java:60)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
	at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
	at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
	at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
	at org.sonarsource.scanner.api.internal.JarDownloader.getScannerEngineFiles(JarDownloader.java:61)
	at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:53)
	at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:76)
	... 7 more
Caused by: java.nio.file.AccessDeniedException: /builds/xxx/.sonar/cache/_tmp/fileCache17135943193194375565.tmp
	at java.base/sun.nio.fs.UnixException.translateToIOException(Unknown Source)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
	at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(Unknown Source)
	at java.base/java.nio.file.Files.newByteChannel(Unknown Source)
	at java.base/java.nio.file.Files.createFile(Unknown Source)
	at java.base/java.nio.file.TempFileHelper.create(Unknown Source)
	at java.base/java.nio.file.TempFileHelper.createTempFile(Unknown Source)
	at java.base/java.nio.file.Files.createTempFile(Unknown Source)
	at org.sonarsource.scanner.api.internal.cache.FileCache.newTempFile(FileCache.java:136)
	... 19 more
ERROR: 
ERROR: Re-run SonarScanner using the -X switch to enable full debug logging.

Hi,

Does the user running the analysis job have permissions to that directory?

 
Ann

All other jobs work just fine. I have even created that directory and tried to set permissions:

$ mkdir -p .sonar/cache/_tmp
$ find .sonar -type d -exec chmod 777 {} \;
chmod: changing permissions of '.sonar': Operation not permitted
chmod: changing permissions of '.sonar/_tmp': Operation not permitted
chmod: changing permissions of '.sonar/cache': Operation not permitted

As you can see, it failed. I have this directory cached for the job:

cache:
  key: $CI_COMMIT_REF_NAME-sonar
  paths:
    - .sonar
    - .scannerwork
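A quick way to check whether cache ownership is the culprit is to print the job’s user and the numeric owners of the restored cache entries before running the scanner. A sketch (the paths are taken from the config above):

```shell
# Diagnostic sketch to run in the job's script before sonar-scanner:
# print which user the job runs as, and the numeric owner/permissions of the
# restored cache directories (a mismatch would explain the AccessDeniedException).
id
ls -ldn .sonar .sonar/cache .sonar/cache/_tmp 2>/dev/null || true
```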

I deleted the cache and it seems to work now.

Unfortunately, it got stuck and was killed by a timeout:

INFO: Scanner configuration file: /opt/sonar-scanner/conf/sonar-scanner.properties
INFO: Project root configuration file: /builds/xxx/sonar-project.properties
INFO: SonarScanner 4.3.0.2102
INFO: Java 11.0.3 AdoptOpenJDK (64-bit)
INFO: Linux 3.10.0-1062.1.1.el7.x86_64 amd64
INFO: User cache: /builds/xxx/.sonar/cache
INFO: Scanner configuration file: /opt/sonar-scanner/conf/sonar-scanner.properties
INFO: Project root configuration file: /builds/xxx/sonar-project.properties
INFO: Analyzing on SonarQube server 8.2.0
INFO: Default locale: "en_US", source code encoding: "US-ASCII" (analysis is platform dependent)
INFO: Load global settings
INFO: Load global settings (done) | time=1730ms
INFO: Server id: XXX
INFO: User cache: /builds/xxx/.sonar/cache
INFO: Load/download plugins
INFO: Load plugins index
INFO: Load plugins index (done) | time=326ms

Our custom Docker image with SonarQube usually completes in around 5-10 minutes.

My guess is that it could be related to this line in the Dockerfile:

USER scanner-cli

which prevented the cache from being used.

Hi,

We try to keep it to one issue per thread. Perhaps you’d like to create a new thread for the timeout?

 
Ann

This thread is about caching; the timeout issue is out of scope here. Here’s the issue for the timeout: https://github.com/SonarSource/sonar-scanner-cli-docker/issues/51

Hi,

Then I guess this thread is closed?

 
Ann

I don’t think so. The cache is shared and is supposed to be writable and editable by any job. Resetting the cache all the time is not a solution; it defeats its purpose… so please check how the SQ image accesses it and why it fails. In my case the cache was created before I switched to the official SQ image.

Hi,

You had one job that was failing with permission errors. You deleted the cache, so the permissions were reset when the job ran again, and it succeeded. Did the permissions problem come back?

 
Ann

It did come back on jobs that picked up previous caches. I suggest checking permissions and failing with a more self-explanatory error message; the current error is quite misleading. Also, is this user change really necessary?

The issue has reproduced again. The scanner consistently fails to use GitLab’s cache.

Hi @kirill-konshin,
I can’t reproduce this with either the on-premise or the cloud version of GitLab, so I’ll need more details on your setup. In the other thread you mentioned you are using on-premise GitLab. Are you starting your gitlab-runners in user mode or system mode?

We run Docker images that execute the jobs, so each container has its own set of permissions, not tied to the host. The GitLab runner is running as root.

Ah, I missed the point where you mentioned that you’ve cleared the cache created by a container from a different image, and that from then on the permission issue was gone but you started to get a timeout problem instead. These are two different issues; we shouldn’t mix them.

  1. The caching problem: the user defined in the image might have trouble accessing a cache that was created by a container from a different image. Since you’ve cleared the cache, I don’t think you will have issues with caching in the future as long as you stay on our images. The problem here is that the second issue you are running into (the timeout) prevents GitLab from saving the new cache, since GitLab only saves it on successful builds. So unless I’m missing something, we now only have to deal with the timeout problem; once we fix it, the first successful analysis will save the cache for subsequent runs.

  2. The timeout issue: it’s not clear from the logs what’s wrong. Can you please run the scanner with sonar.verbose=true, rerun the analysis and post the output? The full config of your job should look like this after adding that parameter:

sonarqube:
  stage: deploy
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: ['']
  variables:
    SONAR_USER_HOME: $CI_PROJECT_DIR/.sonar
    GIT_DEPTH: 0
  script:
    - sonar-scanner -Dsonar.verbose=true -Dsonar.qualitygate.wait=true -Dsonar.branch.name=$CI_COMMIT_REF_NAME
  allow_failure: true
  only:
    - master
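On the first point: if the GitLab instance is recent enough, the cache can also be saved for failing jobs, so a timeout wouldn’t discard it. A sketch, assuming GitLab 13.5+ where cache:when is available:

```yaml
cache:
  key: $CI_COMMIT_REF_NAME-sonar
  when: always        # save the cache even if the job fails (GitLab 13.5+)
  paths:
    - .sonar
    - .scannerwork
```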

The caching problem has reproduced again with the same image; I mentioned it a few posts earlier. From what I can see, GitLab unpacks the cache under a different user, because for some unknown reason your image changes the user. What is that for?

I am running the jobs with the verbose flag and will post the result here.

Got a new error:

22:53:58.856 WARN: Failed to close server
java.net.ConnectException: Failed to connect to localhost/127.0.0.1:46852
	at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:249)
	at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:167)
	at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257)
	at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135)
	at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114)

No timeouts this time.

I will need a bigger log fragment; it’s hard to judge what happened based on a single WARN-level log entry.

The image is used in many different scenarios, not only in GitLab pipelines. Adding USER ... makes sure the container is not run as root.

What is wrong with running as root? It’s a Docker image used for processing source code in a private, controlled environment… it already has access to sensitive data, no matter which user it runs as.

If you insist on using a non-root user, and it really is the reason why caching is not working, then you have to provide one more image that can be used specifically with GitLab. Caching is a must-have feature.
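For what it’s worth, a minimal sketch of such a GitLab-specific wrapper image (an assumption, not an official image; it simply switches back to root so the scanner can write to caches unpacked by a root runner):

```dockerfile
# Hypothetical wrapper image for GitLab CI; not an official SonarSource image.
FROM sonarsource/sonar-scanner-cli:latest
# Run as root so the scanner can write to cache files restored by the root runner.
USER root
```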

I’ve checked the logs for this port:

22:52:52.625 DEBUG: starting eslint-bridge server at port 46852
22:52:52.648 DEBUG: eslint-bridge server is running at port 46852
...
22:53:45.125 ERROR: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
22:53:45.125 INFO: 
22:53:45.125 INFO: <--- Last few GCs --->
22:53:45.125 INFO: 
22:53:45.125 INFO: [104:0x2a4e120]    51276 ms: Scavenge 1390.6 (1423.8) -> 1390.3 (1424.8) MB, 8.1 / 0.0 ms  (average mu = 0.157, current mu = 0.022) allocation failure 
22:53:45.125 INFO: [104:0x2a4e120]    51911 ms: Mark-sweep 1391.0 (1424.8) -> 1390.5 (1424.3) MB, 628.7 / 0.0 ms  (average mu = 0.103, current mu = 0.036) allocation failure scavenge might not succeed
22:53:45.125 INFO: [104:0x2a4e120]    51922 ms: Scavenge 1391.3 (1424.3) -> 1390.9 (1425.3) MB, 5.8 / 0.0 ms  (average mu = 0.103, current mu = 0.036) allocation failure 
22:53:45.125 INFO: 
22:53:45.125 INFO: 
22:53:45.125 INFO: <--- JS stacktrace --->
22:53:45.125 INFO: 
22:53:45.125 INFO: ==== JS stack trace =========================================
22:53:45.126 INFO: 
22:53:45.126 INFO:     0: ExitFrame [pc: 0x32eddb85be1d]
22:53:45.126 INFO:     1: StubFrame [pc: 0x32eddb80d40b]
22:53:45.126 INFO:     2: ConstructFrame [pc: 0x32eddb80cfa3]
22:53:45.126 INFO: Security context: 0x0bc55631e6e9 <JSObject>
22:53:45.126 INFO:     3: parseParameter(aka parseParameter) [0x13dd2f07ca39] [/opt/nodejs/lib/node_modules/typescript/lib/typescript.js:~19244] [pc=0x32eddbbd155e](this=0x168ae12826f1 <undefined>)
22:53:45.126 INFO:     4: parseDelimitedList(aka parseDelimitedList) [0x13dd2f07c339] [/opt/nodejs/lib/node_modules/typ...
22:53:45.126 INFO: 
22:53:45.126 ERROR:  1: 0x8fa0c0 node::Abort() [node]
22:53:45.127 ERROR:  2: 0x8fa10c  [node]
22:53:45.127 ERROR:  3: 0xb0026e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
22:53:45.128 ERROR:  4: 0xb004a4 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
22:53:45.128 ERROR:  5: 0xef49b2  [node]
22:53:45.129 ERROR:  6: 0xef4ab8 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [node]
22:53:45.130 ERROR:  7: 0xf00b92 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
22:53:45.130 ERROR:  8: 0xf014c4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
22:53:45.131 ERROR:  9: 0xf04131 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
22:53:45.131 ERROR: 10: 0xecd5b4 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node]
22:53:45.132 ERROR: 11: 0x116d73e v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node]
22:53:45.132 ERROR: 12: 0x32eddb85be1d 
22:53:55.760 INFO: 49/256 files analyzed, current file: core/src/core/ui/CTADropdown.tsx
22:53:56.804 ERROR: Failed to get response while analyzing core/src/core/ui/CTADropdown.tsx
java.net.ConnectException: Failed to connect to localhost/127.0.0.1:46852
	at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:249)
	at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:167)
	at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257)

This error happens further down the pipeline, during the analysis, so I think we are making some progress. It seems Node.js is running out of memory now. Can you check if adding one more variable in the variables section of gitlab-ci.yml helps with this?

...
  variables:
    NODE_OPTIONS: "--max-old-space-size=8192"
    SONAR_USER_HOME: $CI_PROJECT_DIR/.sonar
    GIT_DEPTH: 0
...

Same thing:

19:44:15.615 INFO: 168/256 files analyzed, current file: core/src/core/ui/list/PureList.tsx
19:44:18.718 ERROR: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
19:44:18.718 INFO: 
19:44:18.719 INFO: <--- Last few GCs --->
19:44:18.719 INFO: 
19:44:18.719 INFO: [103:0x3e10150]   285353 ms: Mark-sweep 8152.9 (8330.2) -> 8152.7 (8330.7) MB, 6048.2 / 0.0 ms  (average mu = 0.115, current mu = 0.012) allocation failure scavenge might not succeed
19:44:18.719 INFO: [103:0x3e10150]   290842 ms: Mark-sweep 8153.4 (8330.7) -> 8152.9 (8330.2) MB, 5478.5 / 0.0 ms  (average mu = 0.059, current mu = 0.002) allocation failure scavenge might not succeed
19:44:18.719 INFO: 
19:44:18.719 INFO: 
19:44:18.719 INFO: <--- JS stacktrace --->
19:44:18.719 INFO: 
19:44:18.719 INFO: ==== JS stack trace =========================================
19:44:18.719 INFO: 
19:44:18.719 INFO:     0: ExitFrame [pc: 0x928b6c5be1d]
19:44:18.719 INFO: Security context: 0x0a553d59e6e9 <JSObject>
19:44:18.719 INFO:     1: split [0xa553d5906c9](this=0x0bc100f51a31 <String[79]: builds/xxx/core/src/core/ui/form/web-modules-test.d.ts>,0x2d48df8b1c59 <String[1]: />)
19:44:18.719 INFO:     2: toPath(aka toPath) [0x3b24d67661f1] [/opt/nodejs/lib/node_modules/typescript/lib/typescript.js:~96669] [pc=0x928b805d154](this=0x09f3fce026f1 <undefined>,fileName=0x0bc100f51951 <S...
19:44:18.719 INFO: 
19:44:18.720 ERROR:  1: 0x8fa0c0 node::Abort() [node]
19:44:18.720 ERROR:  2: 0x8fa10c  [node]
19:44:18.720 ERROR:  3: 0xb0026e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
19:44:18.721 ERROR:  4: 0xb004a4 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
19:44:18.721 ERROR:  5: 0xef49b2  [node]
19:44:18.722 ERROR:  6: 0xef4ab8 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [node]
19:44:18.723 ERROR:  7: 0xf00b92 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
19:44:18.723 ERROR:  8: 0xf014c4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
19:44:18.724 ERROR:  9: 0xf04131 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
19:44:18.724 ERROR: 10: 0xed3d7b v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [node]
19:44:18.725 ERROR: 11: 0xed53dd v8::internal::Factory::NewProperSubString(v8::internal::Handle<v8::internal::String>, int, int) [node]
19:44:18.725 ERROR: 12: 0x11ae23c v8::internal::Runtime_StringSplit(int, v8::internal::Object**, v8::internal::Isolate*) [node]
19:44:18.725 ERROR: 13: 0x928b6c5be1d 
19:44:25.616 INFO: 168/256 files analyzed, current file: core/src/core/ui/list/PureList.tsx
19:44:35.618 INFO: 168/256 files analyzed, current file: core/src/core/ui/list/PureList.tsx
19:44:43.296 ERROR: eslint-bridge Node.js process is unresponsive. This is most likely caused by process running out of memory. Consider setting sonar.javascript.node.maxspace to higher value (e.g. 4096).
19:44:43.300 ERROR: Failure during analysis, Node.js command to start eslint-bridge was: node /builds/xxx/.scannerwork/.sonartmp/eslint-bridge-bundle/package/bin/server 39227
java.lang.IllegalStateException: eslint-bridge is unresponsive
	at org.sonar.plugins.javascript.eslint.EslintBridgeServerImpl.request(EslintBridgeServerImpl.java:202)
	at org.sonar.plugins.javascript.eslint.EslintBridgeServerImpl.analyzeTypeScript(EslintBridgeServerImpl.java:186)
	at org.sonar.plugins.javascript.eslint.TypeScriptSensor.analyze(TypeScriptSensor.java:153)
	at org.sonar.plugins.javascript.eslint.TypeScriptSensor.analyzeFilesWithTsConfig(TypeScriptSensor.java:141)
	at org.sonar.plugins.javascript.eslint.TypeScriptSensor.analyzeFiles(TypeScriptSensor.java:121)
	at org.sonar.plugins.javascript.eslint.AbstractEslintSensor.execute(AbstractEslintSensor.java:107)
	at org.sonar.plugins.javascript.eslint.TypeScriptSensor.execute(TypeScriptSensor.java:53)
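
The error above names the relevant setting itself. A sketch of the job’s script line with sonar.javascript.node.maxspace added (the 4096 value comes from the error message’s own hint):

```yaml
script:
  - sonar-scanner -Dsonar.verbose=true -Dsonar.qualitygate.wait=true -Dsonar.javascript.node.maxspace=4096 -Dsonar.branch.name=$CI_COMMIT_REF_NAME
```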