$ sonar-scanner -Dsonar.qualitygate.wait=true -Dsonar.branch.name=$CI_COMMIT_REF_NAME
INFO: Scanner configuration file: /opt/sonar-scanner/conf/sonar-scanner.properties
INFO: Project root configuration file: /builds/xxx/sonar-project.properties
INFO: SonarScanner 4.3.0.2102
INFO: Java 11.0.3 AdoptOpenJDK (64-bit)
INFO: Linux 3.10.0-1062.1.1.el7.x86_64 amd64
INFO: User cache: /builds/web-modules/web-modules-core/.sonar/cache
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 3.681s
INFO: Final Memory: 3M/17M
INFO: ------------------------------------------------------------------------
ERROR: Error during SonarScanner execution
org.sonarsource.scanner.api.internal.ScannerException: Unable to execute SonarScanner analysis
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:85)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:74)
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:70)
at org.sonarsource.scanner.api.EmbeddedScanner.doStart(EmbeddedScanner.java:185)
at org.sonarsource.scanner.api.EmbeddedScanner.start(EmbeddedScanner.java:123)
at org.sonarsource.scanner.cli.Main.execute(Main.java:73)
at org.sonarsource.scanner.cli.Main.main(Main.java:61)
Caused by: java.lang.IllegalStateException: Fail to create temp file in /builds/xxx/.sonar/cache/_tmp
at org.sonarsource.scanner.api.internal.cache.FileCache.newTempFile(FileCache.java:138)
at org.sonarsource.scanner.api.internal.cache.FileCache.get(FileCache.java:83)
at org.sonarsource.scanner.api.internal.JarDownloader.lambda$getScannerEngineFiles$0(JarDownloader.java:60)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
at org.sonarsource.scanner.api.internal.JarDownloader.getScannerEngineFiles(JarDownloader.java:61)
at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:53)
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:76)
... 7 more
Caused by: java.nio.file.AccessDeniedException: /builds/xxx/.sonar/cache/_tmp/fileCache17135943193194375565.tmp
at java.base/sun.nio.fs.UnixException.translateToIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(Unknown Source)
at java.base/java.nio.file.Files.newByteChannel(Unknown Source)
at java.base/java.nio.file.Files.createFile(Unknown Source)
at java.base/java.nio.file.TempFileHelper.create(Unknown Source)
at java.base/java.nio.file.TempFileHelper.createTempFile(Unknown Source)
at java.base/java.nio.file.Files.createTempFile(Unknown Source)
at org.sonarsource.scanner.api.internal.cache.FileCache.newTempFile(FileCache.java:136)
... 19 more
ERROR:
ERROR: Re-run SonarScanner using the -X switch to enable full debug logging.
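The `AccessDeniedException` above is the classic symptom of a cache directory restored with ownership or mode bits that the scanner's user cannot write to. A minimal sketch of the failure mode and one workaround (paths here are illustrative, not taken from the job above — in a real job the `chmod` would go in `before_script` ahead of `sonar-scanner`):

```shell
#!/bin/sh
# Simulate a restored cache dir that the current user cannot write into,
# then restore write permission -- the same shape as fixing .sonar/cache.
cache="$(mktemp -d)/cache/_tmp"
mkdir -p "$cache"
chmod 555 "$cache"                 # read-only: creating a temp file here fails
echo "before: $(stat -c '%a' "$cache")"
chmod -R u+rwX "$cache"            # workaround: give the owner write access back
echo "after:  $(stat -c '%a' "$cache")"
```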
I don’t think so. The cache is shared and is supposed to be writable and editable by any job. Resetting the cache all the time is not the solution; it defeats its purpose… so please check how and why the SQ image accesses it. In my case the cache was created before I switched to the official SQ image.
You had one job that was failing with permission errors. You deleted the file, so the permissions were reset when the job ran again, and it succeeded. Did the permissions problem come back?
It did come back on jobs that picked up previous caches. I suggest checking permissions and failing with a more self-explanatory error message; the current error is quite misleading. Also, is this user change really necessary?
Hi @kirill-konshin,
I can’t reproduce this with either the on-premise or the cloud version of GitLab, so I’ll need more details on your setup. In the other thread you mentioned you are using on-premise GitLab. Are you starting your gitlab-runners in user mode or system mode?
Ah, I missed the point where you mentioned you had cleared the cache created by a container from a different image; from that point the permission issue was gone, but you started to get a timeout problem instead. These are two different issues, and we shouldn’t mix them.
The caching problem: the user defined in the image might have trouble accessing a cache that was created by a container from a different image. Since you’ve cleared the cache, I don’t think you will have caching issues in the future as long as you stay on our images. The complication here is that the second issue you are running into (the timeout) prevents GitLab from saving the new cache, since GitLab only does so on successful builds. So unless I’m missing something, we now only have to deal with the timeout problem, and once we fix it, the first successful analysis will save the cache for subsequent runs.
The timeout issue: it’s not clear from the logs what’s wrong. Can you please run the scanner with sonar.verbose=true, rerun the analysis and post the output? The full config of your job should look like this after adding that parameter:
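A sketch of what such a job might look like (the job name, image tag, and cache key below are placeholders I’ve assumed, not taken from this thread; only the `sonar-scanner` flags come from the log above plus the requested `sonar.verbose=true`):

```yaml
sonarqube-check:
  image: sonarsource/sonar-scanner-cli:latest
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - sonar-scanner -Dsonar.qualitygate.wait=true -Dsonar.branch.name=$CI_COMMIT_REF_NAME -Dsonar.verbose=true
```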
The caching problem has reproduced again with the same image; I mentioned it a few posts earlier. From what I can see, GitLab unpacks the cache under a different user, because for some unknown reason your image changes the user. What’s that for?
I am running jobs with the verbose flag; I will post the result here.
22:53:58.856 WARN: Failed to close server
java.net.ConnectException: Failed to connect to localhost/127.0.0.1:46852
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:249)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:167)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114)
What is wrong with running as root? It’s a Docker image used for processing source code in a private, controlled environment… it already has access to sensitive data, no matter which user it runs as.
If you insist on using a non-root user, and it really is the reason caching is not working, then you should provide one more image that can be used specifically with GitLab. Caching is a must-have feature.
I’ve checked the logs for this port:
22:52:52.625 DEBUG: starting eslint-bridge server at port 46852
22:52:52.648 DEBUG: eslint-bridge server is running at port 46852
...
22:53:45.125 ERROR: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
22:53:45.125 INFO:
22:53:45.125 INFO: <--- Last few GCs --->
22:53:45.125 INFO:
22:53:45.125 INFO: [104:0x2a4e120] 51276 ms: Scavenge 1390.6 (1423.8) -> 1390.3 (1424.8) MB, 8.1 / 0.0 ms (average mu = 0.157, current mu = 0.022) allocation failure
22:53:45.125 INFO: [104:0x2a4e120] 51911 ms: Mark-sweep 1391.0 (1424.8) -> 1390.5 (1424.3) MB, 628.7 / 0.0 ms (average mu = 0.103, current mu = 0.036) allocation failure scavenge might not succeed
22:53:45.125 INFO: [104:0x2a4e120] 51922 ms: Scavenge 1391.3 (1424.3) -> 1390.9 (1425.3) MB, 5.8 / 0.0 ms (average mu = 0.103, current mu = 0.036) allocation failure
22:53:45.125 INFO:
22:53:45.125 INFO:
22:53:45.125 INFO: <--- JS stacktrace --->
22:53:45.125 INFO:
22:53:45.125 INFO: ==== JS stack trace =========================================
22:53:45.126 INFO:
22:53:45.126 INFO: 0: ExitFrame [pc: 0x32eddb85be1d]
22:53:45.126 INFO: 1: StubFrame [pc: 0x32eddb80d40b]
22:53:45.126 INFO: 2: ConstructFrame [pc: 0x32eddb80cfa3]
22:53:45.126 INFO: Security context: 0x0bc55631e6e9 <JSObject>
22:53:45.126 INFO: 3: parseParameter(aka parseParameter) [0x13dd2f07ca39] [/opt/nodejs/lib/node_modules/typescript/lib/typescript.js:~19244] [pc=0x32eddbbd155e](this=0x168ae12826f1 <undefined>)
22:53:45.126 INFO: 4: parseDelimitedList(aka parseDelimitedList) [0x13dd2f07c339] [/opt/nodejs/lib/node_modules/typ...
22:53:45.126 INFO:
22:53:45.126 ERROR: 1: 0x8fa0c0 node::Abort() [node]
22:53:45.127 ERROR: 2: 0x8fa10c [node]
22:53:45.127 ERROR: 3: 0xb0026e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
22:53:45.128 ERROR: 4: 0xb004a4 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
22:53:45.128 ERROR: 5: 0xef49b2 [node]
22:53:45.129 ERROR: 6: 0xef4ab8 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [node]
22:53:45.130 ERROR: 7: 0xf00b92 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
22:53:45.130 ERROR: 8: 0xf014c4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
22:53:45.131 ERROR: 9: 0xf04131 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
22:53:45.131 ERROR: 10: 0xecd5b4 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node]
22:53:45.132 ERROR: 11: 0x116d73e v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node]
22:53:45.132 ERROR: 12: 0x32eddb85be1d
22:53:55.760 INFO: 49/256 files analyzed, current file: core/src/core/ui/CTADropdown.tsx
22:53:56.804 ERROR: Failed to get response while analyzing core/src/core/ui/CTADropdown.tsx
java.net.ConnectException: Failed to connect to localhost/127.0.0.1:46852
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:249)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:167)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257)
This error happens further down the pipeline, during the analysis, so I think we are making progress. It seems Node is running out of memory now. Can you check whether adding one more variable in the variables section of gitlab-ci.yml helps with this?
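The specific variable isn’t shown in this thread, so the following is only a guess at the shape of the fix: `NODE_OPTIONS` is a documented way to raise the Node.js heap limit, which is what the “JavaScript heap out of memory” crash from the eslint-bridge process above points at (this assumes the bridge’s Node process inherits the job’s environment). SonarJS also exposes an analysis property, `sonar.javascript.node.maxspace`, for the same purpose.

```yaml
variables:
  # Assumed mitigation, not confirmed by this thread: raise the V8 old-space
  # limit for any Node process spawned in the job (value in MB).
  NODE_OPTIONS: "--max-old-space-size=4096"
```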