I searched online and found an article explaining how to increase the memory. I tried adding `size: 2x` to the relevant Bitbucket Pipelines step, as well as `SONAR_SCANNER_OPTS: -Xms6G -Xmx6G`, to increase the maximum memory.
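For reference, the combination described above would look roughly like this in `bitbucket-pipelines.yml` (a sketch only: the pipe version and token variable are illustrative, not taken from this thread):

```yaml
pipelines:
  default:
    - step:
        name: SonarCloud analysis
        size: 2x                 # doubles the step's memory allowance (and build-minute cost)
        script:
          - pipe: sonarsource/sonarcloud-scan:1.4.0   # version is an assumption
            variables:
              SONAR_TOKEN: $SONAR_TOKEN               # placeholder repository variable
              SONAR_SCANNER_OPTS: "-Xms6G -Xmx6G"     # raise the scanner JVM heap
```

Note that `size: 2x` raises the step's total memory ceiling, while `SONAR_SCANNER_OPTS` only sizes the scanner's JVM heap within it; the heap still has to fit inside whatever the step (and any service containers) are allowed.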
Unfortunately it failed with the following error, this time:
The pipe ran for 22 minutes before it threw this error. Needless to say, 22 minutes is in itself very excessive for this step (you pay for build minutes, so every minute counts!). We have enabled debugging in case anyone is interested in seeing the full raw output.
Thanks for sharing the log. I’ll be honest and say I’ve never seen this type of failure before in a SonarQube analysis. It looks like it fails right in the middle of processing the Java files, and that’s just not a place for an unexpected EOF to be generated by the analysis itself.
So I searched for the error. It seems to be coming from the container, and to be about the resources available to it. I think this thread should help:
We have indeed bumped the Docker memory to 2 GB. What we’re testing now is SonarScanner for Gradle instead of the sonarcloud-scan pipe, and we will continue to test it. It seems this happens when we scan specific source files that SonarCloud doesn’t like; if we exclude them in sonar-project.properties, the scan works. As mentioned, we’re now testing SonarScanner for Gradle as an alternative way to scan builds…
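To illustrate the two workarounds mentioned above: the exclusion would go in `sonar-project.properties` along these lines (the path is a made-up placeholder, not the actual offending file):

```properties
# Skip the source files the analysis chokes on (placeholder pattern)
sonar.exclusions=src/main/java/com/example/Problematic*.java
```

And switching to SonarScanner for Gradle is roughly a matter of applying the plugin in `build.gradle` (a sketch under assumed values; the plugin version, project key, and host URL are illustrative):

```groovy
plugins {
    id "org.sonarqube" version "3.5.0.2730"   // version is an assumption
}

sonarqube {
    properties {
        property "sonar.projectKey", "my-project-key"       // placeholder
        property "sonar.host.url", "https://sonarcloud.io"
    }
}
```

The analysis would then run from the pipeline step as `./gradlew sonarqube`, which keeps the scan inside the build container instead of the separate pipe container.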