`RuntimeException: Could not set coredump filter` when deploying Community Build on GCP

Version (SonarQube community): sonarqube:latest (sha256:c54340ac8b420d94dec4fb45c6d8d2cbabde4e7b6fc7dbc9314d5422951ce6ab)

Deployment: Docker

I am trying to deploy the sonarqube image (with the embedded database) onto Google Cloud Run for some quick testing with my peers. I already ran the steps on the “Try Out” page and it works fine on my local Docker, so now I’m trying to deploy the exact same image on Cloud Run so my peers can see the same dashboard I can.
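For reference, the local run that works for me is essentially the standard command from the docs (container name and port mapping are the usual defaults; adjust as needed):

```shell
# Run the same image locally with the embedded database.
# SONAR_ES_BOOTSTRAP_CHECKS_DISABLE=true relaxes Elasticsearch's
# bootstrap checks for local evaluation, per the Try Out page.
docker run -d --name sonarqube \
  -e SONAR_ES_BOOTSTRAP_CHECKS_DISABLE=true \
  -p 9000:9000 \
  sonarqube:latest
```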

When I set up a new Cloud Run Service and provide the image tag, it throws this error on boot:

java.lang.RuntimeException: Could not set coredump filter
	at org.elasticsearch.bootstrap.Elasticsearch.setCoredumpFilter(Elasticsearch.java:612) ~[elasticsearch-8.19.8.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.initializeNatives(Elasticsearch.java:485) ~[elasticsearch-8.19.8.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.initPhase2(Elasticsearch.java:186) ~[elasticsearch-8.19.8.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:99) ~[elasticsearch-8.19.8.jar:?]
Caused by: java.nio.file.AccessDeniedException: /proc/self/coredump_filter
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
	at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:261) ~[?:?]
	at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:482) ~[?:?]
	at java.nio.file.Files.newOutputStream(Files.java:228) ~[?:?]
	at java.nio.file.Files.write(Files.java:3505) ~[?:?]
	at java.nio.file.Files.writeString(Files.java:3727) ~[?:?]
	at java.nio.file.Files.writeString(Files.java:3667) ~[?:?]
	at org.elasticsearch.bootstrap.Elasticsearch.setCoredumpFilter(Elasticsearch.java:610) ~[elasticsearch-8.19.8.jar:?]
	... 3 more

When I googled it, I got this response:

The java.nio.file.AccessDeniedException: /proc/self/coredump_filter error in SonarQube on Cloud Run typically occurs because the container attempts to configure system-level core dumps, which is prohibited in serverless environments. To resolve this, disable Elasticsearch’s attempt to access or modify this file by setting the environment variable ES_JAVA_OPTS to exclude core dump settings, allowing SonarQube to start.

Key Fix: Disable Core Dump Configuration

Add the following environment variable to your Cloud Run service configuration:

  • Key: ES_JAVA_OPTS

  • Value: -Delasticsearch.upload-files=false (or try adding -Djava.io.tmpdir=/tmp to ensure a writable directory).
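Applying that suggested variable to an existing Cloud Run service via the gcloud CLI would look roughly like this (the service name my-sonarqube and the region are placeholders, not values from my setup):

```shell
# Hypothetical service name and region -- substitute your own.
# The value contains a space, so keep it quoted as one KEY=VALUE pair.
gcloud run services update my-sonarqube \
  --region us-central1 \
  --update-env-vars "ES_JAVA_OPTS=-Delasticsearch.upload-files=false -Djava.io.tmpdir=/tmp"
```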

Why this happens on Cloud Run

  • Security Restrictions: Cloud Run limits access to kernel/system files like /proc/self/.

  • Elasticsearch (ES): SonarQube embeds ES, which tries to configure file handlers for debugging (coredump) upon startup.

  • Permission Denied: Because the container runs as a non-root user and lacks elevated privileges, it cannot write to /proc/self/coredump_filter.
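The permission problem described above can be checked directly from a shell inside any container; this is just a diagnostic sketch, not part of SonarQube:

```shell
# Try to write the same procfs file Elasticsearch touches at startup.
# On a plain Linux host or container this usually succeeds; in a
# sandboxed environment like Cloud Run the write is denied, which is
# what surfaces as AccessDeniedException in the Java stack trace.
if echo 0x33 > /proc/self/coredump_filter 2>/dev/null; then
  echo "coredump_filter is writable"
else
  echo "coredump_filter is NOT writable"
fi
```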

Other Troubleshooting Steps

  • Check User Permissions: Ensure the Docker image is not trying to write to read-only directories.

  • Run with Debug Logging: Re-run with -X to confirm if it’s a file permission issue elsewhere.

  • Increase Resources: While not directly causing this error, inadequate memory can cause weird startup failures in SonarQube.

I tried updating my Cloud Run service with the ES_JAVA_OPTS key it mentioned, but I’m still getting the same error and the container won’t boot. I’m not sure what else I can do to stop this Elasticsearch error from killing the container on GCP.


Update: I got it working by adding the environment variable JAVA_OPTS=-Delasticsearch.upload-files=false -Djava.io.tmpdir=/tmp to my GCP service. I think this disables the coredump call made by Elasticsearch.
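For anyone else hitting this, setting that variable on an existing Cloud Run service would look something like the following (service name and region are placeholders; the variable itself is the one reported working above):

```shell
# Hypothetical service name and region -- substitute your own.
# The value contains a space, so keep it quoted as one KEY=VALUE pair.
gcloud run services update my-sonarqube \
  --region us-central1 \
  --update-env-vars "JAVA_OPTS=-Delasticsearch.upload-files=false -Djava.io.tmpdir=/tmp"
```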