zacktzeng
(Zack Tzeng)
June 2, 2022, 5:05pm
Hi, I am trying to deploy SonarQube Enterprise Edition to GCP with the Helm chart and Terraform.
Here is the Terraform code:
resource "helm_release" "test" {
  name      = "enterprise"
  chart     = "sonarqube/sonarqube"
  namespace = kubernetes_namespace.sonar.id
  timeout   = "1200"

  set {
    name  = "jvmOpts"
    value = "-Xmx1024m -Xms1024m -XX:+HeapDumpOnOutOfMemoryError"
  }

  set {
    name  = "edition"
    value = "enterprise"
  }

  set {
    name  = "jvmCeOpts"
    value = "-Xmx1024m -Xms1024m -XX:+HeapDumpOnOutOfMemoryError"
  }

  set {
    name  = "service.type"
    value = "LoadBalancer"
  }
}
This Terraform code was able to deploy the Developer Edition on GCP; I only changed the "edition" value to enterprise. However, the deployment never finished, and GCP kept complaining that "containers with unready status: [sonarqube]".
Is there anything else I'm missing? Can someone help me out here?
Looking at the GCP logs, the same thing seemed to occur over and over again.
This is what I see in the deployment log:
Hello @zacktzeng
Could you check the pod/container status to determine what is preventing the pod from becoming ready?
kubectl describe pod <your-sonarqube-pod>
might give you some information.
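A few related diagnostic commands may also help (the pod and namespace names below are the ones that appear later in this thread; adjust them for your cluster):

```shell
# Describe the pod to see its conditions, probe configuration, and recent events
kubectl describe pod enterprise-sonarqube-0 --namespace sonarqube-external-db

# Tail the main container's logs for startup errors
kubectl logs enterprise-sonarqube-0 -c sonarqube --namespace sonarqube-external-db

# List events in the namespace, oldest first
kubectl get events --namespace sonarqube-external-db --sort-by=.metadata.creationTimestamp
```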
zacktzeng
(Zack Tzeng)
June 7, 2022, 1:13pm
I ran kubectl describe pods enterprise-sonarqube-0 --namespace sonarqube-external-db
and this is the output:
zack_tzeng@cloudshell:~$ kubectl describe pods enterprise-sonarqube-0 --namespace sonarqube-external-db
W0607 13:11:31.050907 810 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
Name: enterprise-sonarqube-0
Namespace: sonarqube-external-db
Priority: 0
Node: gke-italpha-pool-3-749753ca-6v61/10.142.15.245
Start Time: Tue, 07 Jun 2022 13:09:48 +0000
Labels: app=sonarqube
controller-revision-hash=enterprise-sonarqube-6bcc796fb9
release=enterprise
statefulset.kubernetes.io/pod-name=enterprise-sonarqube-0
Annotations: checksum/config: 1ee0aac9566179202318baf7fee0372e70e8a1d9bcc5c57707abdb7cabb8d0af
checksum/init-fs: 885630ac8f5a85e0d7e56840d075108ae307f8dd90c832232cb1d0a014892bbc
checksum/init-sysctl: dd5797be7e18f04219e52ea1d3dbdc816aa5b19aeeeb589e823be5fc33c253e3
checksum/plugins: b27cb4a139982def337ff8dc7cab5097bb100acf0e16ecdd80e8718d8a3e1715
checksum/prometheus-ce-config: 2c5c4634b1d7f8ff84d8714326c13b0f1a70d5245ab55171ac17f02f1e616c1f
checksum/prometheus-config: 43c7dbc57aa26e640703962848333c448b799d3851ff69495b0bd3db9f3adfc3
checksum/secret: 03832a46e2fe9dac7ce07390ee17fd28fd941f7b5b944ecad2eebbaa51663cd8
Status: Running
IP: 10.68.12.59
IPs:
IP: 10.68.12.59
Controlled By: StatefulSet/enterprise-sonarqube
Init Containers:
init-sysctl:
Container ID: containerd://d488b817a2735ea6868b9f0793029c2cbf674ae1605db0f686125451e36a7104
Image: busybox:1.32
Image ID: docker.io/library/busybox@sha256:ae39a6f5c07297d7ab64dbd4f82c77c874cc6a94cea29fdec309d0992574b4f7
Port: <none>
Host Port: <none>
Command:
sh
-e
/tmp/scripts/init_sysctl.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 07 Jun 2022 13:09:48 +0000
Finished: Tue, 07 Jun 2022 13:09:49 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/tmp/scripts/ from init-sysctl (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t889w (ro)
inject-prometheus-exporter:
Container ID: containerd://f6218f7a61500fbef8b24984ee39625823366b74ec42849ca64368c4ee25aa17
Image: curlimages/curl:7.76.1
Image ID: docker.io/curlimages/curl@sha256:fa32ef426092b88ee0b569d6f81ab0203ee527692a94ec2e6ceb2fd0b6b2755c
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
Args:
curl -s 'https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.16.0/jmx_prometheus_javaagent-0.16.0.jar' --output /data/jmx_prometheus_javaagent.jar -v
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 07 Jun 2022 13:09:49 +0000
Finished: Tue, 07 Jun 2022 13:09:50 +0000
Ready: True
Restart Count: 0
Environment:
http_proxy:
https_proxy:
no_proxy:
Mounts:
/data from sonarqube (rw,path="data")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t889w (ro)
Containers:
sonarqube:
Container ID: containerd://73beab008070a0ebf0d0a111277c93f9ba295e7051e20523380846efe6354e9b
Image: sonarqube:9.4.0-enterprise
Image ID: docker.io/library/sonarqube@sha256:48f73a1564cd9b3a9864830730cd16e7cf7db4ff6ad4f24a3d270d8c509d1aad
Ports: 9000/TCP, 8000/TCP, 8001/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Running
Started: Tue, 07 Jun 2022 13:09:50 +0000
Ready: False
Restart Count: 0
Limits:
cpu: 800m
memory: 4Gi
Requests:
cpu: 400m
memory: 2Gi
Liveness: http-get http://:http/api/system/liveness delay=60s timeout=1s period=30s #success=1 #failure=6
Readiness: exec [sh -c #!/bin/bash
# A Sonarqube container is considered ready if the status is UP, DB_MIGRATION_NEEDED or DB_MIGRATION_RUNNING
# status about migration are added to prevent the node to be kill while sonarqube is upgrading the database.
host="$(hostname -i || echo '127.0.0.1')"
if wget --proxy off -qO- http://${host}:9000/api/system/status | grep -q -e '"status":"UP"' -e '"status":"DB_MIGRATION_NEEDED"' -e '"status":"DB_MIGRATION_RUNNING"'; then
exit 0
fi
exit 1
] delay=60s timeout=1s period=30s #success=1 #failure=6
Startup: http-get http://:http/api/system/status delay=30s timeout=1s period=10s #success=1 #failure=24
Environment Variables from:
enterprise-sonarqube-jdbc-config ConfigMap Optional: false
Environment:
SONAR_WEB_JAVAOPTS: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8000:/opt/sonarqube/conf/prometheus-config.yaml
SONAR_CE_JAVAOPTS: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8001:/opt/sonarqube/conf/prometheus-ce-config.yaml
SONAR_JDBC_PASSWORD: <set to the key 'jdbc-password' in secret 'enterprise-sonarqube'> Optional: false
SONAR_WEB_SYSTEMPASSCODE: <set to the key 'SONAR_WEB_SYSTEMPASSCODE' in secret 'enterprise-sonarqube-monitoring-passcode'> Optional: false
Mounts:
/opt/sonarqube/conf/prometheus-ce-config.yaml from prometheus-ce-config (rw,path="prometheus-ce-config.yaml")
/opt/sonarqube/conf/prometheus-config.yaml from prometheus-config (rw,path="prometheus-config.yaml")
/opt/sonarqube/data from sonarqube (rw,path="data")
/opt/sonarqube/logs from sonarqube (rw,path="logs")
/opt/sonarqube/temp from sonarqube (rw,path="temp")
/tmp from tmp-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t889w (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
init-sysctl:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: enterprise-sonarqube-init-sysctl
Optional: false
init-fs:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: enterprise-sonarqube-init-fs
Optional: false
install-plugins:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: enterprise-sonarqube-install-plugins
Optional: false
prometheus-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: enterprise-sonarqube-prometheus-config
Optional: false
prometheus-ce-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: enterprise-sonarqube-prometheus-ce-config
Optional: false
sonarqube:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-t889w:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 103s default-scheduler Successfully assigned sonarqube-external-db/enterprise-sonarqube-0 to gke-italpha-pool-3-749753ca-6v61
Normal Pulled 103s kubelet Container image "busybox:1.32" already present on machine
Normal Created 103s kubelet Created container init-sysctl
Normal Started 103s kubelet Started container init-sysctl
Normal Pulled 102s kubelet Container image "curlimages/curl:7.76.1" already present on machine
Normal Created 102s kubelet Created container inject-prometheus-exporter
Normal Started 102s kubelet Started container inject-prometheus-exporter
Normal Pulled 101s kubelet Container image "sonarqube:9.4.0-enterprise" already present on machine
Normal Created 101s kubelet Created container sonarqube
Normal Started 101s kubelet Started container sonarqube
Warning Unhealthy 63s kubelet Startup probe failed: Get "http://10.68.12.59:9000/api/system/status": dial tcp 10.68.12.59:9000: connect: connection refused
Warning Unhealthy 42s (x2 over 52s) kubelet Startup probe failed: Get "http://10.68.12.59:9000/api/system/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 13s (x2 over 33s) kubelet Readiness probe failed:
zack_tzeng@cloudshell:~$
zacktzeng
(Zack Tzeng)
June 7, 2022, 1:19pm
@leo.geoffroy this is my Terraform script:
provider "google" {
  project = "italpha"
  region  = "us-east1"
  zone    = "us-east1-b"
}

data "google_container_cluster" "service" {
  name = "italpha"
}

data "google_client_config" "gke_client_config" {}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.service.endpoint}"
  token                  = data.google_client_config.gke_client_config.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.service.master_auth[0].cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host                   = "https://${data.google_container_cluster.service.endpoint}"
    token                  = data.google_client_config.gke_client_config.access_token
    cluster_ca_certificate = base64decode(data.google_container_cluster.service.master_auth[0].cluster_ca_certificate)
  }
}

resource "kubernetes_namespace" "sonar" {
  metadata {
    name = "sonarqube-external-db"
  }
}

resource "helm_release" "sonar" {
  name      = "enterprise"
  chart     = "sonarqube/sonarqube"
  namespace = kubernetes_namespace.sonar.id
  timeout   = "360"

  set {
    name  = "edition"
    value = "enterprise"
  }

  set {
    name  = "service.type"
    value = "LoadBalancer"
  }

  set {
    name  = "jdbcOverwrite.jdbcUrl"
    value = "jdbc:postgresql://<cloud sql instance private IP>:5432/sonar"
  }

  set {
    name  = "jdbcOverwrite.enable"
    value = "true"
  }

  set {
    name  = "postgresql.enabled"
    value = "false"
  }

  set {
    name  = "jdbcOverwrite.jdbcUsername"
    value = "<db username>"
  }

  set {
    name  = "jdbcOverwrite.jdbcPassword"
    value = "<db password>"
  }
}
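As a side note, passing the database password through a plain `set` block exposes it in plan output; the Terraform helm provider offers `set_sensitive` for values like this. A sketch, assuming a `db_password` input variable is defined elsewhere:

```hcl
  # Inside the helm_release "sonar" resource: mark the DB password as
  # sensitive so it is redacted from plan/apply output (it is still
  # stored in the Terraform state, which must itself be protected).
  set_sensitive {
    name  = "jdbcOverwrite.jdbcPassword"
    value = var.db_password # hypothetical variable instead of a hard-coded secret
  }
```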
zacktzeng
(Zack Tzeng)
June 7, 2022, 1:30pm
@leo.geoffroy both the Developer Edition and the Enterprise Edition encountered startup probe failures. However, the Developer Edition was successfully deployed while the Enterprise Edition failed. Is there a stricter probe success requirement by default for the Enterprise Edition?
zacktzeng
(Zack Tzeng)
June 7, 2022, 1:36pm
And this is the output of describing the Developer Edition pod:
zack_tzeng@cloudshell:~ (italpha)$ kubectl describe pods developer-sonarqube-0 --namespace sonarqube-external-db
W0607 13:28:01.978800 931 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
Name: developer-sonarqube-0
Namespace: sonarqube-external-db
Priority: 0
Node: gke-italpha-pool-2-2ed789a4-f6x9/10.142.0.18
Start Time: Tue, 07 Jun 2022 13:20:45 +0000
Labels: app=sonarqube
controller-revision-hash=developer-sonarqube-6f5fc98694
release=developer
statefulset.kubernetes.io/pod-name=developer-sonarqube-0
Annotations: checksum/config: ac837503d4925c6f8b194170b6f1d4eaf988e7599e59d00ee7ea4bfeccf6e72c
checksum/init-fs: 94b53939c261bbe9ec91bf72223db6a713db4bbd9a6422080d4c6ca75e219c6d
checksum/init-sysctl: 6d94f1740c4bb17570034ee9411d0066078e83f5a4942b3b6e6cdec3e26f151d
checksum/plugins: cf6d00c7c4f7f8c2fc69b69e5da06fb5286a0ecdcfd85cf9440ac9485accdfa0
checksum/prometheus-ce-config: 39d65d5c3a99f68a0a8b7e1f6cccd7dbead18e2dc0dbc9322f435cf822bdad75
checksum/prometheus-config: f8a25d044ca3488d8c1df768685003d8c3bbee0435f73995cae00131b7c8f4bb
checksum/secret: d59d049908b0012e9c91b037719afc9c4ff7760d6110a8a81bba9e9949efac0e
Status: Running
IP: 10.68.7.54
IPs:
IP: 10.68.7.54
Controlled By: StatefulSet/developer-sonarqube
Init Containers:
init-sysctl:
Container ID: containerd://e8e89c4994946a7b86088b75aa5ba8b3f0f74b013c5a237c69ac9941fa46e41b
Image: busybox:1.32
Image ID: docker.io/library/busybox@sha256:ae39a6f5c07297d7ab64dbd4f82c77c874cc6a94cea29fdec309d0992574b4f7
Port: <none>
Host Port: <none>
Command:
sh
-e
/tmp/scripts/init_sysctl.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 07 Jun 2022 13:20:47 +0000
Finished: Tue, 07 Jun 2022 13:20:47 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/tmp/scripts/ from init-sysctl (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7qrhr (ro)
inject-prometheus-exporter:
Container ID: containerd://03b5f8f92bbde73bff17dc115bbca7bc188ffba2042a1bdf51da809ca38c5477
Image: curlimages/curl:7.76.1
Image ID: docker.io/curlimages/curl@sha256:fa32ef426092b88ee0b569d6f81ab0203ee527692a94ec2e6ceb2fd0b6b2755c
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
Args:
curl -s 'https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.16.0/jmx_prometheus_javaagent-0.16.0.jar' --output /data/jmx_prometheus_javaagent.jar -v
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 07 Jun 2022 13:20:48 +0000
Finished: Tue, 07 Jun 2022 13:20:49 +0000
Ready: True
Restart Count: 0
Environment:
http_proxy:
https_proxy:
no_proxy:
Mounts:
/data from sonarqube (rw,path="data")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7qrhr (ro)
Containers:
sonarqube:
Container ID: containerd://9ae6cdbf879b25b6b13a651161fc79ad3d8dbc33056153052795b10cae6939ee
Image: sonarqube:9.4.0-developer
Image ID: docker.io/library/sonarqube@sha256:a0822cca9a57cbfc93a2d4396691a8a53c490f12f03bae4c39eb68736c95396f
Ports: 9000/TCP, 8000/TCP, 8001/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Running
Started: Tue, 07 Jun 2022 13:21:13 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 800m
memory: 4Gi
Requests:
cpu: 400m
memory: 2Gi
Liveness: http-get http://:http/api/system/liveness delay=60s timeout=1s period=30s #success=1 #failure=6
Readiness: exec [sh -c #!/bin/bash
# A Sonarqube container is considered ready if the status is UP, DB_MIGRATION_NEEDED or DB_MIGRATION_RUNNING
# status about migration are added to prevent the node to be kill while sonarqube is upgrading the database.
host="$(hostname -i || echo '127.0.0.1')"
if wget --proxy off -qO- http://${host}:9000/api/system/status | grep -q -e '"status":"UP"' -e '"status":"DB_MIGRATION_NEEDED"' -e '"status":"DB_MIGRATION_RUNNING"'; then
exit 0
fi
exit 1
] delay=60s timeout=1s period=30s #success=1 #failure=6
Startup: http-get http://:http/api/system/status delay=30s timeout=1s period=10s #success=1 #failure=24
Environment Variables from:
developer-sonarqube-jdbc-config ConfigMap Optional: false
Environment:
SONAR_WEB_JAVAOPTS: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8000:/opt/sonarqube/conf/prometheus-config.yaml
SONAR_CE_JAVAOPTS: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8001:/opt/sonarqube/conf/prometheus-ce-config.yaml
SONAR_JDBC_PASSWORD: <set to the key 'jdbc-password' in secret 'developer-sonarqube'> Optional: false
SONAR_WEB_SYSTEMPASSCODE: <set to the key 'SONAR_WEB_SYSTEMPASSCODE' in secret 'developer-sonarqube-monitoring-passcode'> Optional: false
Mounts:
/opt/sonarqube/conf/prometheus-ce-config.yaml from prometheus-ce-config (rw,path="prometheus-ce-config.yaml")
/opt/sonarqube/conf/prometheus-config.yaml from prometheus-config (rw,path="prometheus-config.yaml")
/opt/sonarqube/data from sonarqube (rw,path="data")
/opt/sonarqube/logs from sonarqube (rw,path="logs")
/opt/sonarqube/temp from sonarqube (rw,path="temp")
/tmp from tmp-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7qrhr (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
init-sysctl:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: developer-sonarqube-init-sysctl
Optional: false
init-fs:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: developer-sonarqube-init-fs
Optional: false
install-plugins:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: developer-sonarqube-install-plugins
Optional: false
prometheus-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: developer-sonarqube-prometheus-config
Optional: false
prometheus-ce-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: developer-sonarqube-prometheus-ce-config
Optional: false
sonarqube:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-7qrhr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m17s default-scheduler Successfully assigned sonarqube-external-db/developer-sonarqube-0 to gke-italpha-pool-2-2ed789a4-f6x9
Normal Pulled 7m16s kubelet Container image "busybox:1.32" already present on machine
Normal Created 7m16s kubelet Created container init-sysctl
Normal Started 7m15s kubelet Started container init-sysctl
Normal Pulled 7m14s kubelet Container image "curlimages/curl:7.76.1" already present on machine
Normal Created 7m14s kubelet Created container inject-prometheus-exporter
Normal Started 7m14s kubelet Started container inject-prometheus-exporter
Normal Pulling 7m12s kubelet Pulling image "sonarqube:9.4.0-developer"
Normal Pulled 6m49s kubelet Successfully pulled image "sonarqube:9.4.0-developer" in 22.741765953s
Normal Created 6m49s kubelet Created container sonarqube
Normal Started 6m49s kubelet Started container sonarqube
Warning Unhealthy 5m57s (x3 over 6m17s) kubelet Startup probe failed: Get "http://10.68.7.54:9000/api/system/status": dial tcp 10.68.7.54:9000: connect: connection refused
Warning Unhealthy 5m26s (x3 over 5m46s) kubelet Startup probe failed: Get "http://10.68.7.54:9000/api/system/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 3m47s (x5 over 5m16s) kubelet Readiness probe failed:
Warning Unhealthy 3m47s kubelet Liveness probe failed: HTTP probe failed with statuscode: 500
zack_tzeng@cloudshell:~ (italpha)$
Normally there are no differences between the Enterprise and Developer Editions regarding configuration.
That's strange, because the readiness probe shows "#success=1 #failure=6", yet the ready state is still false.
You can try extending the period and initial delay for the readiness probe to check whether this has any effect
(via readinessProbe.periodSeconds and readinessProbe.initialDelaySeconds).
Do you also notice a difference in the logs at startup?
If you are reusing the same database for the two editions, you may also try recreating the database when you switch editions.
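The probe-tuning suggestion above can be expressed in the same Terraform style as the rest of the thread. The values below are illustrative; the value names assume the chart exposes its probe settings under `readinessProbe.*`:

```hcl
  # Additional set blocks for the helm_release resource: give SonarQube
  # more time before readiness checks begin, and probe less frequently.
  set {
    name  = "readinessProbe.initialDelaySeconds"
    value = "120"
  }

  set {
    name  = "readinessProbe.periodSeconds"
    value = "60"
  }
```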
zacktzeng
(Zack Tzeng)
June 7, 2022, 5:01pm
Here are the CSV files containing pod logs I downloaded from GKE. To be honest, I wasn't sure what to look for in these entries. I increased the readiness probe values, but that didn't help much. I also created two separate databases; hopefully that removes some uncertainty.
pod-log.zip (52.3 KB)
zacktzeng
(Zack Tzeng)
June 7, 2022, 7:56pm
It looks like Elasticsearch failed to launch. I got a log entry like this
INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
from the container log.
I am using only the Helm chart, not an external database.
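This is consistent with the probe failures: the chart's readiness script (visible in the kubectl describe output above) simply greps /api/system/status for an acceptable status, so while Elasticsearch is still starting the endpoint reports something other than UP and the probe fails. A self-contained sketch of that acceptance logic, run against hypothetical sample responses:

```shell
# Hypothetical samples of what /api/system/status might return while
# Elasticsearch is still starting vs. once the node is fully up.
starting='{"id":"ABC","version":"9.4.0","status":"STARTING"}'
up='{"id":"ABC","version":"9.4.0","status":"UP"}'

check() {
  # Same acceptance logic as the chart's readiness script: ready only if
  # the status is UP or a database-migration state.
  echo "$1" | grep -q -e '"status":"UP"' \
                      -e '"status":"DB_MIGRATION_NEEDED"' \
                      -e '"status":"DB_MIGRATION_RUNNING"'
}

check "$starting" && echo "starting: ready" || echo "starting: not ready"
check "$up"       && echo "up: ready"       || echo "up: not ready"
```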
Hello @zacktzeng
I have tried to deploy on GKE using your settings.
Initially, I had a problem very similar to yours, where the pod kept restarting with no clear logs on why it had been shut down.
I realized that memory pressure and eviction were happening in my cluster due to insufficient memory allocated to the nodes (e2-medium).
As soon as I upgraded to an e2-standard-2 node configuration, I was able to deploy the Enterprise Edition.
I suspect you have the same problem on your side: the Developer Edition deploys fine because it embeds fewer features than the Enterprise Edition and therefore uses less memory.
Can you try upgrading your node size and check whether it solves your problem?
I also recommend trying without the jvmCeOpts and jvmOpts options.
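For reference, one way to add a larger node pool on GKE (e2-standard-2 has 8 GB of memory versus 4 GB on e2-medium). The cluster name and zone below are taken from the Terraform earlier in this thread; the pool name and node count are placeholders:

```shell
# Create a new node pool with more memory, then migrate workloads to it.
gcloud container node-pools create pool-large \
  --cluster italpha \
  --zone us-east1-b \
  --machine-type e2-standard-2 \
  --num-nodes 3
```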
zacktzeng
(Zack Tzeng)
June 8, 2022, 3:12pm
The deployment is now successful. Thank you so much for your help, @leo.geoffroy.
It was exactly as you said: once I changed the nodes to e2-standard-2, the deployment had no issue.
system
(system)
Closed
June 15, 2022, 4:47pm
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.