SonarQube on EKS Fargate

Hi Community.

I have been trying to host/run Elasticsearch on Fargate, but I am running into permission issues. The default init container definition looks something like this:

initContainers:
      - command:
        - /bin/bash
        - -e
        - /tmp/scripts/init_sysctl.sh
        image: sonarqube:10.4.1-community
        imagePullPolicy: IfNotPresent
        name: init-sysctl
        resources: {}
        securityContext:
          privileged: false
          runAsUser: 0
        volumeMounts:
        - mountPath: /tmp/scripts/
          name: init-sysctl

where the volume is

init_sysctl.sh: |-
    if [[ "$(sysctl -n vm.max_map_count)" -lt 524288 ]]; then
      sysctl -w vm.max_map_count=524288
    fi
    if [[ "$(sysctl -n fs.file-max)" -lt 131072 ]]; then
      sysctl -w fs.file-max=131072
    fi
    if [[ "$(ulimit -n)" != "unlimited" ]]; then
      if [[ "$(ulimit -n)" -lt 131072 ]]; then
        echo "ulimit -n 131072"
        ulimit -n -S 131072
      fi
    fi
    if [[ "$(ulimit -u)" != "unlimited" ]]; then
      if [[ "$(ulimit -u)" -lt 8192 ]]; then
        echo "ulimit -u 8192"
        ulimit -u -S 8192
      fi
    fi
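For context, these are the values the Elasticsearch bootstrap checks look at. A quick way to inspect what a container actually gets (run inside the SonarQube container, e.g. via kubectl exec):

```shell
# Inspect the kernel/process limits that the bootstrap checks validate.
cat /proc/sys/vm/max_map_count   # Elasticsearch wants >= 262144
ulimit -u                        # max user processes; Elasticsearch wants >= 4096
ulimit -n                        # max open files
```

Note that the ulimit calls in the init script above only affect the init container's own shell; only the sysctl writes change node-level kernel state that the main container can see.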

But this throws an error on the pod when you try to run the deployment.

[Screenshot: pod error, 2024-03-29 at 2.04.45 PM]

If we remove the init container altogether from the deployment, then Elasticsearch throws the following error:

[2] bootstrap checks failed. You must address the points described in the following [2] lines before starting Elasticsearch. For more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.11/bootstrap-checks.html]
bootstrap check failure [1] of [2]: max number of threads [1024] for user [sonarqube] is too low, increase to at least [4096]; for more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.11/max-number-threads-check.html]
bootstrap check failure [2] of [2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]; for more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.11/_maximum_map_count_check.html]
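For what it's worth, Elasticsearch only enforces the vm.max_map_count check when mmap is allowed for the store. A commonly cited workaround (untested here; verify against the docs for your SonarQube version, and note the chart values key for extra environment variables may differ by chart version) is to disable mmap for the embedded search node:

```yaml
# Sketch only: values.yaml fragment, assuming the chart exposes an `env` list
# for extra environment variables. Disabling mmap lets Elasticsearch skip the
# vm.max_map_count bootstrap check; it does NOT address the max-threads (nproc) check.
env:
  - name: SONAR_SEARCH_JAVAADDITIONALOPTS
    value: "-Dnode.store.allow_mmap=false"
```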

I have tried fixing this by attaching the following ConfigMap at the path /etc/security/limits.conf, but that had no impact:

*               hard    nproc           100000
*               hard    nproc           8192
*               hard    rss             10000
*               soft    nofile          8192
*               hard    nofile          8192
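One likely reason the ConfigMap has no effect: /etc/security/limits.conf is applied by PAM (pam_limits) during login sessions, and container entrypoints never go through PAM, so the file is simply never read. The limits a container process actually inherits are visible in /proc:

```shell
# limits.conf is consumed by pam_limits at login time; container processes
# inherit their limits from the runtime instead, so check /proc directly:
grep -E 'Max (processes|open files)' /proc/self/limits
```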

If you have a working chart that’s used for a sonarqube-ce deployment on EKS Fargate, then do share that as well.

Any help on the above issue will be great.


Hi,

You’re trying to run Elasticsearch outside of SonarQube itself? We don’t support that.

 
Ann

Hi Ann.

No, I am trying to use the same configuration that is found in the community Helm chart for SonarQube. The only thing I have changed in this chart is the volume access mode from RWO to RWX (EFS).

The issue arises when one of the init containers executes the vm.max_map_count and file-max commands. Fargate by default does allow users to modify any of the underlying node’s properties.

Do let me know if there is a way to host SonarQube on Fargate without facing the above issues.

Regards,
Ali

Hi Ali,

I assume you mean “doesn’t”?

What version are we talking about?

 
Thx,
Ann

Hi Ann.

Correct, that was a typo on my end. (It doesn’t allow users to modify node properties.)

Image Version used: sonarqube:10.4.1-community

Hi,

Thanks. I’ve flagged this for team attention.

 
Ann

Dear @muhammadali1233ify,

Thanks for trying out our helm chart!

but this throws an error on pod when you try to run this deployment

Can you attach the error you get to this post? I might be able to better replicate the issue and hopefully give you hints 🙂

if we remove the init container all together from the deployment then elasticsearch throws the following error.

Unfortunately, removing the init container won’t be a solution. Setting that kernel parameter is required by one of our dependencies (Elasticsearch). If you intend to skip the container, then you would need to act on the cluster nodes and set that parameter yourself.
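On node groups you control (this won’t help on Fargate, where there is no accessible host), a common pattern for setting node-level parameters is a small privileged DaemonSet. A sketch, with hypothetical names, for illustration only:

```yaml
# Sketch: privileged DaemonSet that raises vm.max_map_count on every node.
# Not applicable to Fargate; shown only for self-managed or managed node groups.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysctl-max-map-count
spec:
  selector:
    matchLabels:
      app: sysctl-max-map-count
  template:
    metadata:
      labels:
        app: sysctl-max-map-count
    spec:
      initContainers:
      - name: sysctl
        image: busybox:1.36
        command: ["sysctl", "-w", "vm.max_map_count=524288"]
        securityContext:
          privileged: true
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
```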