Support nodeSelector, tolerations and affinity separately for app/search in DCE Helm chart

Description

Currently the nodeSelector, tolerations and affinity configurations are top-level and therefore shared between the app Deployment and the search nodes' StatefulSet. It should become possible to specify these keys separately for the application and search nodes.
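
For illustration, a split along these lines in values.yaml is what we have in mind (the key and label names below are hypothetical, not the chart's actual schema):

```yaml
# Hypothetical values layout - key and label names are illustrative only.
searchNodes:
  nodeSelector:
    nodepool: sonarqube-search
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "sonarqube-search"
      effect: "NoSchedule"
  affinity: {}

applicationNodes:
  nodeSelector:
    nodepool: sonarqube-app
  tolerations: []
  affinity: {}
```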

Purpose

I’d like to deploy the search nodes on a dedicated nodepool to limit the noise/impact from other services running on the cluster. I’d taint the nodepool’s nodes and make the search StatefulSet tolerate that taint, while also restricting pod assignment to that nodepool via nodeSelector or nodeAffinity.

With the current implementation this is not possible.
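
To make the intent concrete, this is roughly what we would want rendered into the search StatefulSet’s pod spec only (the label and taint names are just examples from our setup):

```yaml
# Sketch of the pod-level scheduling settings for the search pods only;
# label and taint names are examples.
nodeSelector:
  nodepool: sonarqube-search
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "sonarqube-search"
    effect: "NoSchedule"
# nodeAffinity could be used instead of nodeSelector for more flexible matching:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: nodepool
              operator: In
              values: ["sonarqube-search"]
```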


Hi and welcome to the community!

Thanks for sharing.
I’d like to better understand the benefit you see in deploying the SonarQube nodes on a different nodepool. Can you tell me more about how it would help in your setup?
Do you plan to have pools with different characteristics for the search and app nodes?

Chris

Hey Chris,

We’re currently preparing our DCE installation for higher loads. The chart is missing an HPA for the app Deployment, so we’ve added one ourselves. We assume that under higher load the autoscaler will kick in and the ratio of app to search pods will grow as well (the latter being fixed at 3).
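
For reference, the HPA we added looks roughly like this (the target name and thresholds are specific to our setup, not part of the chart):

```yaml
# Hypothetical HPA we maintain ourselves, targeting the app Deployment
# created by the chart; adjust the Deployment name to your release.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sonarqube-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sonarqube-app          # the rendered app Deployment name in our release
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```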

Due to this assumption we’d want to ensure we don’t run into either of these two scenarios:

  • high load (bursts?) on the app service might impact the search nodes’ performance
  • search nodes failing to schedule, e.g. due to a lack of resources on a node (for instance multiple app pods being spawned and eating up the allocatable resources) after a search node has been deleted or rescheduled for whatever reason. We want to scale steadily and grow the nodepool sizes as we go, to better understand how the system behaves, rather than allowing e.g. scaling to dozens of Kubernetes nodes from the start.

It may be that we lack understanding of how the web and search nodes work together to fulfill users’ requests. Maybe the effort isn’t justified for the first case, but we still have no other idea how to prevent the second (priorityClasses are also not supported for the search nodes).
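
If priorityClassName were exposed for the search StatefulSet, something along these lines is what we would attach (the class and its value are ours, purely illustrative):

```yaml
# Hypothetical PriorityClass for the search pods; the value is arbitrary,
# it only needs to be higher than the one used by the app pods.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: sonarqube-search-critical
value: 1000000
globalDefault: false
description: "Keep SonarQube search pods ahead of app pods at scheduling time."
```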

Aside from the above: while we acknowledge that Kubernetes was (probably?) meant to alleviate or eliminate issues like those described above, we simply prefer to have specific nodepools serve specific purposes (call it tidy infra fragmentation).
