We are running SonarQube LTS 6.7.1 (the -alpine image) in a Kubernetes cluster, deployed as a Deployment object with a replica count of 1. The SonarQube data and extensions directories are backed by Kubernetes persistent volumes so that their contents persist across pod restarts. We are facing an issue during rolling deployments: while the existing SonarQube pod is still running, a new pod is started in parallel and fails to come up, because Elasticsearch in the new pod cannot acquire the node lock (it is already held by the existing pod, since both use the same Elasticsearch data path). As a result, the rolling deployment fails.
This can be handled on the Kubernetes side by setting the deployment strategy to Recreate, which first deletes the existing pod and only then creates and runs the new one. However, that makes the SonarQube application unavailable for the duration of every deployment.
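For reference, the Kubernetes-side workaround looks roughly like this (a minimal sketch; the names, labels, and volume wiring are placeholders for our actual manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
spec:
  replicas: 1
  strategy:
    type: Recreate   # delete the old pod before starting the new one,
                     # so only one Elasticsearch process ever holds the node lock
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - name: sonarqube
          image: sonarqube:6.7.1-alpine
          # volumeMounts for the data/extensions persistent volumes omitted
```

The trade-off is exactly the downtime described above: with Recreate there is a window during each deployment in which no SonarQube pod is running.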
So I want to know whether this can be handled on the SonarQube side as well. I read that defining node.max_local_storage_nodes for Elasticsearch could solve this problem, but unfortunately I did not find any way to set it in SonarQube so that it is passed through to Elasticsearch. Is there a way to pass custom Elasticsearch configuration parameters, or to use our own elasticsearch.yml file instead of the generated default (temp/conf/es/elasticsearch.yml)? If yes, what is the appropriate way?
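For clarity, this is the Elasticsearch setting I would like to end up with in the generated configuration (a sketch only; whether and how SonarQube lets us inject it is exactly my question):

```yaml
# temp/conf/es/elasticsearch.yml (generated by SonarQube at startup)
# Allow up to two Elasticsearch nodes to share the same data path, so the
# old and new pods can briefly overlap during a rolling update:
node.max_local_storage_nodes: 2
```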