We are running SonarQube LTS version 6.7.1-alpine in a Kubernetes cluster (deployed as a k8s Deployment object with replica count 1), and the SonarQube data and extensions directories are backed by k8s persistent volumes so that both of them persist. But we are facing an issue during k8s rolling deployments: while the existing Sonar pod is still running, a new pod is started in parallel and fails to start, because Elasticsearch cannot acquire the node lock (it is already held by the existing pod, since both use the same ES data path). As a result, the rolling deployment fails.
However, this can be handled on the k8s side by defining the deployment strategy as Recreate, which first deletes the existing pod and then creates/runs the new one, but this causes unavailability of the SonarQube application during every deployment.
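For illustration, a minimal sketch of that strategy change on the Deployment (resource names and labels are placeholders, not our actual manifest):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sonarqube              # illustrative name
    spec:
      replicas: 1
      strategy:
        type: Recreate             # terminate the old pod before starting the new one,
                                   # so the ES node lock and the volume are released first,
                                   # at the cost of downtime during each deployment
      selector:
        matchLabels:
          app: sonarqube
      template:
        metadata:
          labels:
            app: sonarqube
        spec:
          containers:
            - name: sonarqube
              image: sonarqube:6.7.1-alpine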
So I want to know whether this can be handled on the Sonar side as well in some way. I read that defining node.max_local_storage_nodes for Elasticsearch could help us solve this problem, but unfortunately I did not find any way to define it in Sonar so that it gets passed to Elasticsearch. Please let me know whether we can pass custom Elasticsearch configuration parameters OR use our own elasticsearch.yml file instead of the default one (temp/conf/es/elasticsearch.yml), and if YES, what the appropriate way is.
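For reference, the Elasticsearch setting in question would look like this in an elasticsearch.yml (shown only to illustrate what we would like to pass through; as discussed below, SonarQube generates that file itself):

    node.max_local_storage_nodes: 2   # allow up to two ES nodes to share the same data path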
What are you trying to accomplish with your k8s cluster? If it’s high availability, I have to point out that that’s only supported through Data Center Edition ($). If it’s horizontal scalability, well, I have to point out that we don’t support that at all right now.
Thanks for the update. I am simply trying to host SonarQube (Community Edition, in standalone mode) in a k8s cluster with persistence for its data and extensions directories. It is deployed as a k8s Deployment.
Tell me one thing: can Elasticsearch be configured with custom/additional parameters (in this case the node.max_local_storage_nodes ES parameter in the yml), the way we can define JVM and additional opts in the sonar.properties file? I tried to define ${SONARQUBE_HOME}/elasticsearch/config/elasticsearch.yml, but those settings were not picked up (only the default settings appear in the temp/conf/es/elasticsearch.yml file) when Sonar (ES) starts. So I want to know whether we can define Elasticsearch-supported parameters in an elasticsearch.yml file, OR whether we can configure Sonar to use our own elasticsearch.yml file rather than the default one.
The data is persisted in the database. I guess “persisting” your plugins is a question of your Docker configuration. You can configure the ES storage path (details in the docs) but you will not be able to feed your own .yml file.
Thanks for your update. Here is what I have understood so far about what can be configured for ES in Sonar and how; please correct me if I am wrong on the below:
What can be configured for ES in Sonar (or is even allowed):
ES javaOpts
ES additionalOpts
ES data and temp storage locations
Apart from the above, nothing else can be configured (no other ES-supported parameters, and no feeding our own .yml file) or even allowed, as this is all driven by the application logic (which generates the elasticsearch.yml file at runtime). A sketch of those knobs follows this list.
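A minimal sketch of how those three knobs look in sonar.properties (values are illustrative, not recommendations):

    # JVM options for the Elasticsearch process
    sonar.search.javaOpts=-Xms512m -Xmx512m
    # Extra JVM options appended after sonar.search.javaOpts
    sonar.search.javaAdditionalOpts=-XX:+HeapDumpOnOutOfMemoryError
    # Data and temp locations (the ES indices live under the data path)
    sonar.path.data=/opt/sonarqube/data
    sonar.path.temp=/opt/sonarqube/temp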
@ganncamp I am facing the same issue while trying to update from Community to Developer Edition (just to mention, I am updating from 8.0-community to 8.0-developer-beta). The issue is that the SonarQube Helm chart for Kubernetes deployments cannot handle the rolling update: when I try to deploy a new pod with the updated image, it still tries to attach the existing volume (PVC); see the error below.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 2m24s (x57 over 129m) kubelet, aks-basepool-xxxxxxx Unable to mount volumes for pod "sonarqube-sonarqube-xxxx(cd802a4d-1c02-xxxxx)": timeout expired waiting for volumes to attach or mount for pod "pod-x"/"sonarqube-sonarqube-xxxx". list of unmounted volumes=[sonarqube]. list of unattached volumes=[config install-plugins copy-plugins sonarqube tmp-dir default-token-ztvcd]
In the ideal case, the new pod should come up with the volume attached, reusing the ConfigMaps with the volume just like the old pod did, as that is the expected behavior.
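For what it’s worth, that mount timeout is what you would expect if the PVC is ReadWriteOnce and is still attached to the node running the old pod. Consistent with the Recreate approach discussed earlier in this thread, one workaround (the Deployment name below is illustrative, taken from the default chart naming) is to patch the Deployment so the old pod releases the volume before the new one starts:

    kubectl patch deployment sonarqube-sonarqube \
      -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'

This trades a short window of downtime for a rollout that can actually complete.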