Hello, how are you?
Currently, I am using two versions of SonarQube:
- Development environment: Community Edition Version 9.1 (build 47736)
- Production environment: Developer Edition Version 9.9.1 (build 69595)
In both cases, I need to modify the Elasticsearch disk watermarks used by SonarQube. By default, these values are set to around 85-90% of the disk size, but I need to reduce the free disk space they require even further, without affecting the performance of the SonarQube analysis.
To achieve this, I have run tests that involve editing the elasticsearch.yml and sonar.properties files, as well as disabling the threshold_enabled option. For example:
In the sonar.properties file:
```yaml
cluster.routing.allocation.disk.threshold_enabled: false
cluster.routing.allocation.disk.watermark.low: 5%
cluster.routing.allocation.disk.watermark.high: 5%
cluster.routing.allocation.disk.watermark.flood_stage: 10%
```
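
For comparison, on a standalone Elasticsearch cluster (not the node embedded in SonarQube, whose configuration is regenerated at startup, as the issue quoted below explains) the same watermark settings can also be applied at runtime through the `_cluster/settings` REST API. Below is a minimal Python sketch, assuming a node reachable on `localhost:9200` with security disabled; the values simply mirror the snippet above:

```python
import json
import urllib.request

# Disk-watermark settings to apply cluster-wide at runtime.
# These mirror the YAML snippet above; adjust the values to your needs.
settings = {
    "persistent": {
        "cluster.routing.allocation.disk.threshold_enabled": "false",
        "cluster.routing.allocation.disk.watermark.low": "5%",
        "cluster.routing.allocation.disk.watermark.high": "5%",
        "cluster.routing.allocation.disk.watermark.flood_stage": "10%",
    }
}

# PUT _cluster/settings changes the cluster without a restart
# (assumes a standalone node on localhost:9200, no authentication).
req = urllib.request.Request(
    "http://localhost:9200/_cluster/settings",
    data=json.dumps(settings).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```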
I have also consulted other forums and GitHub threads for additional information. In particular, one GitHub issue (opened 1 May 2020, closed 6 May 2020) describes the same behaviour:
> I am raising this issue here as it is specific to the way Azure App Service allocates disk space on the D: drive and the D:/home folder. I have not been able to work around this, and there are one or two posts online with the same issue and no robust resolution.
>
> `WARN es[][o.e.c.r.a.DiskThresholdMonitor] flood stage disk watermark [95%] exceeded on [0WrMWG4XSEObU4chJQUUvg][sonarqube][D:\home\site\wwwroot\sonarqube-7.9\data\es6\nodes\0] free: 1.2gb[3.9%], all indices on this node will be marked read-only`
>
> [https://github.com/elastic/elasticsearch/issues/53233](https://github.com/elastic/elasticsearch/issues/53233)
> [https://github.com/vanderby/SonarQube-AzureAppService/issues/35](https://github.com/vanderby/SonarQube-AzureAppService/issues/35)
>
> The above post mentioned scaling up the App Service Plan, but this doesn't work if your index is approaching 30GB, and the issue is marked as Closed.
>
> The Elasticsearch index in SonarQube is set to a 95% disk threshold, meaning that when used disk space exceeds 95%, Elasticsearch writes a warning to the es.log file and tries to write to another node. However, there are no other nodes, so the indexing enters an infinite loop. In Elasticsearch you can tweak these settings in the elasticsearch.yml file, e.g. you can turn this check off. However, SonarQube seems to ignore this yml file and creates a temporary one on startup in a temp folder. I tried adding `cluster.routing.allocation.disk.threshold_enabled: false` to the yml file, but it was ignored when the App Service restarted. I did this to prevent Elasticsearch from checking disk space, as I have allocated far more than enough in the App Service Plan.
>
> The more canny might say "just add more disk space" (if only it were that straightforward). Scaling up an App Service Plan adds more disk space to D:/home but not to D:/, which seems to be the OS disk and is fixed at around 32GB. D:/home looks like a symbolic link or share that is actually a dedicated disk from the App Service Plan. I have the plan at 250GB, which I can see in Kudu (the Azure environment tool). It looks like Elasticsearch is checking the free disk space on D: and not on the folder where the index is stored, e.g. D:/home/site/wwwroot/.../data.
>
> I suspect this issue cannot be solved: in Azure the D: drive is fixed, SonarQube doesn't honour Elasticsearch settings, and Elasticsearch just uses the Java libraries to check free disk space. What I am trying to say is that I don't think any of the three would take ownership and solve this problem, which renders the App Service deployment fit only for small deployments with small indexes. This is a shame, as in my opinion this deployment is the most manageable and robust option.
>
> If anybody is able to tell me otherwise, I would be delighted to see a resolution. Thank you.
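
For reference: when the flood-stage watermark is breached, Elasticsearch puts the `index.blocks.read_only_allow_delete` block on the indices hosted by that node, and on older versions (such as the ES 6 bundled with SonarQube 7.9 in the log above) the block is not lifted automatically once space is freed. A hypothetical sketch for clearing it on a standalone node at `localhost:9200`, security disabled:

```python
import json
import urllib.request

# After freeing disk space, remove the read_only_allow_delete block that
# the flood-stage watermark applied; setting it to null resets the setting.
# Assumes a standalone node on localhost:9200 with no authentication.
body = {"index.blocks.read_only_allow_delete": None}

req = urllib.request.Request(
    "http://localhost:9200/_all/_settings",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```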
Thank you in advance for your attention; I hope I have explained my query clearly.
Kind regards.
ganncamp (G Ann Campbell)
October 18, 2023, 1:43pm
Hi,
Welcome to the community!
Sorry, but this just isn’t up for grabs, and we can’t / don’t support modifying the embedded Elasticsearch’s settings.
Ann