Elasticsearch log eating up storage

Hi,

I recently upgraded SonarQube from 6.7 to 7.9.1. I checked and found that the Elasticsearch (ES) log has eaten all the storage, filling up with this error:

2019.12.09 11:16:16 WARN es[][o.e.c.s.ClusterApplierService] failed to notify ClusterStateListener
org.apache.lucene.store.AlreadyClosedException: Underlying file changed by an external force at 2019-12-07T10:54:47Z, (lock=NativeFSLock(path=/apps/sonarqube/data/es6/nodes/0/node.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid],creationTime=2019-12-07T10:54:47.458359Z))
    at org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:191) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
    at org.elasticsearch.env.NodeEnvironment.assertEnvIsLocked(NodeEnvironment.java:1022) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.env.NodeEnvironment.availableIndexFolders(NodeEnvironment.java:864) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.gateway.MetaStateService.loadIndicesStates(MetaStateService.java:89) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.gateway.DanglingIndicesState.findNewDanglingIndices(DanglingIndicesState.java:137) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.gateway.DanglingIndicesState.findNewAndAddDanglingIndices(DanglingIndicesState.java:122) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.gateway.DanglingIndicesState.processDanglingIndices(DanglingIndicesState.java:87) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.gateway.DanglingIndicesState.clusterChanged(DanglingIndicesState.java:191) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateListeners$7(ClusterApplierService.java:495) [elasticsearch-6.8.0.jar:6.8.0]
    at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) [?:?]
    at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734) [?:?]
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) [?:?]
    at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateListeners(ClusterApplierService.java:492) [elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:475) [elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:419) [elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:163) [elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-6.8.0.jar:6.8.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
2019.12.09 11:16:16 WARN es[][o.e.g.G.InternalPrimaryShardAllocator] [issues][2]: failed to list shard for shard_started on node [Zlg-KG-MTH2LWfXnfH7WnQ]
org.elasticsearch.action.FailedNodeException: Failed node [Zlg-KG-MTH2LWfXnfH7WnQ]

Can I disable ES?

Hi,

Not without disabling SonarQube.

Instead of turning off ES you need to address the underlying problem. What process is changing these files and how do you stop it? Are you running some sort of virus scan on that server? If so, you need to configure it to leave SonarQube’s files alone.
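To find out what is touching the lock file, you could start with something like the commands below. This is only a sketch: the lock-file path is taken from the exception in the log, the temporary stand-in file is used so the commands can be tried safely, and `lsof`/GNU `stat` availability varies by distro.

```shell
#!/bin/sh
# The real lock file from the exception message would be:
#   /apps/sonarqube/data/es6/nodes/0/node.lock
# A temporary stand-in is used here so the commands run anywhere.
LOCK=$(mktemp)

# 1. Check when the file was last changed and compare it with the
#    "changed by an external force at ..." timestamp in the exception.
#    (GNU coreutils syntax; BSD/macOS uses `stat -f '%Sc'` instead.)
stat -c 'last change: %z' "$LOCK"

# 2. List processes that currently have the file open. On the real
#    lock file, only SonarQube's embedded Elasticsearch JVM should
#    appear; anything else (an antivirus scanner, a backup agent)
#    is the "external force" to exclude from /apps/sonarqube/data.
command -v lsof >/dev/null && lsof "$LOCK"

rm -f "$LOCK"
```

On the real server you would point `LOCK` at the actual `node.lock` path and run the commands while SonarQube is up, then add whatever process shows up to your scanner's exclusion list.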

Ann

Hi Ann,

Thanks for the information. I restarted the SonarQube service, and now Elasticsearch is running well.