WARN es[][o.e.e.NodeEnvironment] lock assertion failed

The Elasticsearch component fills up the log file volume very rapidly with messages like the one below, at seemingly random intervals. I have not been able to tie the occurrence to a specific action, but once the product starts writing these messages the only remedy is to stop SonarQube, clean up the massive log files, and restart the service, which then runs fine for some number of days or weeks.

Has anyone else seen this behavior?

Template for a good bug report, formatted with Markdown:

  • versions used (SonarQube, Scanner, Plugin, and any relevant extension)
    SonarQube 7.9.1 LTS hosted on CentOS 7.5, but I have also seen this on SonarQube 7.3 and other 7.x Community Edition versions.

  • error observed (wrap logs/code around triple quote ``` for proper formatting)

```
2019.10.13 18:03:57 WARN  es[][o.e.e.NodeEnvironment] lock assertion failed
java.nio.file.NoSuchFileException: /var/tmp/sonarqube-7.9.1/data/es6/nodes/0/node.lock
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:92) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
	at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55) ~[?:?]
	at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:145) ~[?:?]
	at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99) ~[?:?]
	at java.nio.file.Files.readAttributes(Files.java:1763) ~[?:?]
	at org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:189) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.elasticsearch.env.NodeEnvironment.assertEnvIsLocked(NodeEnvironment.java:1022) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.env.NodeEnvironment.nodePaths(NodeEnvironment.java:802) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.monitor.fs.FsProbe.stats(FsProbe.java:58) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.monitor.fs.FsService.stats(FsService.java:66) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.monitor.fs.FsService.access$300(FsService.java:36) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.monitor.fs.FsService$FsInfoCache.refresh(FsService.java:84) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.monitor.fs.FsService$FsInfoCache.refresh(FsService.java:73) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.common.util.SingleObjectCache.getOrRefresh(SingleObjectCache.java:54) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.monitor.fs.FsService.stats(FsService.java:61) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.node.NodeService.stats(NodeService.java:114) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:74) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:39) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:138) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:259) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:255) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:692) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) [elasticsearch-6.8.0.jar:6.8.0]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.8.0.jar:6.8.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
```

  • steps to reproduce

I have not found a specific action that causes this to occur.

  • potential workaround
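
So far the only workaround I have is the manual cleanup described above: stop SonarQube, empty the oversized Elasticsearch log, then restart the service. Below is a rough sketch of the truncation step only; the log location (`/var/tmp/sonarqube-7.9.1/logs/es.log`) is an assumption based on the install path shown in the trace, so adjust it to your own layout, and only run it while SonarQube is stopped.

```java
import java.io.IOException;
import java.nio.file.*;

// Illustrative helper (not part of SonarQube): once the service has been
// stopped, empty the runaway Elasticsearch log before restarting.
// The path below assumes the install location visible in the stack trace;
// adjust it to your own installation.
public class CleanEsLog {
    public static void main(String[] args) throws IOException {
        Path esLog = Paths.get("/var/tmp/sonarqube-7.9.1/logs/es.log");
        if (Files.exists(esLog)) {
            long size = Files.size(esLog);
            // Truncate rather than delete, so the file keeps its ownership
            // and permissions for the next SonarQube start.
            Files.write(esLog, new byte[0]);
            System.out.printf("Truncated %s (%d bytes reclaimed)%n", esLog, size);
        }
    }
}
```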

Hello,

What storage and filesystem is the SonarQube data stored on? Could the data directory also be accessed by another program?

Apparently, a lock file used by Elasticsearch is being deleted. It's quite unlikely that either SonarQube or Elasticsearch deleted it.
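
For illustration, the check that produces this WARN is essentially a re-read of the `node.lock` file's attributes on every filesystem-stats poll (`NativeFSLockFactory.ensureValid` in the trace above); if something removes the file while the node is running, that read fails with the `NoSuchFileException` shown. A simplified sketch of the idea, not the actual Lucene code:

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

// Simplified illustration of the lock validity check behind the stack trace:
// Lucene re-reads the attributes of node.lock, and if the file has been
// deleted externally, readAttributes throws NoSuchFileException and
// Elasticsearch logs "lock assertion failed".
public class LockCheckDemo {
    public static void main(String[] args) throws IOException {
        Path lock = Paths.get("/var/tmp/sonarqube-7.9.1/data/es6/nodes/0/node.lock");
        try {
            Files.readAttributes(lock, BasicFileAttributes.class);
            System.out.println("lock file still present: " + lock);
        } catch (NoSuchFileException e) {
            // This is the situation behind the repeated WARN messages:
            // something outside SonarQube/Elasticsearch removed the file.
            System.out.println("lock file is gone: " + e.getFile());
        }
    }
}
```

Running something like this while the WARN messages are appearing would confirm whether the file has in fact disappeared from the data directory.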