SonarLint accesses project files during a build job

I am a developer of Bndtools. A customer sees messages that files are locked when we try to write them during a build job. The build job locks the workspace; however, looking at the code, AbstractSonarProjectJob.java is scheduled without a scheduling rule, which lets it run in parallel with the build. So when the compiler you use to create an AST opens a JAR, we might be trying to write that very file at the same moment. Since SonarLint has the file open, we get an error on Windows. (Other operating systems do not lock these files.)

The following stacktrace was gathered with a file leak detector:

#239 D:\Eclipse\iguana\com.example\generated\com.example.application.api.jar by thread:Worker-10: SonarLint processing file /com.example.gms.provider/src/com.example/gms/admin/GmsAdminImpl.java on Thu Jul 23 18:44:52 CEST 2020
    at java.util.zip.ZipFile.<init>(ZipFile.java:156)
    at java.util.zip.ZipFile.<init>(ZipFile.java:169)
    at org.eclipse.jdt.internal.compiler.batch.ClasspathJar.initialize(ClasspathJar.java:199)
    at org.eclipse.jdt.internal.compiler.batch.FileSystem.<init>(FileSystem.java:222)
    at org.eclipse.jdt.internal.compiler.batch.FileSystem.<init>(FileSystem.java:264)
    at org.eclipse.jdt.core.dom.NameEnvironmentWithProgress.<init>(NameEnvironmentWithProgress.java:36)
    at org.eclipse.jdt.core.dom.CompilationUnitResolver.resolve(CompilationUnitResolver.java:695)
    at org.eclipse.jdt.core.dom.ASTParser.internalCreateAST(ASTParser.java:1217)
    at org.eclipse.jdt.core.dom.ASTParser.createAST(ASTParser.java:832)
    at org.sonar.java.model.JParser.parse(JParser.java:284)
    at org.sonar.java.ast.JavaAstScanner.simpleScan(JavaAstScanner.java:92)
    at org.sonar.java.ast.JavaAstScanner.scan(JavaAstScanner.java:64)
    at org.sonar.java.JavaSquid.scanSources(JavaSquid.java:120)
    at org.sonar.java.JavaSquid.scan(JavaSquid.java:113)
    at org.sonar.plugins.java.JavaSquidSensor.execute(JavaSquidSensor.java:103)
    at org.sonarsource.sonarlint.core.analyzer.sensor.SensorsExecutor.executeSensor(SensorsExecutor.java:80)
    at org.sonarsource.sonarlint.core.analyzer.sensor.SensorsExecutor.execute(SensorsExecutor.java:71)
    at org.sonarsource.sonarlint.core.container.analysis.AnalysisContainer.doAfterStart(AnalysisContainer.java:134)
    at org.sonarsource.sonarlint.core.container.ComponentContainer.startComponents(ComponentContainer.java:125)
    at org.sonarsource.sonarlint.core.container.ComponentContainer.execute(ComponentContainer.java:110)
    at org.sonarsource.sonarlint.core.container.storage.StorageAnalyzer.analyze(StorageAnalyzer.java:75)
    at org.sonarsource.sonarlint.core.container.storage.StorageContainerHandler.analyze(StorageContainerHandler.java:81)
    at org.sonarsource.sonarlint.core.ConnectedSonarLintEngineImpl.lambda$analyze$0(ConnectedSonarLintEngineImpl.java:152)
    at org.sonarsource.sonarlint.core.ConnectedSonarLintEngineImpl.withReadLock(ConnectedSonarLintEngineImpl.java:344)
    at org.sonarsource.sonarlint.core.ConnectedSonarLintEngineImpl.withReadLock(ConnectedSonarLintEngineImpl.java:334)
    at org.sonarsource.sonarlint.core.ConnectedSonarLintEngineImpl.analyze(ConnectedSonarLintEngineImpl.java:149)
    at org.sonarlint.eclipse.core.internal.server.Server.runAnalysis(Server.java:306)
    at org.sonarlint.eclipse.core.internal.jobs.AnalyzeConnectedProjectJob.runAnalysis(AnalyzeConnectedProjectJob.java:72)
    at org.sonarlint.eclipse.core.internal.jobs.AnalyzeConnectedProjectJob.runAnalysis(AnalyzeConnectedProjectJob.java:1)
    at org.sonarlint.eclipse.core.internal.jobs.AbstractAnalyzeProjectJob.run(AbstractAnalyzeProjectJob.java:405)
    at org.sonarlint.eclipse.core.internal.jobs.AbstractAnalyzeProjectJob.runAnalysisAndUpdateMarkers(AbstractAnalyzeProjectJob.java:208)
    at org.sonarlint.eclipse.core.internal.jobs.AbstractAnalyzeProjectJob.doRun(AbstractAnalyzeProjectJob.java:170)
    at org.sonarlint.eclipse.core.internal.jobs.AbstractSonarProjectJob.run(AbstractSonarProjectJob.java:45)
    at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)

Solution

The solution is to provide a scheduling rule when a job is scheduled. This rule should come from the workspace root, which you can get via org.eclipse.core.resources.ResourcesPlugin.getWorkspace().getRoot(). You can then use org.eclipse.core.resources.IWorkspace.getRuleFactory() to create a rule. I am no expert, but I think the org.eclipse.core.resources.IResourceRuleFactory.copyRule(IResource, IResource) rule would be best since you are only reading.
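In code, what I have in mind is roughly this (a sketch against the Eclipse platform APIs, so it only compiles inside an Eclipse runtime; `analysisJob` stands in for your AbstractSonarProjectJob, and using the workspace root as the rule is the most conservative choice, not necessarily the best one):

```java
import org.eclipse.core.resources.IWorkspace;
import org.eclipse.core.resources.IWorkspaceRoot;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.jobs.ISchedulingRule;
import org.eclipse.core.runtime.jobs.Job;

// Sketch: give the analysis job a scheduling rule so it cannot run
// concurrently with builds that lock the same resources.
void scheduleWithRule(Job analysisJob) {
    IWorkspace workspace = ResourcesPlugin.getWorkspace();
    IWorkspaceRoot root = workspace.getRoot();
    // IWorkspaceRoot is itself an ISchedulingRule: conflicts with
    // anything touching the workspace. A finer-grained rule from
    // workspace.getRuleFactory() (e.g. scoped to the analyzed
    // project) would allow more parallelism.
    ISchedulingRule rule = root;
    analysisJob.setRule(rule);
    analysisJob.schedule();
}
```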

I did not look into your code in detail, but becoming part of the build by implementing an incremental builder would probably be simpler and faster.

I tried to create a PR. However, the sonar-java on Github had test errors:

[ERROR] Failures: 
[ERROR]   CallToDeprecatedCodeMarkedForRemovalCheckTest.test:34 No issue raised. At least one issue expected
[ERROR]   CallToDeprecatedMethodCheckTest.flagged_for_removal_should_not_raise_issue:46 Unexpected at [19, 23, 28, 39]
[ERROR]   AssertJChainSimplificationCheckTest.testJava11Cases:42 No issue raised. At least one issue expected
[ERROR]   GeneratedCodeFilterTest.test:34 Line #10 has been marked with 'NoIssue' but issue of rule 'java:S100' has been accepted!
[ERROR]   SuppressWarningFilterTest.verify:48 Line #184 has been marked with 'WithIssue' but no issue have been raised!
[INFO] 
[ERROR] Tests run: 962, Failures: 5, Errors: 0, Skipped: 0

If I can build the Github repositories I can prepare a PR for you.


Hello Peter,

First of all, welcome to the community and thanks for raising this issue! Let me summarize where we are today.

Using a rule is what we used to do at some point, but it gave an impression of overall slowness because it prevented other jobs from running in parallel, and SonarLint analysis can take some time to execute on large files. So with the ticket that we implemented here, we decided to remove the rule mechanism.

“Becoming part of the build” is something we have already considered (see here), if this is what you mean. We ran an experiment on that, but it has not been integrated for the moment.

As a possible workaround for the moment, you can ask your customer to deactivate the “Run SonarLint automatically” option in the project properties. Of course this is not ideal, but we need to take some time to think about the right way to solve this problem. We will keep you informed here.

Regarding the unit test failures, I tried to reproduce on both Linux and Windows, but all the tests pass. How did you launch the tests? Was it on Windows? Anything particular in your setup?

Thanks again
Damien


Hello Peter,

Would you be able to share a project that would help us reproduce the problem? Maybe a link to a project on GitHub or an archive on a file drop?

Thanks
Damien

First thanks for the quick reply! Very nice to see how this is taken up.

There are open-source projects, but I am not sure that would be that useful. The problem is clearly that SonarLint writes a JAR file that is being built. We had a similar problem with Groovy, which forgot to close its class loader.

From your description and the patch from 2017, it is clear that you are reading these files during the build process. Bndtools will run into this problem more often because we tend to write the JAR file on every build. However, it is not deterministic, because you tend to keep the files open only for a short time, and only after a save (which tends to trigger a build). So even if you do not see it in an open-source project, the problem is still there. I can give you a project (e.g. https://github.com/AlloyTools/org.alloytools.alloy), but I expect it will take some time for you to get it all working, and the absence of the problem occurring is no proof.

Since this kind of problem happens more often on Windows (we found similar bugs in Eclipse indexing and Groovy), we have built in a workaround: we keep retrying the write for 5 seconds, which usually suffices. However, that is of course not a good solution.
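For illustration, our workaround looks roughly like this (a simplified sketch, not the actual Bndtools code; the 5-second timeout is the value mentioned above, the 100 ms retry delay is an arbitrary choice):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RetryWrite {
    // Keep retrying a write for up to timeoutMs, in case another
    // process (e.g. an analyzer) briefly holds the file open on Windows.
    public static void writeWithRetry(Path file, byte[] data, long timeoutMs)
            throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        IOException last = null;
        do {
            try {
                Files.write(file, data);
                return; // success
            } catch (IOException e) {
                last = e;          // probably locked: remember and retry
                Thread.sleep(100); // give the other process time to close it
            }
        } while (System.currentTimeMillis() < deadline);
        throw last; // still locked after the timeout
    }
}
```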

I think the only solution is to obey the Eclipse scheduling rules and reinstate the build rule. This is the only way to be sure you are using the correct artefact. Theoretically, you could actually analyze against the wrong JAR if you do not follow the build exclusion rule.

I guess if you could participate in the compilation process, that would save a lot of processing time, since the AST would then have to be created only once.

For the errors in the build, here are two of them in detail:

-------------------------------------------------------------------------------
Test set: org.sonar.java.filters.SuppressWarningFilterTest
-------------------------------------------------------------------------------
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.607 s <<< FAILURE! - in org.sonar.java.filters.SuppressWarningFilterTest
verify  Time elapsed: 0.607 s  <<< FAILURE!
java.lang.AssertionError: Line #200 has been marked with 'NoIssue' but issue of rule 'java:S1874' has been accepted!
        at org.sonar.java.filters.SuppressWarningFilterTest.verify(SuppressWarningFilterTest.java:48)

-------------------------------------------------------------------------------
Test set: org.sonar.java.checks.CallToDeprecatedMethodCheckTest
-------------------------------------------------------------------------------
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.327 s <<< FAILURE! - in org.sonar.java.checks.CallToDeprecatedMethodCheckTest
flagged_for_removal_should_not_raise_issue  Time elapsed: 0.153 s  <<< FAILURE!
java.lang.AssertionError: Unexpected at [19, 23, 28, 39]
        at org.sonar.java.checks.CallToDeprecatedMethodCheckTest.flagged_for_removal_should_not_raise_issue(CallToDeprecatedMethodCheckTest.java:46)

If you need all 5 of them, let me know.

Hope this helps, kind regards,

Peter Kriens

Is there any progress on this issue? We built a delay into our software to retry the write, but sometimes SonarLint holds the file lock for over 2 minutes. This must also cause problems in other build environments.

BTW, my customer is SMA, and they are a paying customer. It would really be nice if the analysis job made sure it did not interfere with the build, even if only as an option.

Hi @pkriens

I guess you mean “read” here.

I hope this is not the only solution, because I doubt we will reintroduce the rule. SonarLint is not supposed to “lock” the workspace at any time. We are not too concerned about the “atomicity” of analysis, because any concurrent change that occurs while an analysis is in progress will simply trigger another analysis just after.

This would come at the cost of a tight coupling with JDT, and we would have to adapt to all the different versions of Eclipse. For now we are better off embedding our own parser (which today is a bleeding-edge version of ECJ, but that has not always been the case).

Coming back to your issue, I think we could try something different. Currently we collect the entire project classpath and pass it to our Java analyzer, which will indeed create a classloader and therefore lock the files on Windows.
I don’t think it is practical to create a temporary copy of all JARs every time we do an analysis, but we could maybe try to be smarter and decide based on whether the JAR is part of the workspace or not.
I will look at your example project to see what it looks like.
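Very roughly, the “smarter” idea would be something like this: only JARs inside the workspace (the ones a build may rewrite) get copied to a cache before analysis, keyed by content hash so unchanged JARs are copied only once. This is purely a sketch to discuss; the workspace check and the cache layout are assumptions, not an actual design:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class JarSnapshotCache {
    private final Path cacheDir;
    private final Path workspaceRoot;

    public JarSnapshotCache(Path cacheDir, Path workspaceRoot) {
        this.cacheDir = cacheDir;
        this.workspaceRoot = workspaceRoot;
    }

    // Return a path that is safe to keep open during analysis: JARs
    // outside the workspace are assumed stable and used as-is; JARs
    // inside it are copied once per content version into the cache.
    public Path stablePathFor(Path jar) throws IOException {
        if (!jar.toAbsolutePath().startsWith(workspaceRoot.toAbsolutePath())) {
            return jar; // external dependency, the build will not rewrite it
        }
        byte[] content = Files.readAllBytes(jar);
        Path cached = cacheDir.resolve(sha256(content) + ".jar");
        if (!Files.exists(cached)) {
            Files.createDirectories(cacheDir);
            // Write to a temp file, then move: readers never see a partial copy.
            Path tmp = Files.createTempFile(cacheDir, "jar", ".tmp");
            Files.write(tmp, content);
            Files.move(tmp, cached, StandardCopyOption.REPLACE_EXISTING);
        }
        return cached;
    }

    private static String sha256(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```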

Well, you’re primarily not supposed to step on someone else’s toes! :slight_smile: You normally schedule outside the build to prevent this from happening.

  • You only have to do this on Windows, since other operating systems do not lock open files.
  • Since most buildpath files do not change, and many are shared between projects, you can easily cache these copies to minimize the copying overhead.
  • If you become part of the build, you can execute the cached copy operation during the build, guaranteed not to disturb the build itself. The actual analysis can then happen in another, cancellable job. If the build is restarted, you cancel the analysis job.

Let me know if I can help.

Builds happen frequently and can be quite slow on some setups. The same is true for analysis (especially full-project analysis). If we prevent them from running concurrently, the user experience tends to be very bad (we have even had deadlocks).
In a perfect world, we would be able to quickly capture a consistent, atomic “snapshot” of the entire context, then run the analysis. Should a build occur meanwhile and make the “snapshot” outdated, we would just schedule a new analysis.

So what is stopping you? That perfect world is in reach, isn’t it? The incremental builder interface allows you to quickly see the resource delta since the last build. That makes it trivial to snapshot and analyze only what changed in another job, as well as to cancel any job that was already analyzing. Builds will (in the end) happen in dependency order, which I think will significantly reduce the number of jobs that need cancelling. Plus you get the optimizations on the Eclipse side to minimize building.
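To illustrate what I mean, very roughly (a sketch against the Eclipse builder API, not working code; the two helper methods are hypothetical placeholders, not real SonarLint APIs):

```java
import java.util.Map;
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IResourceDelta;
import org.eclipse.core.resources.IncrementalProjectBuilder;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.jobs.Job;

// Sketch: snapshot during the build, analyze in a separate cancellable job.
public class SonarLintBuilder extends IncrementalProjectBuilder {

    private volatile Job analysisJob; // currently scheduled analysis, if any

    @Override
    protected IProject[] build(int kind, Map<String, String> args,
            IProgressMonitor monitor) throws CoreException {
        // The delta tells us exactly what changed since the last build.
        IResourceDelta delta = getDelta(getProject());

        // We run inside the build, so nothing else is writing: it is
        // safe to take copies of the changed files here.
        Object snapshot = takeSnapshot(delta); // hypothetical helper

        // A new build invalidates any analysis still in flight.
        Job previous = analysisJob;
        if (previous != null)
            previous.cancel();

        analysisJob = newAnalysisJob(snapshot); // hypothetical helper
        analysisJob.schedule(); // analysis runs outside the build
        return null;
    }

    private Object takeSnapshot(IResourceDelta delta) {
        return delta; // placeholder: copy changed JARs/classes here
    }

    private Job newAnalysisJob(Object snapshot) {
        return null; // placeholder: wrap the SonarLint analysis in a Job
    }
}
```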

Realize that you’re getting your speed now because you are running through a red light :slight_smile:

PDE, Bndtools, and many others use the build. As a user of these, and as the author of Bndtools, I have never seen deadlocks in the builder. They must have been caused by how you interact with the scheduling system?

I do admit it took me years before I ‘got’ the Eclipse build architecture, and I think we started off quite badly. There is a surprising lack of documentation and examples out there.

Let me know how I can help …