Hey @Adam_Birem,
Thank you for reporting this issue. I understand it might seem like a lot of time, but there is some non-trivial cache validation happening in the background between these two log lines, and its outcome ultimately decides how long the analysis that follows will take.
[INFO] 13:57:20.447 The Java analyzer is running in a context where unchanged files can be skipped. Full analysis is performed for changed files, optimized analysis for unchanged files.
[INFO] 13:57:23.435 Server-side caching is enabled. The Java analyzer was able to leverage cached data from previous analyses for 1521 out of 3882 files. These files will not be parsed.
The short version is that we cannot rely on SCM information alone to decide whether to reuse cached results for a file that has not changed. In your case, there seem to be 2361 files, including the one that actually changed, for which the cache cannot be reused. To ensure that the cached results are still valid, we also need to check that the .class files we rely on for accurate semantics are present AND have not changed compared to what was used on the base branch.
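To make the idea concrete, here is a rough sketch in Java of the kind of per-file decision involved. All the names here (`canReuseCache`, the parameters) are hypothetical and invented for illustration; the real analyzer's checks are more involved than this:

```java
import java.util.Map;
import java.util.Optional;

public class CacheReuseCheck {

    // Hypothetical sketch: cached results for a file are reusable only if
    // (1) SCM reports the source as unchanged,
    // (2) the corresponding .class file is present, and
    // (3) its hash matches the one recorded for the base branch.
    static boolean canReuseCache(boolean scmUnchanged,
                                 Optional<String> classFileHash,
                                 Map<String, String> baseBranchHashes,
                                 String path) {
        if (!scmUnchanged) {
            return false; // source changed: full analysis is required anyway
        }
        if (classFileHash.isEmpty()) {
            return false; // bytecode missing: cache cannot be trusted
        }
        // Bytecode must be byte-for-byte what the base branch analysis used.
        return classFileHash.get().equals(baseBranchHashes.get(path));
    }
}
```

This is why a missing or differently compiled .class file forces a full re-analysis even when the source file itself did not change.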
So let’s go through a couple of questions to see if there is something that can be done on your side to improve the situation.
- Was the code compiled before invoking the scanner? If the bytecode is missing, that alone could explain it.
- Is the bytecode produced with the same configuration on the base branch and on the PR? Is a different compiler used?
- Do you experience the same issue if you remove the `-T 1C` option from the analysis command?
Cheers,
Dorian