I’m not quite sure what to make of the log messages. There are some MaximumStepsReachedException entries; for one class, three exceptions follow each other directly, with timestamp differences of 150 ms and 60 ms, so they do not seem to take long.
What we get a lot of are messages where classes were not found. Those have the form
.class not found for java.lang.OurOwnClass
or
.class not found for path.to.our.package.String
or
.class not found for java
.class not found for java.util
(for packages nested more deeply, this is logged for each package level), or
.class not found for path.to.our.package.package-info
In the 10 seconds between progress reports (INFO: N/M files analyzed...), sometimes two, sometimes four or six files are analyzed; the number of failed class lookups appears unrelated.
Do you have any further advice on how to check what takes up the time?
This seems rather slow, and it looks like the semantic analysis is doing a lot of work: how is your project analyzed, and how do you provide bytecode to the analyzer?
Moreover, another interesting point for the investigation would be to deactivate the custom rules and see whether the culprit is among them: if you keep a reference to a tree between files, you may end up with a memory leak, which would put pressure on the GC.
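To illustrate the leak pattern, here is a minimal sketch of a custom rule built on the SonarJava IssuableSubscriptionVisitor API; the rule key and the cached field are hypothetical, not taken from your code:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.sonar.check.Rule;
import org.sonar.plugins.java.api.IssuableSubscriptionVisitor;
import org.sonar.plugins.java.api.tree.Tree;

// Hypothetical rule showing the anti-pattern: the static list outlives each
// file's analysis, so every syntax tree it references stays reachable and
// the GC can never reclaim the trees of already-analyzed files.
@Rule(key = "LeakyTreeCache")
public class LeakyTreeCacheRule extends IssuableSubscriptionVisitor {

  private static final List<Tree> cachedTrees = new ArrayList<>();

  @Override
  public List<Tree.Kind> nodesToVisit() {
    return Collections.singletonList(Tree.Kind.CLASS);
  }

  @Override
  public void visitNode(Tree tree) {
    cachedTrees.add(tree); // pins the whole file's tree in memory
  }
}

The fix is simply not to retain trees (or anything reachable from them) beyond the current file.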
Currently, we have about 750 individual modules that make up our full project. The sub-projects are clustered into generic categories, giving a folder structure like root/{category}/plugins/{module}. Below each {category} are also the plugins containing the test files; below each {module} there are the src and bin folders for sources and class files, respectively. Initially, the SonarScanner was configured as
sonar.source=projects/../plugins/../src/
sonar.java.binaries=projects/../plugins/../bin/
I don’t know why it was done this way, and I don’t know why the Scanner found any source files at all. In any case, the analysis was performed with our own binaries missing. It was corrected to
sonar.sources=projects
sonar.java.binaries=projects/*/plugins/*/bin
Currently, I’m unsure if we even correctly configured our multi-module project.
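For illustration, this is the multi-module syntax I suspect we should be using; the module names and paths below are placeholders, not our real layout:
sonar.modules=moduleA,moduleB
moduleA.sonar.projectBaseDir=projects/categoryX/plugins/moduleA
moduleA.sonar.sources=src
moduleA.sonar.java.binaries=bin
moduleB.sonar.projectBaseDir=projects/categoryX/plugins/moduleB
moduleB.sonar.sources=src
moduleB.sonar.java.binaries=bin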
We analyze our project using the SonarScanner for Jenkins, which uses version 3.0.3.778 of the SonarQube Scanner.
To provide further information: we have analyzed our project once with custom rules and once without. The analysis was just as slow without them, so unfortunately the culprit is not among the custom rules.
Can you provide some overall numbers:
- How many lines of code does this project have?
- How many files does it consist of?
- How long does the analysis take?
- What is the hardware specification of the machine used for the analysis?
- Do you set any performance-tuning-related parameters (e.g. JVM memory) for the analysis? (See the example after this list.)
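By JVM memory I mean, for example, raising the scanner's heap via its standard SONAR_SCANNER_OPTS environment variable before launching the analysis (the 4g value below is purely illustrative):
SONAR_SCANNER_OPTS=-Xmx4g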
Then to go a bit more into the details, can you have a look at the analysis logs and identify which stage of the analysis is taking up most of the time? If you are able to share the logs, I’d be happy to help with that (feel free to send me a direct message).
- around 3 million lines of code
- it consists of roughly 30k files
- the analysis takes around 62 hours and fails at the end
Between the 10-second INFO progress reports we are receiving multiple "Warn: Too many duplication references on file myFile.java for block at line x. Keep only the first 100 references." messages. The longer the analysis runs, the more of these warnings appear. Could this be due to a wrong configuration?
We are using the following configuration:
sonar.projectKey=MyProject
sonar.projectName=MyProject
sonar.projectVersion=1.0
sonar.sources=.
sonar.java.binaries=**/bin
sonar.java.libraries=D:/jdk/lib/*.jar,D:/jdk/jre/lib/**/*.jar,**/*_if.jar
sonar.showProfiling=true
sonar.sourceEncoding=UTF-8
- Processor: Intel Xeon CPU @ 3.60 GHz, RAM: 64 GB
- We didn’t set any performance-tuning-related parameters for the analysis.
Versions:
- Sonar Plugin API 6.7
- SonarJava Plugin 5.1.0.13090
- SonarScanner 3.2.0.1227
Can you try the following:
- Set the sonar.cpd.exclusions=**/* property to skip the duplication analysis entirely (this should remove all "Warn: Too many duplication references on file" messages); see the snippet after this list.
- Update your SonarJava plugin to the latest 5.9.1 version.
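For reference, the exclusion would simply be one more line in the sonar-project.properties you posted above:
# skip copy/paste-detection so the duplication index is not built at all
sonar.cpd.exclusions=**/*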
If the problem persists after those 2 changes, it would be helpful if you could share the original logs of the analysis for further investigation.
Thank you for sharing the analysis logs privately. It indeed seems that around 6-7 seconds is being spent per file, which is unusually high.
To help pinpoint the root cause, can you relaunch an analysis with all Java rules disabled? You can create a new quality profile, disable all rules there, and then set the project to use this profile.
With this, we’ll be able to tell if the problem is coming from the rules or from the earlier stages of the analysis.
Thank you again