How to leverage Maven POM profiles while maintaining code coverage

  • SonarQube Developer Edition
  • Version 8.9.6
  • Java 8 environment
  • Jenkins CI/CD
  • High security so I only have access to the Java project’s POM files and the Jenkinsfile


  • Maintain Java code coverage analysis when using Maven and replacing the POM’s `<modules>` element by two (2) profiles: a “reducedBuildTimeProfile” and a “fullBuildProfile” that contain all of the modules that would otherwise be in the eliminated `<modules>` block.
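
For context, a minimal sketch of the kind of parent-POM change described above. The module and profile names are illustrative only, not taken from the actual project:

```xml
<!-- Parent POM sketch: the static <modules> list is replaced by two profiles.
     Module names here are hypothetical placeholders. -->
<profiles>
  <profile>
    <id>reducedBuildTimeProfile</id>
    <modules>
      <module>core</module>
    </modules>
  </profile>
  <profile>
    <id>fullBuildProfile</id>
    <modules>
      <module>core</module>
      <module>web</module>
      <module>batch</module>
    </modules>
  </profile>
</profiles>
```

Maven does allow a `<modules>` list inside a `<profile>`, so whichever profile is active determines what the reactor builds.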

Thus far I have:

  • Confirmed that SonarQube analysis works as expected prior to removing the elements from the respective POM files
  • Created the POM profiles both with and without the <sonar.profile>profileName</sonar.profile>
  • Using either profile without the `<modules>` element results in the number of lines analyzed dropping from the order of 100k to less than 200

I’m wondering (1) whether this is even possible; (2) how best to debug and correct the root cause; and (3) what alternate approaches would meet the objective. This is a large, monolithic Java application that can’t be refactored.


Welcome to the community!

I’m not understanding what you’re trying to accomplish.

You’re trying to make analysis go faster?
You’re trying to analyze with / without coverage?
Something else?



Thanks for your greetings and reply.

I’ll try rephrasing. It is a longer answer.

I need to have SonarQube analysis work on the entire code base when I reduce the application build time through the usage of profiles in several of the application’s POM files. It isn’t about SonarQube’s analysis time.

However, the SonarQube analysis no longer works when I use the POM profiles, whether to reduce the build time or to build the complete Java application.

What may be causing the problem:

To manage the build time, I created two profiles (reducedBuildTime and fullApplicationBuild) in several of the application’s POM files. Because I am not allowed to refactor the application, I control the build time by limiting compilation to either all of the modules/sub-modules or a subset. I also removed the `<modules>` element from the respective POM files so that compilation is entirely determined by the two profiles.

This causes the number of lines of code seen by SonarQube to drop from 100k to less than 200. Additionally, the Code page of SonarQube no longer displays the modules that I’ve put into the profiles, leading me to believe that SonarQube doesn’t (can’t?) see the modules identified in a profile.

My assumptions were:

  • If the application builds correctly, then those profiles will be used by SonarQube analysis to determine code coverage.
  • That there is a way to pass the name of those profiles to SonarQube
  • That replacing the `<modules>` element (the list of modules to build) by profiles doesn’t negatively impact SonarQube’s ability to see all of the code base’s modules and hence the total lines of code

Naturally, I haven’t found a way to confirm/refute my assumptions.

So the issue is that SonarQube analysis doesn’t seem to support reducing the compile/build time of a Java application when accomplished using profile elements. Or that I can’t find the appropriate mechanism to accomplish this.
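
On the assumption (not confirmed in this thread) that the build and the analysis run in a single Maven invocation, the profile name isn’t passed to SonarQube at all; it is passed to Maven itself with the `-P` flag, and the scanner for Maven analyzes whatever reactor the active profile produces. A hedged sketch of the Jenkins shell step, with placeholder server URL and credentials:

```shell
# Activate the full-build profile so the Maven reactor (and therefore
# SonarQube's view of the project) includes every module.
# Profile name is from the post above; URL and token are placeholders.
mvn clean verify sonar:sonar \
  -PfullApplicationBuild \
  -Dsonar.host.url=https://sonarqube.example.com \
  -Dsonar.login=$SONAR_TOKEN
```

If the build and `sonar:sonar` are instead run as two separate Maven invocations, the same `-P` flag has to be repeated on both, since each invocation resolves its own reactor.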

I hope that this clarifies my objectives/approach/needs.



Hi David,

Let me start by saying my knowledge of Maven is very shallow, and I’m not able to comment coherently on profiles.

SonarQube analysis identifies what code to analyze from the Maven environment. If you’re reducing build time by narrowing Maven’s view of “source code” to just a subset, then that will definitely have an impact on analysis.

Additionally, analysis uses both the code and the class files, so you won’t get a full/thorough analysis of any uncompiled code.

Ehm… SonarQube works from the coverage reports you submit to analysis. The caveat is that if your coverage report includes source files that aren’t under analysis, they’ll essentially be tossed out during report processing.

I know this answer isn’t as comprehensive as you were maybe hoping for. But does it help?


Hi Ann,

Currently my knowledge of SonarQube is at the newbie level … and I inherited the environment too

Perhaps my line of questions should be more like this:

  • How can I control the generation of coverage reports? (The ones that are sent to analysis.)
  • What are the options for this coverage report generation and where do I find the documentation?
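
For what it’s worth, this is my reading of the standard Maven setup rather than anything confirmed from the project’s POMs: with Maven and Java, coverage reports are typically generated by the jacoco-maven-plugin, and SonarQube reads the XML report via the `sonar.coverage.jacoco.xmlReportPaths` property. A minimal sketch (the plugin version is illustrative):

```xml
<!-- Typical jacoco-maven-plugin wiring in the build section. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.8</version>
  <executions>
    <execution>
      <!-- attaches the JaCoCo agent to the test JVM -->
      <goals><goal>prepare-agent</goal></goals>
    </execution>
    <execution>
      <id>report</id>
      <phase>verify</phase>
      <!-- writes the XML report that SonarQube imports -->
      <goals><goal>report</goal></goals>
    </execution>
  </executions>
</plugin>
```

By default the XML report lands in `target/site/jacoco/jacoco.xml`, which is what `sonar.coverage.jacoco.xmlReportPaths` would point at.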

I understand your comments; however, I believe that the lines of code (LOC) shouldn’t be at the 100-ish level, since roughly half of the application is still built/compiled.

So I think that the change has to do with how/which profiles I provide to Maven … and how I can control which modules are analyzed …

My build time reduction approach is not a desirable one, but the client’s constraints upon me prevent a more properly engineered approach.

So I’m hoping that my post will elicit some information beyond that which I’ve found and tried thus far.


Hi David,

Can you post your full analysis log?


Unfortunately no. Client security and confidentiality.

Perhaps my line of questions should be more like this:

  • What should I seek in such files?
  • How can I control the generation of coverage reports? (The ones that are sent to analysis.)
  • What are the options for this coverage report generation and where do I find the documentation?


Coverage reporting is really a different topic than the volume of code analyzed. The docs may help.

For the logs, feel free to redact as necessary. As to what you should look for in them, it’s difficult for me to sum up since this is a novel situation. Do you see the modules you expect being processed during analysis? How many files are found in each module (or in the overall project)? Are files included/excluded?
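
Before digging through the scanner log, it may also be worth confirming what Maven itself thinks the reactor contains under each profile. These are standard Maven goals; the profile names are the ones from earlier in the thread:

```shell
# Show which profiles Maven actually activates for this invocation
mvn -PreducedBuildTime help:active-profiles

# Running a cheap phase such as validate on a multi-module build prints
# the "Reactor Build Order", i.e. the modules Maven will actually build
mvn -PfullApplicationBuild validate
```

If modules are already missing from the reactor build order here, the problem is on the Maven side before SonarQube is ever involved.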