Code coverage for the current build versus an earlier build (not just the immediately previous one)


I’m able to generate the code coverage status report for the current build versus the immediately previous build. But I have a scenario where I need the code coverage report for the current build versus the build before the previous one (i.e. build 15 vs build 13), and also a way to get stats for the current build versus a build from a few months ago (i.e. build 15 vs build 2).

Thank you.


Welcome to the community!

Take a look at the api/measures/search_history web service to get historical metric values.
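For example, a search_history call might look like this. The host and project key below are placeholders, not values from this thread, and the sketch only prints the request it would make:

```shell
# Query historical metric values from SonarQube's web API.
# SONAR_URL and PROJECT_KEY are assumed placeholders for your setup.
SONAR_URL="https://sonarqube.example.com"
PROJECT_KEY="org.example:my-project"
# 'metrics' is a comma-separated list; 'from'/'to' bound the history window.
QUERY="component=${PROJECT_KEY}&metrics=coverage,lines_to_cover&from=2019-01-01&to=2019-06-30"
echo "GET ${SONAR_URL}/api/measures/search_history?${QUERY}"
# In practice, run it with something like:
#   curl -s -u "${SONAR_TOKEN}:" "${SONAR_URL}/api/measures/search_history?${QUERY}"
```

You can then compute the delta between any two historical values yourself, since the service returns one entry per analysis in the window.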


Hi, @

@vinayreddyp87 and I are working on the same project.

We aim to get the delta code coverage between 2 points in time.

My team uses Java + JaCoCo + Maven + SonarQube + Git.

Our project has 2 test scenarios.

  1. JUnits are run on CI.
  2. Integration, GUI and manual testing are done in a different environment after the artifact is produced and deployed to Artifactory.

So, whenever we do our CI build we get the code coverage, as well as the “Coverage on new code”, for whatever the JUnits can cover. BTW, the leak period is set to “previous_version”. Once an artifact is built, we install it in a testing environment where we do GUI testing, manual testing and some integration tests, with JaCoCo attached as a Java agent. Now we have two .exec files (JaCoCo binary dumps): one from CI and one from the testing environment. These two can be merged using JaCoCo.
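That merge step can be expressed with the jacoco-maven-plugin’s merge goal, for instance. This is only a sketch; the directory layout and file names are assumptions about your build:

```xml
<!-- Sketch: combine the CI dump and the test-environment dump into one
     file before analysis. Adjust directory/includes to where your .exec
     files actually land. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>merge-coverage</id>
      <phase>verify</phase>
      <goals><goal>merge</goal></goals>
      <configuration>
        <fileSets>
          <fileSet>
            <directory>${project.build.directory}/coverage-dumps</directory>
            <includes><include>*.exec</include></includes>
          </fileSet>
        </fileSets>
        <destFile>${project.build.directory}/merged.exec</destFile>
      </configuration>
    </execution>
  </executions>
</plugin>
```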

Let’s say I get 40% code coverage from the CI unit testing, 30% from the manual testing, and a total coverage of 55% when both are combined.

How can I get that 55% code coverage on the SonarQube Dashboard and also “code coverage on the new code”?

In one month, 1000 lines were added, of which 700 were covered by unit testing and 500 by integration testing, with 800 distinct lines covered overall (400 lines were covered by both). I want SonarQube to report 80% “coverage on new code”.

We need to get this number every month, and in between there are many CI builds, meaning many sonar scans.

We can maintain different sonar projects to achieve this.

I have thought of/tried these approaches:

  1. Compile the source code and run the JUnits; “jacoco-ut.exec” is produced under target\site\jacoco-ut.exec and is uploaded to the SonarQube server to calculate coverage. I can merge this file with the JaCoCo dump from the second testing scenario. Then I do a sonar scan, which gives me the total code coverage from both testing scenarios.

Now to get “Coverage on the new code”.

I can think of a hack where I do a sonar scan on one-month-old code and pass in the merged JaCoCo dump (let’s assume I preserved those files) to get the total code coverage. Then I update my source code to the latest version and pass the JaCoCo dump from the latest testing to the SonarQube server to get both the total coverage and the “Coverage on the new code”.

One limitation I have found is that while the sonar scan is running, the SCM should also be in the same state, since SonarQube does a “git blame”. So, when I did a scan on one-month-old code, my SCM was in a different state, given that everything happens on the MASTER branch.

Please suggest some ways to get “Coverage on the new code” easily, since there are blockers in the approaches I mentioned above.


Thanks for providing a detailed scenario. I see multiple questions here. As a best practice we try to keep it to one question per thread because otherwise it can get very tangled and difficult to know what’s answered satisfactorily and what’s still outstanding. Nonetheless, I’ll do my best here, but if you have followups on multiple questions I do ask that you open new threads.

For this you’re going to need to do all your testing before analysis runs, and then feed your unified report into analysis. As a side note, support for the .exec coverage report format is deprecated; it has been replaced by XML. And you can easily import multiple XML reports without needing to run a consolidator. Details are in the docs.
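With the XML format, the import boils down to pointing the analysis at the report paths. The property name comes from SonarQube’s JaCoCo integration; the paths below are assumptions about a typical Maven layout:

```properties
# sonar-project.properties (or passed as -D flags to the scanner)
sonar.coverage.jacoco.xmlReportPaths=\
  target/site/jacoco-ut/jacoco.xml,\
  target/site/jacoco-it/jacoco.xml
```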

I think what this boils down to is that you can’t access historical values for Coverage on New Code, and that’s blocking the comparisons you want to do. If I’ve interpreted that correctly, then no, those historical “on New Code” values aren’t going to be available, because we don’t store them. We don’t see the point of saying that “Coverage on New Code was 73% a year ago”; “on New Code” values are about right now.

If you’re going to do this, you need to make sure you check out the code from a month ago. Analysis will read its blame data from the checked out code, so this is not about the state of the SCM server.
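A toy sketch of that two-pass flow, run against a throwaway repository it creates on the fly (in a real run you would pick the baseline with something like git rev-list -1 --before="1 month ago" master, and the scan commands shown in comments would be your actual mvn sonar:sonar invocations):

```shell
# Demonstrates checking out an older commit, "scanning", and returning
# to the branch tip. Everything happens in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "month-old state"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "current state"
# Stand-in for: git rev-list -1 --before="1 month ago" master
BASELINE=$(git rev-list -1 --skip=1 HEAD)
git checkout -q "$BASELINE"   # first scan runs here, e.g. mvn sonar:sonar
git checkout -q -             # back to the tip; second scan runs here
git log --oneline -1
```

The point is that the working copy (and its blame data) is what analysis reads, so the checkout itself is the whole trick.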



Thanks a lot, @ganncamp for spending time here and replying back.

I will try this.

I have one more question: does SonarQube provide any internal API (not the HTTP REST APIs) that we can consume for

  1. Diffing two versions of the codebase. The SonarQube team has already solved this problem in generating the “Coverage on new code”, and we want to understand how. We can’t just blindly do the maths between added/modified lines of code, since many lines will be empty, comments, or contain only declarations.

I had a look at the SonarQube source code and found a Java class, ScmChangedFilesProvider, but didn’t understand much.


Hi Subham,

We “solved” this largely by just getting the blame data from the SCM and paying attention to lines with dates after the New Code Period baseline analysis.
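As a rough sketch of that idea (illustrative names only, not SonarQube internals): a line enters the New Code denominator when its blame date is after the baseline and the coverage report marks it coverable.

```java
import java.time.LocalDate;
import java.util.List;

public class NewCodeCoverage {
    /** Per-line facts: blame date, coverable per the report, covered per the report. */
    static final class Line {
        final LocalDate blameDate;
        final boolean coverable;
        final boolean covered;
        Line(LocalDate blameDate, boolean coverable, boolean covered) {
            this.blameDate = blameDate;
            this.coverable = coverable;
            this.covered = covered;
        }
    }

    /** Coverage on New Code: covered / coverable among lines blamed after the baseline. */
    static double percentOnNewCode(List<Line> lines, LocalDate baseline) {
        long denom = lines.stream()
                .filter(l -> l.blameDate.isAfter(baseline) && l.coverable)
                .count();
        if (denom == 0) return 0.0;
        long num = lines.stream()
                .filter(l -> l.blameDate.isAfter(baseline) && l.coverable && l.covered)
                .count();
        return 100.0 * num / denom;
    }
}
```

With the numbers from later in this thread (70 coverable new lines, 56 of them covered, plus 30 non-coverable lines), this yields 80%, matching what SonarQube reports.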


Yes, that should be the ideal approach, but there are a lot of edge cases.

Assume that between two versions of the code, 100 lines are added/modified, and many of those lines will contain:

  1. Curly braces: {
  2. Method declarations: public void myAwesomeMethod(Object o)
  3. Empty lines
  4. Comments: // comments
  5. Annotations: @Override

all of which are ignored when calculating code coverage.

So, there are a total of 70 lines which count towards code coverage, and the remaining 30 lines are of the nature listed above.

Now, out of these 70 lines, 56 are covered. SonarQube gives 56/70, i.e. 80% “Coverage on the new code”, not 56/100, i.e. 56%.

So, we are interested in knowing how SonarQube is able to ignore those 30 lines which don’t contribute to code coverage.


Hi Shubham,

When a file is included in a coverage report, it’s not actually SonarQube that decides anything about what’s coverable / covered. We just use the report. It’s only when files are entirely omitted from coverage reports that we make any calculations and then we do omit blank lines and other non-code lines such as lines with only curly braces. Why? Because there’s nothing executable about {.


Hi Ann,

I am not able to understand a couple of things from your last reply.


I have a question here: how do you know which lines are non-code lines?


Let me take an example.

     1  public void myMethod(MyObject myObject) throws MyCustomException1,
     2          MyCustomException2 {
     3      // this function does magic
     4
     5      // set up local state
     6      boolean someFlag = false;
     7      try {
     8          int i = 10;
     9          if (myObject.getValue() > i) {
    10              someFlag = true;
    11          }
    12      } catch (Exception e) {
    13          // log and continue
    14          Log.error(getClass().getName(), "someData", "Message", e);
    15      }
    16      MyObject object2 = MyObjectFactory.create(
    17          valueProvider, "", false, true, null);
    18
    19  }

Here, how does SonarQube know that it should ignore lines 1, 2, 3, 4, 5, 7 (try {), 13, and 18, 19 (if they are not considered)?

Is there a regular expression or some other logic behind it, or is it something which JaCoCo provides?

We want to implement that logic in one of our workflows.


For the JaCoCo logic, you’ll have to ask its maintainers. For SonarQube’s logic, here’s where it’s outlined. If you want the real specifics, though, you’ll need to check the code.
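If you only need a rough approximation in your own tooling, a text heuristic is possible. To be clear, the pattern below is an assumption of mine, not SonarQube’s or JaCoCo’s actual rule: JaCoCo decides coverable lines from the bytecode’s debug line table, not from source-text patterns.

```java
import java.util.regex.Pattern;

public class NonCodeLines {
    // Heuristic only: lines that are blank, punctuation-only, comments or
    // annotations are treated as non-coverable. Method declarations and
    // other edge cases would need real parsing, not a regex.
    private static final Pattern NON_CODE = Pattern.compile(
            "\\s*"                    // blank line
            + "|\\s*[{}();]+\\s*"     // punctuation-only, e.g. { or });
            + "|\\s*//.*"             // line comment
            + "|\\s*/?\\*.*"          // block comment start or continuation
            + "|\\s*@\\w+.*"          // annotation such as @Override
    );

    static boolean isCoverable(String line) {
        return !NON_CODE.matcher(line).matches();
    }
}
```

Expect misclassifications (e.g. code sharing a line with a brace), which is exactly why the reliable source of truth is the coverage report itself.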