Not all new code showing up properly in SonarQube 7.0 build 36138


(Peter Graves) #1

I’m running a Gradle build for Java code and using JaCoCo to generate code coverage.
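
For context, the relevant part of my build script looks roughly like this. This is a minimal sketch rather than my exact file; the plugin version and the report path are placeholders:

```groovy
plugins {
    id 'java'
    id 'jacoco'                          // produces build/jacoco/test.exec
    id 'org.sonarqube' version '2.6.2'   // plugin version is a placeholder
}

sonarqube {
    properties {
        // 7.0-era Java analysis reads JaCoCo's binary report; this path
        // is the Gradle test task's default output location.
        property 'sonar.jacoco.reportPaths', "${buildDir}/jacoco/test.exec"
    }
}
```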

Only ‘some’ of the new/modified code shows up after a build/analysis:

  1. Under Measures > Size > New Lines, my current analysis reported 3 new lines; if I select the measure, it is associated with 1 source file, yet 6 source files were modified in this build. However, if I look at “Lines of Code”, the value changed by 9 between this analysis and the previous one.
  2. Under Measures > Maintainability > On New Code > Code Smells, it did report 2 new issues, and they are associated with 2 source files.
  3. Under Measures > Coverage > On New Code, all the metrics are at 0. However, as noted above, there is modified code, so if the ‘new’ metrics are 0, why didn’t my Quality Gate change to Failed, given that the Gate’s condition for “Coverage on New Code” is set to Error if coverage is < 75%?

In my previous build, LOC reported 100 lines of code changed, and when I looked at some of those lines in a source file in SonarQube, it showed the new lines highlighted as having no code coverage, but again my Quality Gate wasn’t flagged.

Can you please explain why I see these discrepancies when looking at the number of ‘new’ files and ‘new’ LOC and also why the Quality Gate is NOT being triggered?

Thanks


(G Ann Campbell) #2

Hi,

It’s a little difficult to follow what’s going on here. Do you have your leak period set to previous_build? If not, then your “new” metrics are usually going to cover more than one analysis.

Nonetheless…

  1. Under Measures > Size > New Lines, my current analysis reported 3 new lines; if I select the measure, it is associated with 1 source file, yet 6 source files were modified in this build. However, if I look at “Lines of Code”, the value changed by 9 between this analysis and the previous one.

Are you looking at New Lines or New Lines to Cover? And how are you calculating the change in “Lines of Code”? Did you make an external record before analysis and then manually compare the difference?

  3. Under Measures > Coverage > On New Code, all the metrics are at 0. However, as noted above, there is modified code, so if the ‘new’ metrics are 0, why didn’t my Quality Gate change to Failed, given that the Gate’s condition for “Coverage on New Code” is set to Error if coverage is < 75%?

There is an exclusion in the quality gate when there are only a few new lines: SONAR-9352.

Ann


(Peter Graves) #3

My leak period is previous_version, and each analysis gets a unique version (see the sketch after my answers below). To your questions:

  • Are you looking at New Lines or New Lines to Cover?
    YES

  • How are you calculating the change in “Lines of Code”?

    I looked at the actual pull request associated with the build to see what really changed, and I also looked at the LOC trend chart in SonarQube, comparing the 2 analyses used for the leak period.
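
For what it’s worth, the unique version is injected roughly like this. A sketch only: BUILD_NUMBER stands in for whatever variable my CI server actually provides:

```groovy
sonarqube {
    properties {
        // With the leak period set to previous_version, passing a unique
        // version per build resets "new code" at every analysis.
        // BUILD_NUMBER is a placeholder for the CI server's variable.
        property 'sonar.projectVersion', "1.0.${System.getenv('BUILD_NUMBER') ?: 'local'}"
    }
}
```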

So there still seems to be an issue with what SQ is showing for “new lines” in my case. Also, why are there files with “new lines == 0” on this page of files with ‘new lines’?

  • SONAR-9352.

Ahh, this makes sense and might explain why the Quality Gate wasn’t changed on the last analysis. However, the previous one had 100 lines of code added (per the LOC trend chart), so it should have been tripped. Also, it’s too bad that a warning message like “Some Quality Gate conditions on New Code were ignored because of the small number of new lines” isn’t displayed when the new/changed count is too low.


(G Ann Campbell) #4

Hi,

You’ve got a whole lot going on here, and without more context (when you’re talking about what you see in the interface, screenshots help) it’s difficult to keep up with you. Nonetheless…

Karmically, I deserve that boolean answer to my boolean question, but those are two different metrics. Which one are you using?

If you’re trying to compare what GitHub highlights as a changed line with what SonarQube marks as a “Line of Code”, you’re bound to come out with different numbers. Additionally, and this is why I was asking which metric you’re using, there’s definitely a difference among “new lines” (which is all lines: comments, blank lines, and code lines), “new lines of code” (which we don’t actually compute), and “new lines to cover”.

You’re seeing all files here because “[new] lines*” is a quantitative measure, not a qualitative one. If it were Coverage, for example, which is qualitative, then you could reasonably expect “perfect” files to be suppressed from the list.

Ann


(Peter Graves) #5
  • New Lines says 3

[screenshot]

And only 1 file is listed with code changes on the right of this screen; however, 6 Java files were updated in this build.


(Peter Graves) #6
  • “New lines to cover” says nothing

[screenshot]


(Peter Graves) #7
  • LOC increased by 9, which roughly matches the number of ‘new’ lines added to the code base. There are also a number of other ‘modified’ lines of code in the 6 files.
    [screenshot]

(Peter Graves) #8

[screenshot]


(Peter Graves) #9
  • By selecting New Bugs and New Code Smells, the UI shows me 3 different files that have new issues, and these 3 files are different from the 1 file reported by “New Lines”. So now there are at least 4 files that have new/modified code. Having “New Lines to Cover” report 0 makes no sense.
    [screenshot]

(Peter Graves) #10
  • By the way, should sonar.leak.period=previous_build still work in SonarQube 7.0? I don’t see it documented as a valid value anymore.
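
Side note: to double-check what the server itself thinks the leak period is, I can query the settings web service. A rough sketch (if I recall correctly the endpoint exists since 6.3, and this assumes my instance allows anonymous read access):

```groovy
// Ask the server which leak period it is actually configured with.
def url = 'http://localhost:9000/api/settings/values?keys=sonar.leak.period'
println new URL(url).text   // JSON containing the sonar.leak.period value
```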

I’m hoping there is a simple answer here. 🙂

Thanks

PS: sorry for the multiple replies, but that was the only way for me to add multiple screenshots.


(G Ann Campbell) #11

Hi,

Even with screenshots, this still feels a bit like apples versus oranges. Let’s focus on your measures:

  • 3 new Lines
  • 0 new Lines to Cover

These do not contradict each other. As I said, “lines” is all lines: lines of code, comment lines, blank lines, and so on. So what I see when I look at these two metrics together is that 3 lines were added or updated, but none of them requires code coverage. BTW, you can have a line of code that doesn’t require code coverage. Think, for example, of an include (in Java, an import).
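
To make that concrete, here’s an illustrative snippet (Groovy-style syntax, close enough to your Java; exactly which lines an analyzer marks as executable can vary, so treat the per-line notes as approximations):

```groovy
import java.util.function.IntSupplier  // a "line", but nothing to cover

class Answer {                         // declaration line; nothing to cover
    // comment line: part of "lines", never part of "lines to cover"

    int get() {                        // method signature; nothing to cover
        return 42                      // executable: the one line to cover
    }
}
```

Of the seven non-blank lines above, only one would show up in “lines to cover”, so a handful of “new lines” with zero “new lines to cover” is entirely possible.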

The fact that you’ve set your leak period to previous_analysis may indeed be muddying the waters, and this is not recommended. Why? Let’s say I have a Quality Gate that specifies no new bugs, and my commit adds 3 new bugs. Result: broken quality gate! But that’s not a problem: I just re-analyze the same code, my bugs fall out of the leak period, and the quality gate passes.

Another reason this muddies the waters is that the default housekeeping algorithms clean the project to one analysis per day after 24 hours, so your use of the timeline graph mouseovers to “prove” your inconsistencies is highly suspicious; there’s no way I can know for sure if we’re really looking at the analysis immediately before the one in question.

HTH,
Ann


(Peter Graves) #12

My leak period is previous_version, not previous_analysis, and each analysis is a new version. Is previous_build supposed to work in 7.0, so that I don’t have to assign unique versions to each build?

Over the weekend I performed 2 analyses into a ‘new’ project at 2 manually selected points in time, so the leak period would include a lot of changes.

  • LOC changed by 10,645, yet the “New Lines” number under the Size measure is 0. Yes, I understand not all lines would require coverage, but there are definitely a number of lines in this set that do.

  • Coverage for new lines is still 0 (Lines to Cover, Uncovered Lines, etc.).

  • 283 New Bugs and 709 New Code Smells were identified. Shouldn’t most of these new issues also be on ‘new/modified’ lines requiring code coverage? (e.g., NullPointerException bugs, or ‘Use try-with-resources or close this “ByteBufferInputStream” in a “finally” clause’)

What is even more confusing about these 2 analyses versus the ones from last week is that there were no ‘New Lines’ listed under the Size measure, even though this weekend’s analyses also included the same commits (and more) from last week that I mentioned at the beginning of this thread.

Thank you for your patience 🙂


(G Ann Campbell) #13

Hi,

Sorry for the misunderstanding about previous_version versus previous_analysis. Given the way you’re doing it though, they’re functionally the same.

Regarding the measure changes you’re talking about, I really feel I have too little to go on. I’m sorry, but what you’re reporting doesn’t hang together for me, so I’m sure there’s either something else going on or a fundamental misunderstanding, and I just have too little to work with to diagnose anything.

Ann