Is it possible for SonarQube to miss things during a scan?

I am trying to figure out the cause of a quality gate failure. What I think happened is that the 2nd scan of a file found 1 new Bug that was not found during the 1st scan of the same file.

Here is what happened.

A scan was kicked off in the morning at around 10:00am. This was the 1st scan of a new git branch. For the Java file in question, 103 issues were found during this 10:00am scan. I verified this by visually counting all the issues on that file that said “4 hours ago” in the history.

A pull request was merged into the branch and another scan started at 12:20pm. This was the 2nd scan of this branch. The Java file in question had NO CHANGES, as verified by the diff of the pull request. However, after the scan completed, that file now had 104 issues and SonarQube reported a quality gate failure. When I click on “Reliability Rating on New Code is worse than A” it takes me to the Java file in question.

[Screenshot 2021-11-29 141116]

You can see that the history shows this issue was found “2 hours ago”, whereas all the rest of the issues were found “4 hours ago” during the 1st scan at 10:00am. In other words, the 2nd scan that started at 12:20pm found a new Bug that was not found at 10:00am during the 1st scan, even though the Java file in question was not changed.

This is concerning for a few reasons:

  1. No changes were made to the Java file in question
  2. This Bug was reported by the 2nd scan at 12:20pm but missed by the 1st scan at 10:00am.
  3. It’s causing the quality gate to fail even though the Java file in question does NOT show up as New Code

So, any thoughts on this? Thanks!

Hi,

The file the issue was raised in didn’t change. What about the files that did change? Some issues are simple: you didn’t follow the naming convention on this line.
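To illustrate the “simple” case with a made-up snippet (the class name, field, and the specific rule, e.g. java:S115 for constant naming, are just assumptions for illustration), the issue begins and ends on the line it is raised on:

```java
// Hypothetical one-liner: a "simple" issue that lives entirely on the flagged line,
// e.g. a constant-naming rule such as java:S115.
public class NamingExample {
    static final int maxRetries = 3;   // flagged here: constant not in UPPER_SNAKE_CASE
}
```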

Some issues involve more than one line. For instance, a complexity issue relates to the entire method, not just the method declaration where it's raised. Even if the method declaration hasn't changed in years, the method can have a valid, brand-new complexity issue if the code inside it has changed.
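Here is a minimal, hypothetical sketch of that (the class, method, and values are invented; java:S3776, Cognitive Complexity, is just one example of such a rule). The issue is anchored on the declaration line, but it is the body underneath that grew:

```java
// Hypothetical sketch (not the poster's code): the signature of classify() has not
// changed, but nesting added inside the body later pushes its cognitive complexity
// up, and SonarQube reports the complexity issue on the declaration line itself.
public class ComplexityExample {

    // A rule such as java:S3776 anchors its issue here, on the declaration...
    static String classify(int[] values) {
        StringBuilder result = new StringBuilder();
        for (int v : values) {                    // ...but it is this body that grew
            if (v > 0) {
                if (v % 2 == 0) {
                    result.append("even+ ");
                } else {
                    result.append("odd+ ");
                }
            } else if (v < 0) {
                result.append("neg ");
            } else {
                result.append("zero ");
            }
        }
        return result.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(classify(new int[] {2, 3, -1, 0}));
    }
}
```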

And then there are things like null pointer issues which could potentially be affected by changes in other files. In the files that did change, did a nullability indicator get added? Did a null test get removed? Did null get added somewhere as a possible null value?
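A hypothetical two-file sketch of that effect (the class names are invented, @CheckForNull from JSR-305 is just one example of a nullability hint, and java:S2259 is just one example of a null-dereference rule): only the first class changed, but the new issue lands in the second, untouched one.

```java
// Hypothetical two-file sketch: only Repository.java changed, but the new issue
// shows up in Service.java, which did not change at all.

// --- Repository.java (CHANGED in the pull request) ---
import javax.annotation.CheckForNull;

class Repository {
    @CheckForNull                       // nullability hint added in this PR
    String findName(int id) {
        return id > 0 ? "name-" + id : null;
    }
}

// --- Service.java (UNCHANGED, but now gets a new null-dereference issue) ---
class Service {
    private final Repository repository = new Repository();

    int nameLength(int id) {
        // A rule such as java:S2259 can now flag this dereference, because the
        // analyzer learned from Repository.java that findName() may return null.
        return repository.findName(id).length();
    }

    public static void main(String[] args) {
        System.out.println(new Service().nameLength(7));
    }
}
```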

Another thing to look at is the analysis logs. Were there warnings in the logs of the first analysis about missing class files that disappeared in the logs of the second analysis? When information is missing from analysis (i.e. when not all the class files and libraries are available), yes, analysis can miss things. And then when that information becomes available, “new” issues might show up.
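As a hedged illustration of why the class files matter (all names here are invented; sonar.java.binaries and sonar.java.libraries are the standard analysis properties that point the Java analyzer at bytecode): imagine the dependency's bytecode was missing from the first analysis and present in the second.

```java
// Hypothetical sketch of why missing class files matter. Imagine LegacyDao is
// compiled into a separate jar: to flag the possible null dereference in
// UserService, the analyzer needs that jar's bytecode (what sonar.java.libraries /
// sonar.java.binaries point at). If the first analysis ran without it, the issue
// can be missed; once the bytecode is available in the second analysis, a "new"
// issue appears in a file that never changed.
class LegacyDao {                 // pretend this class ships in a dependency jar
    User findUser(int id) {
        return id > 0 ? new User("user-" + id) : null;   // may return null
    }
}

class User {
    private final String name;
    User(String name) { this.name = name; }
    String getName() { return name; }
}

class UserService {
    private final LegacyDao dao = new LegacyDao();

    String displayName(int id) {
        // Without LegacyDao's bytecode the analyzer may not know findUser() can
        // return null; with it, this line can be flagged as a null dereference.
        return dao.findUser(id).getName().toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(new UserService().displayName(42));
    }
}
```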

 
HTH,
Ann

I am working on setting up a UAT environment to run more tests. I’ll be able to answer your questions and provide more information after that. One thing I can say until then is that this same behavior has occurred when the only things that changed were Maven pom.xml files. No other source code was touched, but the 2nd scan of the source code found more issues, which then failed the quality gate. The offending files are shown when you drill into the quality gate failure; however, those files do not show up in Measures > Size > New Lines. Only the Maven pom.xml files show up there.

Hi,

Can I guess from that that libraries were changed? Because libraries do factor into analysis and that could certainly explain what you’ve experienced.

 
Ann

Nope, it was literally: do one scan, then right after it completed, run a second scan. No code and no libraries changed.

Hi,

What was the change?

 
Ann