Code smells reported for lines not changed in merge request

We are running SonarQube Developer Edition Version 9.2.4 (build 50792).
The analysis is triggered via merge requests through our self-hosted GitLab, which is currently at version Enterprise Edition 14.6.0-ee.

I’m not sure exactly how much information is required, but the point is that the analysis that runs on merge requests is reporting code smells on lines that are not changed by the merge request. What I’ve done in this MR is add a number of functions at one place in the file, which largely consist of code copied from one really big function further down in the file. In that really big function, I replaced two chunks, each with a single line calling one of the new functions I added above. The view in GitLab shows this as one huge chunk of added code, then about 56 unchanged lines, followed by the two large chunks of code that were each replaced by a single line, with those two chunks separated by 3 lines of unchanged code. See the sketch below for the shape of the change.
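Here is a rough illustration, with made-up names (nothing from the real file), of the kind of refactor I’m describing: chunks of one really big function are copied into new functions added higher up in the file, and each chunk is replaced by a single call.

```python
def prepare(data): ...       # stand-ins for the surrounding unchanged code
def finish(data): ...

# Before the MR: one big function containing the two chunks inline.
def process_everything_before(data):
    prepare(data)
    # ... ~50 lines of validation logic (chunk 1) ...
    # ... ~50 lines of aggregation logic (chunk 2) ...
    finish(data)

# After the MR: the chunks live in new functions added above,
# and each chunk is replaced by a single call.
def validate(data): ...      # body copied from chunk 1
def aggregate(data): ...     # body copied from chunk 2

def process_everything_after(data):
    prepare(data)
    validate(data)           # chunk 1 replaced by one line
    aggregate(data)          # chunk 2 replaced by one line
    finish(data)
```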

However, it seems like the revision detection in SonarQube is detecting completely different lines as being changed by this commit. Notably, the 56 and 3 lines of unchanged code that I mentioned are being considered part of one large chunk of lines attributed to the commit I added. There are code smells on these lines, and they are being reported.

It’s almost as if SonarQube is running its own separate git diff for some reason, which detects different parts of the code as unchanged. In spirit, the problem seems to be that it sees that one chunk of code was moved above another chunk, but when evaluating that together with the other changes, it considers the wrong chunk to have been moved downwards, instead of correctly considering the opposite chunk to have been moved upwards.

I think I would much rather have SonarQube report code smells from the code that is effectively unchanged and moved into a new function as new code smells. That seems more coherent than getting reports from actually unchanged code that just happens to have moved relative to the code I extracted into a different function.

Hey @torgeir.skogen

SonarQube should be smart enough to handle this. SonarQube should only raise issues on changed lines, and not erroneously raise issues on lines that have just been moved and that git itself wouldn’t recognize as changed.

Would you be able to create a small reproducer (full of dummy code, that’s fine) where this isn’t working as expected? All we need is two versions of the same file. If you’re comfortable sharing the real files, I can raise a private message.

I’m not comfortable sharing the file as is. I would if I could obfuscate the names used in the file, but that seems like too much work for what it’s worth.

Anyway, I have some more philosophical points to bring up. I figured that maybe the reason SonarQube has to run its own diff algorithm is that we are using a reference branch for these analyses, and the current branch isn’t necessarily rebased on the reference branch in all cases.

And maybe SonarQube is using a different git diff algorithm than the one used when creating the commit(s), which could be a contributing factor here. I’m not an administrator on the instance my organization uses, so I can’t easily check whether there are settings for which algorithm to use.
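For what it’s worth, here is a small sketch of how one could compare the hunks that different git diff algorithms produce for the same merge request range locally. The refs and the file path are placeholders for our actual branch and file, not anything SonarQube itself runs.

```python
import subprocess

def diff_with(algorithm: str) -> str:
    """Diff the MR branch against its merge-base with the target branch,
    using the given git diff algorithm."""
    return subprocess.run(
        ["git", "diff", f"--diff-algorithm={algorithm}", "--unified=0",
         "origin/main...HEAD", "--", "path/to/BigFile.java"],
        capture_output=True, text=True, check=True,
    ).stdout

# Myers is git's default; histogram and patience often pick different anchor
# lines around moved blocks, so the reported "changed" regions can differ.
print(diff_with("myers"))
print(diff_with("histogram"))
```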

Also, I think that fundamentally in git, a commit that refactors a big function by splitting it into smaller functions will, on some conceptual level, have multiple sections of unchanged code, some being moved up and some down relative to other unchanged code. In actuality, when you move a chunk of code, git will detect a chunk as deleted from one place and added in another location; I’m not versed enough in git to confirm or deny whether git then detects that the same chunk of code, or part of it, was moved, rather than treating it as deleting and adding separate chunks. Regardless, I think that when moving chunk B above chunk A, it is necessarily arbitrary whether that is stored as B being moved above A or as A being moved below B. Maybe this topic is just on the edge case where a different algorithm was enough to tip the scales. It’s hard to tell what SonarQube has done here in terms of revision detection, because as far as I can tell, I can only see the revisions as annotations in the file view, like a git blame; it would be easier to see which lines are detected as being part of that revision if there was a revision-based view.
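To illustrate the delete-and-add point, here is a small example using Python’s difflib rather than git’s own algorithm, purely as an illustration of how a line-based diff expresses a moved block:

```python
import difflib

# Two versions of a file where chunk B is moved above chunk A
# and nothing else changes.
before = [
    "def chunk_a():\n", "    step_a_1()\n", "    step_a_2()\n",
    "\n",
    "def chunk_b():\n", "    step_b_1()\n", "    step_b_2()\n",
]
after = [
    "def chunk_b():\n", "    step_b_1()\n", "    step_b_2()\n",
    "\n",
    "def chunk_a():\n", "    step_a_1()\n", "    step_a_2()\n",
]

# The diff comes out as chunk B being deleted below and re-added above, with
# chunk A treated as the unchanged anchor. A different matching strategy could
# just as well have anchored B and shown A as the "moved" chunk instead.
print("".join(difflib.unified_diff(before, after, "before.py", "after.py")))
```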

Anyway, the reason I was originally surprised that SonarQube would need to run its own diff on the code to detect new code is that I maybe don’t have the correct mental model for exactly which state of the reference branch is used as the baseline for new code. Like I discussed above, since one might run an analysis on a branch without having rebased it on the latest version of the reference branch, there might be arbitrary differences between the current state of the reference branch and my branch. However, I’m realizing that a better approach seems to be to use the latest common ancestor with the default branch as the reference point for new code. Considering that SonarQube analyzes the full project regardless, and “new code” is only used to determine whether issues should be reported, it appears arbitrary whether an analysis was ever run on the reference branch at the point that serves as the baseline for “new code”. Anyway, the point is: I thought it always uses the latest analysis of the reference branch as the baseline for “new code”, but it actually seems much more reasonable to use the commit I branched off from, or the commit I have rebased onto, and then use git blame to find lines that were changed since that point. And I thought you wouldn’t need to run your own diff to achieve that.
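As a rough sketch of what I mean by using the fork point as the baseline (the branch names are just placeholders for our setup):

```python
import subprocess

def git(*args: str) -> str:
    return subprocess.run(
        ["git", *args], capture_output=True, text=True, check=True
    ).stdout.strip()

# The last common ancestor ("fork point") of the MR branch and the reference
# branch, rather than the current tip of the reference branch.
fork_point = git("merge-base", "origin/main", "HEAD")

# The lines changed since that commit are what git itself considers new.
print(git("diff", "--stat", f"{fork_point}..HEAD"))
```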

In broader terms, maybe this is coming up due to a broad category of issues that arise because the system detects “issues in new code” rather than “new issues”. It seems like it would be possible to compare the analysis result with a previous analysis and use some fuzzy logic to detect which issues in the new analysis were also present in the old result.

Hi,

Are you running a pull request analysis or using the “reference branch” new code period?

Pull request analysis loads the diff from git and should always match what you see on GitLab (or at least the same as you’d see with git diff), unless there’s a problem with the branch references in the git clone used for the analysis.

The “reference branch” New Code Period strategy is not accurate in some situations, such as when the branch is not rebased, as you mentioned. We are currently working on a fix, and it will behave similarly to pull request analysis: https://jira.sonarsource.com/browse/SONAR-14929

In broader terms, maybe this is coming up due to a broad category of issues that arise because the system detects “issues in new code” rather than “new issues”. It seems like it would be possible to compare the analysis result with a previous analysis and use some fuzzy logic to detect which issues in the new analysis were also present in the old result.

We used to have that strategy. SonarQube does have algorithms to “track” issues.
Because of that, for example, when you edit an issue in a pull request, that change is kept after re-analyzing the pull request. We are also able to not show issues that are already known in the target branch (meaning they were not introduced by the PR), even if the corresponding code was changed.
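To give a rough idea of what issue tracking means here, the sketch below illustrates the concept of matching issues between two analyses by rule and line content rather than by line number. This is only a conceptual illustration, not our actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Issue:
    rule: str        # e.g. a rule key such as "java:S3776"
    file: str
    line_text: str   # the text of the flagged line, not its line number

def already_known(old_analysis: list[Issue], new_analysis: list[Issue]) -> set[Issue]:
    """Treat an issue from the new analysis as already known if an issue with
    the same rule fired on the same line content in the same file before,
    even if that line has since moved to a different line number."""
    known = {(i.rule, i.file, i.line_text.strip()) for i in old_analysis}
    return {i for i in new_analysis
            if (i.rule, i.file, i.line_text.strip()) in known}
```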

Since we now rely on git to figure out what code changed, it results in less noise to only show issues that don’t already exist in the target branch and that are located on a changed line.

Interesting, I wasn’t aware of this difference, but apparently we’re using pull request analysis. But is there any reason why that difference has any bearing on how “new code” is detected?
I would recommend this video about the difference between git diff algorithms, which, like I’ve been saying, seems relevant to what I’ve observed.

Maybe it’s also worth bringing up this point: we have configured GitLab so that our projects require a semi-linear history in order to allow merge requests to be merged. And since we also run a SonarQube analysis when pushing to a branch with an open merge request, another analysis has to be run later if my branch isn’t rebased on the latest commits in the target branch. With that in mind, if comparing against the fork point rather than against the latest commit of the target branch would mean SonarQube doesn’t have to use its own git diff, then I think that might be a win for this problem.

Hello All,

Any update on this issue?

We have the same issue. As part of our pull request pipelines, which run on every commit or get triggered by checking out the merge commit, SonarQube reports on unchanged lines. In the SonarQube dashboard I also see that it is highlighting lines which are not changed as part of the PR, which is strange behaviour. Initially we thought this could be a bug related to a specific version of SonarQube, so we moved to the latest SonarQube 9, but we still see the same issue.

The issue is that SonarQube detects old code as new code being changed in the pull request and reports issues on it, which I believe is not correct behaviour. Of course the issues reported are genuine ones, but for PR analysis they should only be raised on changed lines.

We also have this issue with SonarCloud, so it seems to be a problem inherent to the platform. I’d be curious to know if there is any sort of fix for this behavior, as it makes the platform almost unusable for large merge requests.