Quality gate fails unexpectedly for old code

I have set up our workflow to include sonar as follows:

  • changes are made on a feature branch

  • a pull request is used to merge them to the development branch which is covered by a sonar quality gate

  • To make a release we create a pull request from the development branch to the master branch.

  • I have also added the same sonar quality gate to master.

The issue that is puzzling me is that I can make a single feature change and it will pass the quality gate for merging onto development, but then it fails the quality gate for merging onto master,
even though there is no further change.

The new code definition is set to 30 days.
And yet the quality gate fails due to too many code smells, for issues which sonar itself dates to “3 years ago”. The last commit to the affected files was also a few months ago.

The behaviour I want from this setup is that a merge from development to master should only fail the quality gate in the unlikely event that a problem is caused by the combination of two or more changes accumulated on the development branch being applied to master. That is clearly not the case here.

ALM used: Azure DevOps
CI system used: Azure DevOps
Languages (in this case): go

I think I have observed this problem for other projects when onboarding them for the first analysis of the master branch, which may be significant. However, this is not the first analysis of this project’s master branch.
My workaround has typically been to fix the quality issues even though they are not new. However, I now have a large number of projects to onboard, so instead I find I have to make the quality gate optional in Azure DevOps and essentially ignore the failure. This defeats the objective of having it.


Can you provide some details?

First, are we talking purely about new issues raised on old code, or are there things like coverage and duplications in the mix?

And if it’s about issues, can you give some concrete examples?


In this case it was code smells raised on old code.
Our quality gate requires an A rating for new code but got a C because 86 code smells were found. All of these issues relate to code dated “3 years ago” according to sonar.
The breakdown by rule is:

(Go) String literals should not be duplicated 69
(Go) Track uses of “TODO” tags 13
(Go) Cognitive Complexity of functions should not be too high 2
(Go) Functions should not have identical implementations 1
(Go) Track uses of “FIXME” tags 1

Interestingly, under the creation date filter on “new code” I see a histogram:

47 issues - 2019
36 issues - 2020
0 issues - 2021
3 issues - up to Sept 2022

So the question is why are issues from 2019 and 2020 being counted as new code?

Under the admin menu the new code definition is clearly set to 30 days.


Thanks for the details. Sometimes changes in new code can cause issues to be raised on old code.

As an example, a new ‘null pointer dereference’ may be raised on old code if I delete (or invalidate) the null-check before the dereference. Similarly, if I add a string literal use, that could easily cause a new “duplicated string literal” issue on the (old) first use of the literal. It would be a similar story for “functions should not have identical implementations”.

Same for Cognitive Complexity; that rule raises an issue on the (presumably old) method declaration rather than on the new code that bumped the method over the limit.

The “TODO” and “FIXME” issues are harder to explain away, though. Would you mind sharing a screenshot of one of these issues?

And maybe also a screenshot showing your failing QG conditions and one of your QG details?


Here is a link to the project itself if you are able to see it:



Thanks for the screenshots and the link. Unfortunately, I’m not able to access the project, but that’s probably fine.

What I’m not understanding is that in the context of a Pull Request, you’re seeing issues marked on non-new code. Yes, I know that’s what you said to start with. :slight_smile: I didn’t understand that the context was the pull request; I thought you were seeing the behavior after merge.

Can you share the analysis log for this PR? I suspect something’s going wrong retrieving the blame data - which is what’s used to identify “new” lines.


sonarlog40.zip (586.4 KB)


Thanks for the log. Nothing’s jumping out at me, so I’m going to flag this for more expert attention.


Here is the git blame for the smell I showed in the screenshot:

3bf93102 (Bruce S O Adams 2019-07-16 17:44:52 +0000 921) // @todo blah redacted blah
3bf93102 (Bruce S O Adams 2019-07-16 17:44:52 +0000 922) /*

So you can see it is 3 years old / 2019 and not “new code” in that sense.


I am having a possibly related issue with another project, but for this one I notice that Azure DevOps runs:

git remote add origin somerepo/somewhere
git config gc.auto 0
git config --get-all http.somerepo/somewhere.extraheader
git config --get-all http.extraheader
git config --get-regexp .*extraheader
git config --get-all http.proxy
git config http.version HTTP/1.1
git --config-env=http.extraheader=env_var_http.extraheader fetch --force --tags --prune --prune-tags --progress --no-recurse-submodules origin --depth=1 +0eca4ea1d6dc319026d0ea0c29ae083589aac994:refs/remotes/origin/0eca4ea1d6dc319026d0ea0c29ae083589aac994

And sonar includes warnings for the analysis:

Could not find ref 'develop' in refs/heads, refs/remotes/upstream or refs/remotes/origin. You may see unexpected issues and changes. Please make sure to fetch this ref before pull request analysis.

Shallow clone detected during the analysis. Some files will miss SCM information. This will affect features like auto-assignment of issues. Please configure your build to disable shallow clone.
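For reference, both warnings can be reproduced and fixed with plain git. This is a self-contained sketch using a throwaway repo (the branch name develop is taken from the warning above; the repo layout and file names are invented for the demo):

```shell
# Reproduce the shallow-clone problem, then apply the two fetches the
# scanner needs. Everything happens in a temporary directory.
set -e
tmp=$(mktemp -d)

# Throwaway "remote" with two commits on develop and a feature branch.
git init -q -b develop "$tmp/src"
cd "$tmp/src"
git config user.email demo@example.com
git config user.name demo
echo one > f.txt && git add f.txt && git commit -qm first
echo two >> f.txt && git commit -qam second
git branch feature

# Depth-1 clone of the feature branch, as a PR build agent now sees it.
git clone -q --depth 1 --branch feature "file://$tmp/src" "$tmp/work"
cd "$tmp/work"
git rev-parse --is-shallow-repository   # prints: true

# Fix 1: fetch the PR target branch so the scanner can find its ref.
git fetch -q origin develop:refs/remotes/origin/develop
# Fix 2: remove the shallow boundary so blame covers the full history.
git fetch -q --unshallow
git rev-parse --is-shallow-repository   # prints: false
```

In a real pipeline these fetches would run on the agent’s checkout before the sonar analysis step.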

For this case I have not yet configured the sonar project (due to a permissions issue). That includes setting it up to talk to Azure DevOps, which presumably allows it to grab the extra git information it needs.
For the failing project we have been discussing, the git step is:

git remote add origin somerepo/somewhere
git config gc.auto 0
git config --get-all http.https://somerepo/somewhere.extraheader
git config --get-all http.extraheader
git config --get-regexp .*extraheader
git config --get-all http.proxy
git config http.version HTTP/1.1
git --config-env=http.extraheader=env_var_http.extraheader fetch --force --tags --prune --prune-tags --progress --no-recurse-submodules origin

As discussed previously, sonar seems to report the correct blame information, but perhaps it gets that separately from the analysis of a particular build?

This new project does not have the correct blame info yet and seems to think the entire codebase is new. I am hoping it will magically correct itself once I have configured the project correctly.

I now have the permissions issue resolved, but the new code detection still isn’t correct.

For one project I have 9.8K lines of new code for a pull request where the main change I’ve made is to alter the build to perform sonar analysis. If I run the sonar analysis manually it says, much more plausibly, that I’ve changed 157 lines (I altered some code to improve other quality gate metrics).

I can’t see a way to fix this. Please help.

The shallow clone part of the issue is Microsoft’s fault: steps.checkout definition | Microsoft Learn

New pipelines created after the September 2022 Azure DevOps sprint 209 update have Shallow fetch enabled by default and configured with a depth of 1. Previously the default was not to shallow fetch.

This caught me out because I started onboarding some projects before the change and some afterwards.

The solution to that is in the azure-pipelines.yml:

      - checkout: self
        fetchDepth: 0

The default fetch depth was previously zero (a full clone) and has been changed to 1 (shallow).
I’m not clear how this could have been done without retroactively affecting existing pipelines, as existing projects will not have this entry. Perhaps Azure has hidden per-project defaults?
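For pipelines that never declared an explicit checkout step, adding one makes the full fetch explicit. A minimal sketch (only the checkout step comes from the Microsoft docs; the surrounding steps are placeholders):

```yaml
steps:
  - checkout: self
    fetchDepth: 0   # 0 = full history; restores the pre-sprint-209 behaviour
  # ... build and sonar analysis steps follow the checkout ...
```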

This leaves me with the wrong blame information issue I initially described. I will attempt adding fetchDepth to that project to see if it helps.


Hi Bruce

Thanks for keeping us in the loop. I can confirm that this shallow clone is preventing us from delivering the right blame information.
Please let me know whether applying the fetch depth flag helps.


The newer project provoked a shallow clone warning from sonar, whereas the other didn’t.
Are you able to discern from your side whether there are cases where an analysis could fail to produce such a warning?

Unfortunately I can’t easily restart the merge that demonstrated it as it has already been completed.
It might be possible if I create a clone of the project and undo the change and try to reapply it. It requires a bit of jumping through hoops and time which I may or may not be able to justify.

The default for Azure was a deep clone when this project was first onboarded, but it is not impossible that it became shallow for the pull request. I would expect the sonar analysis to have detected that, however.