I have set up our workflow to include Sonar as follows:
Changes are made on a feature branch.
A pull request is used to merge them to the development branch, which is covered by a Sonar quality gate.
To make a release we create a pull request from the development branch to the master branch.
I have also added the same Sonar quality gate to master.
The issue that is puzzling me is that I can make a single feature change and it will pass the quality gate for merging onto development, but then fail the quality gate for merging onto master, when in fact there is no further change.
The new code definition is at 30 days.
And yet the quality gate fails due to too many code smells for issues which Sonar itself dates as "3 years ago". The last commit to the files was also a few months ago.
The behaviour I want from this setup is that a merge from development to master should only fail the quality gate in the unlikely event that a problem is caused by the combination of two or more changes accumulated on the development branch being applied to master. That is clearly not the case here.
ALM used: Azure DevOps
CI system used: Azure DevOps
Languages (in this case): go
I think I have observed this problem with other projects I have onboarded, at the first analysis of the master branch, which may be significant. However, this is not the first analysis of this project's master branch.
My workaround has typically been to fix the quality issues even though they are not new. However, I now have to onboard a large number of projects, so instead I find I have to make the quality gate optional in Azure DevOps and basically ignore the failure. This defeats the objective of having it.
In this case it was code smells raised on old code.
Our quality gate requires an A rating for new code but got a C because 86 code smells were found. All of these issues relate to code from "3 years ago" according to Sonar.
The breakdown by rule is:
(Go) String literals should not be duplicated 69
(Go) Track uses of "TODO" tags 13
(Go) Cognitive Complexity of functions should not be too high 2
(Go) Functions should not have identical implementations 1
(Go) Track uses of "FIXME" tags 1
Interestingly, under the creation date filter for "new code" I see a histogram:
47 issues - 2019
36 issues - 2020
0 issues - 2021
3 issues - up to Sept 2022
So the question is why are issues from 2019 and 2020 being counted as new code?
Under the admin menu the new code definition is clearly set to 30 days.
Thanks for the details. Sometimes changes in new code can cause issues to be raised on old code.
As an example, a new "null pointer dereference" issue may be raised on old code if I delete (or invalidate) the null check before the dereference. Similarly, if I add a use of a string literal, that could easily cause a new "duplicated string literal" issue on the (old) first use of the literal. It would be a similar story for "functions should not have identical implementations".
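To make the duplicated-literal case concrete, here is a minimal Go sketch; the package, the function names, and the assumption that the rule's threshold is three occurrences are illustrative, not taken from your project:

package store

import "errors"

// Old code: "user not found" already appears twice in this file, which (in this
// sketch) is still below the rule's threshold, so no issue existed before.
func loadUser(id string) error {
    if id == "" {
        return errors.New("user not found") // first (old) occurrence; the issue is anchored here
    }
    // ... lookup elided ...
    return errors.New("user not found")
}

// New code added in the pull request: a third occurrence crosses the threshold,
// so "String literals should not be duplicated" now fires, but its primary
// location is the old first occurrence above, which blame dates years back.
func deleteUser(id string) error {
    // ... delete elided ...
    return errors.New("user not found")
}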
Same for Cognitive Complexity; that rule raises an issue on the (presumably old) method declaration rather than on the new code that bumped the method over the limit.
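And a similarly hedged sketch of the Cognitive Complexity case; the types, the function, and the idea that it was already close to the configured limit are all assumptions for illustration:

package pricing

// Hypothetical types for the sketch.
type Customer struct{ IsVIP bool }

type Order struct {
    Subtotal float64
    Coupon   string
    Customer Customer
}

// Old declaration: if the function's total Cognitive Complexity exceeds the
// configured limit, the issue is raised on this line, even when the lines that
// pushed it over the limit are new.
func applyDiscount(order Order) float64 {
    total := order.Subtotal
    // Existing (old) branching; imagine it already used up most of the
    // complexity budget (abbreviated here).
    if order.Customer.IsVIP {
        if order.Subtotal > 100 {
            total *= 0.9
        }
    }
    // New code added in the pull request: one more nested branch tips the whole
    // function over the threshold, yet the issue is dated by the blame of the
    // declaration above, not by these lines.
    if order.Coupon != "" {
        if order.Coupon == "SUMMER" {
            total *= 0.95
        }
    }
    return total
}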
The "TODO" and "FIXME" issues are harder to explain away, though. Would you mind sharing a screenshot of one of these issues?
And maybe also a screenshot showing your failing QG conditions and one of your QG details?
Thanks for the screenshots and the link. Unfortunately, I'm not able to access the project, but that's probably fine.
What I'm not understanding is that, in the context of a pull request, you're seeing issues marked on non-new code. Yes, I know that's what you said to start with, but I hadn't understood that the context was the pull request; I thought you were seeing the behavior after merge.
Can you share the analysis log for this PR? I suspect something's going wrong retrieving the blame data, which is what's used to identify "new" lines.
Could not find ref 'develop' in refs/heads, refs/remotes/upstream or refs/remotes/origin. You may see unexpected issues and changes. Please make sure to fetch this ref before pull request analysis.
Shallow clone detected during the analysis. Some files will miss SCM information. This will affect features like auto-assignment of issues. Please configure your build to disable shallow clone.
For this case I have not yet configured the Sonar project (due to a permissions issue). That includes setting it up to talk to Azure DevOps, which presumably allows it to grab the extra Git information it needs.
For the failing project we have been discussing, the Git step is:
As discussed previously, Sonar seems to report the correct blame information, but perhaps it gets that separately from the analysis of a particular build?
This new project does not have the correct blame info yet and seems to think the entire codebase is new. I am hoping it will magically correct itself once I have configured the project correctly.
I now have the permissions issue resolved, but the new code calculation still isn't correct.
For one project I have 9.8K lines of new code for a pull request where the main changes I've made are to alter the build to perform Sonar analysis. If I run the Sonar analysis manually, it says, much more plausibly, that I've changed 157 lines (I altered some code to improve other quality gate metrics).
New pipelines created after the September 2022 Azure DevOps sprint 209 update have Shallow fetch enabled by default and configured with a depth of 1. Previously the default was not to shallow fetch.
This caught me out because I started onboarding some projects before the change and some afterwards.
The solution to that is in the azure-pipelines.yml:
steps:
- checkout: self
  fetchDepth: 0
The default fetch depth was previously zero and has been changed to 1 (shallow).
I'm not clear how this could have been done without retroactively affecting existing pipelines, as existing pipelines will not have this entry. Perhaps Azure has hidden defaults for projects?
This leaves me with the wrong blame information issue I initially described. I will attempt adding fetchDepth to that project to see if it helps.
Thanks for keeping us in the loop. I can verify that this shallow clone is blocking us from delivering the right blame information.
Please let me know your further findings on whether applying the fetch depth flag helped.
The newer project provokes a warning about a shallow clone from Sonar, whereas the other didn't.
Are you able to discern from your side whether there are cases where an analysis could fail to produce such a warning?
Unfortunately I can't easily restart the merge that demonstrated it, as it has already been completed.
It might be possible if I create a clone of the project and undo the change and try to reapply it. It requires a bit of jumping through hoops and time which I may or may not be able to justify.
The default for Azure was a deep clone when the project was first onboarded, but it is not impossible that it became shallow for the pull request. I would expect the Sonar analysis to have detected that, however.
Thanks @KantarBruceAdams for getting back to us and raising awareness of this change. We have created a follow-up task to update the guidelines on our end and be specific about this setting.