New Code determination based on “Previous Version” - failures lost if you bump the version again

  • SonarQube 9.4 Enterprise, dotnet sonarscanner 5.6.0

I have a project for a component library where we want to determine new code based on the previous version - but it seems you can “defeat” this by incrementing the version again (a manual process).
I did this, all on the same (main) branch:

  • Version 1.2.3 analyzed => this is clean
  • Introduce a problem that would fail the new code quality checks
  • Analyze this as version 1.2.4 => Sonar correctly reported a failure because the new code is bad
  • Analyzed again (no changes) but upped the version to 1.2.5 => Sonar passed this because the bad code is no longer “new” and the “overall code” rules are more relaxed, essentially for “legacy” code.
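
For concreteness, the three analyses above were invoked roughly along these lines (a sketch: the project key is hypothetical, and the begin/end steps require a reachable SonarQube server and authentication):

```shell
# 1.2.3: clean baseline
dotnet sonarscanner begin /k:"component-lib" /v:"1.2.3"
dotnet build
dotnet sonarscanner end

# 1.2.4: bad code introduced -> new code Quality Gate fails, as expected
dotnet sonarscanner begin /k:"component-lib" /v:"1.2.4"
dotnet build
dotnet sonarscanner end

# 1.2.5: no code changes, only /v: bumped -> gate passes, because the
# New Code Period reset and the bad code is no longer "new"
dotnet sonarscanner begin /k:"component-lib" /v:"1.2.5"
dotnet build
dotnet sonarscanner end
```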

So it seems that if you produce bad code and are not willing to fix it, the workaround is simply to update, say, the third digit (we maintain the version file manually, which is then read by our CI system); your bad code's version is then considered “old” and you can get past the more stringent clean code rules.

Is there a way to make Sonar treat all code since the last successful (green) analysis as new and therefore any analyses will fail based on the “new code” gates?
I had hoped that basing on previous version would mean it would behave in this way - essentially “last green version”

We do not want to go with a time-based approach, because if we set it to, say, 30 days, the code on a long-lived branch that is 31 days old is then considered “good”.
We do not have a “prod” branch that could be a base branch for comparison - this is a library project and not an application where “prod” branch as the base might be more appropriate.



You’re exactly right. In fact, this is why we removed previous_analysis as a New Code Period option.

Yeah, we agree this is not ideal.

But at this point, I’m not sure what to tell you. The idea is that you only release (and therefore change the version number) when the Quality Gate is passing. If you deliberately change the version string to thwart the Quality Gate mechanism… well. We can’t fix bad behavior.



This is how we work, for a library (nuget package) to be consumed by another app:

  • Dev makes code changes
  • Dev determines if the version should update as per SemVer rules and checks in the new version file alongside the code changes.
  • Code gets built and tested
  • If the tests pass then the package is published for consumption by downstream apps.

It isn’t necessarily that someone will be trying to subvert the mechanism - though of course that is a possibility - but if, for any reason, the version number is changed, then the reference point for “new code” resets, and it therefore becomes possible, intentionally or otherwise, to bypass the higher standards required of new code.

I haven’t been able to test this yet, but if I am on branch 1.2.3 fixing a bug, intending to release 1.2.4, what happens when someone else working on 2.0.0 pushes that to master first? Is my code now automatically considered “old”, so that it will pass even if it is bad?

It seems like a sensible option that “new” code is considered everything since the last good analysis - I mean, why would you want to allow bad new code through for any reason?
This is one of the reasons the age of code option does not seem like a great measure - why should bad code suddenly be deemed acceptable just because it was written more than n days ago? It could be on a long-lived feature branch.

Actually, related to that: if we set the age to, say, 365 days, do we need to keep 365 days of analyses, or is there no relationship between the two? It is unlikely that a branch/PR would live that long, so having a very long period would make it much less likely that code stops being considered “new” just because it has aged.

Hi @tbutler ,

You may want to revisit the use of branches again. Library projects can benefit from a Main/Trunk branch that’s separate from Dev just as much as applications. You already mention long-lived branches, so what’s one more?

You say that your code is built and tested and, if successful, the package is published. Why not develop in Dev (or feature branches), where you can build and test your code as much as you like, and - once it’s ready - merge the changes into the Main branch from where it’s then published? This would give you a Main branch to compare to, so that SonarQube can detect “new code” independent of the version number or date.


Thanks for the suggestion @cba, but we don’t want to overcomplicate our development process for components, which works well for us at the moment.

I would like to understand the Sonar view on why it seems like a “good” idea for poor-quality code to be considered good quality simply through the passage of time, once you tick over the count of days for code to be considered “new”. Perhaps I am missing something…


The Sonar view is, admittedly, a bit Maven-centric/influenced. And it’s based on a workflow that’s opposite of what you’ve described:

  • Dev makes code changes
  • Code gets built and tested
  • If the tests pass - including the SonarQube Quality Gate - then the package is published, i.e., released
  • The version is incremented → the New Code Period is reset

And in this scenario, it’s a natural outcome that the previous version behaves as the “last green version”, since you would only release on a “good analysis”, i.e., a passing Quality Gate.

Ehm… first, if you’re going to release 1.2.4 after 2.0.0, then I would expect it to be released from a branch, rather than from main, which is presumably what 2.0.0 was released from. So automatically, the two would have no interaction.

And second, version strings are just that: strings. There’s no attempt to handle them as numbers and understand that 1.2.4 < 2.0.0. So if the string changes, that would reset the New Code Period. Your version strings could be “Bob”, “Harry”, and “Frank” and the results would be just the same.
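
A minimal sketch of that equality-only comparison (the variable names are mine, not Sonar’s):

```shell
# The scanner's /v: value is compared to the previous analysis's value
# as an opaque string; any difference resets the New Code Period.
last_analyzed="Bob"
current="Harry"
if [ "$current" != "$last_analyzed" ]; then
  echo "New Code Period reset"
fi
```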

You wouldn’t. That’s why you would only release - and subsequently change the version string - on a passing Quality Gate.

And I think it’s worth pointing out that the default behavior of ignoring coverage and duplications when fewer than 20 lines are changed means that a series of small changes could move your code base in the wrong direction with respect to those two metrics. Imagine a series of 20 commits, each with 19 uncovered, duplicate lines. Barring the introduction of new issues, each would pass the Quality Gate and - under your scenario - reset the New Code Period.
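
A quick back-of-the-envelope sketch of that “thousand cuts” scenario, assuming the default 20-line threshold:

```shell
# Each commit adds 19 uncovered, duplicated lines - one below the default
# threshold, so the coverage/duplication conditions are skipped every time.
commits=20
lines_per_commit=19
total=$((commits * lines_per_commit))
echo "$total uncovered lines accumulated without ever failing the gate"
```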


Hi Ann

Thanks for the explanation.
To be clear, we don’t publish if the build fails, but the version increment, in a GitVersion.yml file, is controlled by the developer.
We are using SemVer, so the CI server can’t determine whether a first-, second-, or third-digit increment is appropriate for the component.

The version in GitVersion.yml is then used to version-stamp the assemblies - the build compiles the code with the version stamped in, the tests test the code, Sonar (when we plug it into CI) will analyse the results, and then the package is published or not depending on whether it passed.
On a PR branch we will publish into a separate repository so that consuming applications can test the new components.
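
As a sketch of that wiring (the file contents and the extraction command are illustrative; `next-version` is the GitVersion key we maintain by hand):

```shell
# Example GitVersion.yml contents, as maintained by the developer:
printf 'next-version: 1.2.4\n' > GitVersion.yml

# CI reads the version and stamps it into the build and the analysis.
VERSION=$(awk -F': ' '/^next-version:/ {print $2}' GitVersion.yml)
echo "Stamping assemblies and Sonar analysis with $VERSION"
```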

re: version numbers, since they are just strings I guess you are saying that the “new code” flag is reset when the version “number” is simply different from what it was before, not that it has “incremented” - you don’t have that concept.

A perhaps more “realistic” scenario with the problem I described: I am working on version 1.2.4, so I push a version as 1.2.4-beta1, and it has a bunch of quality issues (maybe not even breaking ones, but a bit smelly).
I need to release another beta anyway with more changes/fixes in it, which I label as 1.2.4-beta2.
Now all my quality issues from before have magically disappeared, because the beta1 code is “old” - even though it is all still new with respect to version 1.2.3, the last known “good” version in master.

Thanks for pointing out that default behavior - it could result in death by a thousand cuts (of bad code). I’ll go change that in our setup :slight_smile:


We would expect those variations to be analyzed as individual branches or PRs.

Yes, exactly. In fact, even this resets: v=1.0, v=1.1 (reset), v=1.0 (reset).
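
In other words, each analysis compares only against the immediately previous value - a sketch:

```shell
# Even reverting to an earlier string ("1.0" again) counts as a change
# from the previous analysis, so it resets too.
prev=""
resets=0
for v in 1.0 1.1 1.0; do
  if [ -n "$prev" ] && [ "$v" != "$prev" ]; then
    echo "v=$v: reset"
    resets=$((resets + 1))
  fi
  prev="$v"
done
```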


It is all the same branch though - the 1.2.4 branch - but it has had to have a revision applied to it.
1.2.4-beta1 would never be merged to master; changes would be made on the 1.2.4 branch, beta2 issued, and then, if that is good, it would be merged to master as 1.2.4.