Ratcheting Quality Gate conditions: Fail Quality Gate if coverage decreases from last analysis

SonarQube version: 7.9.1 LTS
sonar.leak.period = previous_version

We want our Quality Gate to fail if coverage decreases during the leak period, even if it is still above our threshold of 70%. (Example: if coverage from the last analysis was 75%, the quality gate should fail if coverage on the next analysis is 74.9% or lower.)

In older versions of SonarQube this could be accomplished by using the condition “Coverage, Delta since last analysis, less than, 0”.

Is there a way to do that with the “Coverage on New Code” condition, like setting it to be less than the coverage value from the previous analysis?
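
For reference, here is roughly how the condition is defined through the Web API today (a Python sketch; the server URL, token, and gate id are placeholders). As far as I can tell, `error` only accepts a fixed threshold; there is no operand meaning "the value from the previous analysis", which is what we are after.

```python
# Sketch: defining the "Coverage on New Code" condition via the 7.9 Web API.
# The server URL, token, and gate id below are placeholders.
import requests

SONAR_URL = "https://sonarqube.example.com"
AUTH = ("my_admin_token", "")  # token passed as username, empty password

resp = requests.post(
    f"{SONAR_URL}/api/qualitygates/create_condition",
    auth=AUTH,
    data={
        "gateId": 1,               # placeholder Quality Gate id
        "metric": "new_coverage",  # "Coverage on New Code"
        "op": "LT",                # fail when the measure is less than...
        "error": "70",             # ...a fixed threshold only; nothing like
                                   # "coverage from the previous analysis"
    },
)
resp.raise_for_status()
```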


Hi,

This isn’t available, and as you’ve described it I’m not sure it would be productive. I understand your desire not to let people coast on previous excellent efforts. But as I understand it, suppose my colleague makes the first commit in the New Code Period, his change is 21 lines (just past the minimum line count required to apply coverage criteria), and he’s in the unusual position of being able to provide 100% coverage with a reasonable effort. Well… the rest of the team is screwed. We have to hit 100% on everything. Or… we start gaming the system.

Instead, I’d consider gradually raising the bar on Coverage on New Code, maybe from version to version or month to month.

 
HTH,
Ann

Please consider sparing us from tweaking the rules to continually increase quality. How much better it would be to have a condition like “coverage is not less than the previous analysis”, which would automatically maintain or improve test coverage rates without having to adjust the quality gate on a release-by-release basis.


The ‘coverage on new code’ condition is flawed in a number of ways when used by itself. It encourages high coverage on new code even where testing is of marginal benefit at best, and it permits existing, critical code to remain uncovered: it gives people no incentive to cover previously uncovered code in their PRs.

More importantly, it has a huge blind spot when it comes to catching broken tests. Time after time we have had a PR partially break the test suite, dropping the overall coverage by 30%+, but because the tests on the new code were passing, SQ said ‘this is fine!’. Yes, you could catch these with a static overall-coverage gate, but that gate needs constant manual adjustment as your baseline coverage increases, which you definitely shouldn’t have to do.
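
You can script that adjustment, of course. Here is a rough sketch against the Web API (the project key, gate id, and credentials are placeholders): after each main-branch analysis, read the overall coverage and raise the static threshold to match, so the bar only ever moves up. But needing a job like this is exactly the problem; a built-in delta gate would make it unnecessary.

```python
# Rough sketch of hand-rolling the ratchet: read the project's overall
# coverage, then raise the static gate threshold to match it, never lower.
# The server URL, token, project key, and gate id are placeholders.
import requests

SONAR_URL = "https://sonarqube.example.com"
AUTH = ("my_admin_token", "")

# Current overall coverage for the project.
resp = requests.get(
    f"{SONAR_URL}/api/measures/component",
    auth=AUTH,
    params={"component": "my-project-key", "metricKeys": "coverage"},
)
resp.raise_for_status()
coverage = float(resp.json()["component"]["measures"][0]["value"])

# Find the overall-coverage condition on the gate (gate id 1 here).
gate = requests.get(
    f"{SONAR_URL}/api/qualitygates/show", auth=AUTH, params={"id": 1}
)
gate.raise_for_status()
cond = next(c for c in gate.json()["conditions"] if c["metric"] == "coverage")

# Only ever raise the bar.
if coverage > float(cond["error"]):
    requests.post(
        f"{SONAR_URL}/api/qualitygates/update_condition",
        auth=AUTH,
        data={
            "id": cond["id"],
            "metric": "coverage",
            "op": "LT",
            "error": str(coverage),
        },
    ).raise_for_status()
```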

A ‘delta gate’ would be a much-needed complement to a ‘coverage on new code’ gate in many cases, and TBH, if I could only have one or the other, I’d take the delta gate.

Hi @stephen-hand-RTR,

Welcome to the community!

I don’t disagree that what I’ve been internally calling “ratcheting quality gate conditions” would be useful for a lot of folks. But I do need to disagree with the claim that testing new code is of “marginal benefit at best”:

The best time to write the tests is just after you’ve written the code (or just before, if you’re doing TDD). That’s when you still remember all the intricacies of what the code is supposed to be doing and it sets you up for the next time you touch that or related code.

As for giving people no incentive to cover previously uncovered code in their PRs: with this I agree, and it’s on purpose. Again, the best time to write the tests is in conjunction with writing the code. And the time to cover “existing critical code” is the next time you have to work on that code. By working on it, you naturally convert it to new code, thus making it subject to Coverage on New Code requirements.

 
Ann

Dear Ann,

The required coverage on new code focuses on a very narrow area and does not encourage developers to think a little bigger in scope. What I mean is that they will follow the path of least effort: if you move just one line of code in the legacy codebase, that one line is all they will test, because that is all the quality gate expects.
Conversely, if you could require that coverage always be at least as good as it was before, developers would be encouraged to remove unused code fragments, or to write tests for existing code fragments that have not changed but are easy to cover.
There are quality-minded developers who are motivated to improve the entire codebase, bit by bit. And there are those who only put in the bare minimum. These two groups are sort of working against each other, and I prefer to support the first group.
In a large legacy codebase where the main focus is on developing new features, covering new lines of code sounds like a very good quality gate, but for the reasons above no one wants to touch the legacy parts. Especially when coverage of the legacy part is terribly low.

Why has this feature been rejected so firmly, and for so long? After all, if I understand correctly, it pursues the same goal; it would simply be another option along the same lines.

Hi @sarkiroka,

I agree with you.

“Reject” is too strong a word, IMO. “Ignored” is probably more fair. As for why, it’s largely because it’s not obvious that there’s broad support / need. Your feedback is helpful in this respect, and I’ll make sure to pass it on internally.

 
Ann

Hi @ganncamp ,

Can we turn this into a feature request? If so, I’d be happy to add my vote. This feature would help us gradually increase our code coverage as well.

Thanks!

Good idea @cba. I’ve moved this & updated the title slightly.

cc OP: @Stew_Cee

@ganncamp We would love to have the “ratcheting” quality gate feature on SC. Any good reasons why this has not been implemented yet? Thanks.

It’s a question of priorities & bandwidth, @vicsonvictor.