I’d like a list of pros vs. cons related to lowering the Quality Gate coverage threshold from the recommended 80 down to 40 (which is what DEV has set it to). I believe the only “pro” is that the code looks better than it is. With a lower setting you introduce cons like missed vulns/bugs (which would not reflect kindly on DEV)… I need a “second set of eyes” to portray the increased risk a lower coverage presents. Input is welcome - thanks.
Addendum - I am asking as a security tech, not as a DEV.
First, it’s important to distinguish between coverage, which relates to automated tests (e.g. unit tests), and issues (vulns/bugs).
For coverage, we don’t advocate setting a high threshold for overall values. In fact, the built-in quality gate doesn’t include a criterion on overall coverage at all. Why? Because in a legacy code base, that can be impossible to meet. Instead, you should set reasonably high standards for coverage of new code. That means the code you’re writing today should have tests. That’s an eminently reasonable requirement.
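To see why the two thresholds behave so differently, here is a minimal sketch with made-up numbers (the line counts and percentages are illustrative assumptions, not figures from this thread): even when every line of this sprint's new code is well tested, the overall figure barely moves because the untested legacy code dominates the denominator.

```python
def coverage(covered: int, total: int) -> float:
    """Coverage as a percentage of lines covered by tests."""
    return 100.0 * covered / total

# Hypothetical legacy codebase: 100,000 lines, only 30% covered.
legacy_total, legacy_covered = 100_000, 30_000

# Hypothetical new code this sprint: 2,000 lines, written with tests to 85%.
new_total, new_covered = 2_000, 1_700

new_code_cov = coverage(new_covered, new_total)
overall_cov = coverage(legacy_covered + new_covered,
                       legacy_total + new_total)

print(f"new code: {new_code_cov:.1f}%")   # passes an 80% new-code condition
print(f"overall:  {overall_cov:.1f}%")    # still nowhere near 80%
```

With these numbers, new-code coverage is 85.0% while overall coverage is only about 31.1%, which is why a gate on new code is achievable today while a gate on overall coverage would block every release until the legacy backlog is tested.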
For issues, it’s a similar story. Set reasonably high standards for the introduction of new issues going forward. But don’t set high standards for the overall codebase. Otherwise, you’ll never be able to pass the Quality Gate & release again.
Thanks Ann - a quick question… is SQ’s ONLY responsibility to scan NEW code & ignore anything that was built prior to it being turned on? I’d find that hard to believe.