How to measure effort fixing legacy issues

Let’s say we have a legacy project.

When we first scan it, it has tons of issues.

SonarQube’s default Quality Gate focuses on new code, so the next time we scan the project, it may pass the Quality Gate anyway.

I want developers to gradually fix issues in the overall code. Is there any metric that can help?

For example: overall bugs should decrease by at least 2%.

Hello @West_Farmer ,

It’s important to keep in mind that “New Code” includes not only added code but also modified code, so any changes will be subject to the higher standards in your New Code-focused Quality Gate.

There is currently no way to set a Quality Gate condition that requires a decrease in issues from the previous analysis. Instead, I suggest tracking metrics like overall Issues and/or Technical Debt to monitor gradual improvement. This section of our documentation explains these metrics.
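Since the Quality Gate can’t enforce this, one option is to script the check yourself. A minimal sketch of the “overall bugs should decrease by at least 2%” idea is below; in practice the counts would come from SonarQube’s Web API (`api/measures/search_history` with the `bugs` metric), but here they are passed in as a plain list so the logic stands on its own. The function name and the 2% default are illustrative, not part of SonarQube.

```python
# Sketch: check whether the latest analysis reduced "overall bugs" by at
# least a target percentage compared with the previous analysis. In a real
# setup, bug_counts would be fetched from SonarQube's Web API
# (api/measures/search_history, metric "bugs"); here it is a plain list.

def met_reduction_target(bug_counts, target_pct=2.0):
    """Return True if the newest count dropped by >= target_pct percent
    relative to the previous one."""
    if len(bug_counts) < 2 or bug_counts[-2] == 0:
        return False  # nothing meaningful to compare against
    decrease = (bug_counts[-2] - bug_counts[-1]) / bug_counts[-2] * 100
    return decrease >= target_pct

# 500 -> 488 bugs is a 2.4% drop, so the 2% target is met.
print(met_reduction_target([500, 488]))  # True
print(met_reduction_target([500, 495]))  # 1% drop -> False
```

A CI job could run such a check after each analysis and fail the build (or just post a warning) when the trend stalls, which gives you the gradual-improvement pressure the Quality Gate can’t express.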



From my experience, modifying legacy code only to satisfy some arbitrary metric is not worth it. You always run the risk of introducing regressions during refactoring, and high code coverage alone does not guarantee that the automated tests will catch every regression in business logic. We’ve had cases in the past where developers actually broke working code in an attempt to placate SonarQube.

Instead of focusing on an arbitrary metric like “reduce all bugs by x%”, it may be better to deliberately review issues and pick one concrete type of bug to fix throughout the code base (or only within a particular module, if the code base is too large). That makes it easier for reviewers to verify that only code related to that particular rule was changed and that the possible side effects are well understood.
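The rule-by-rule approach above maps directly onto a SonarQube Web API query. A sketch, assuming the `api/issues/search` endpoint and its `componentKeys`, `rules`, and `statuses` parameters (available in most SonarQube versions, though newer releases may rename some parameters); the project key and rule key below are placeholders:

```python
# Sketch: build an api/issues/search URL filtered to one concrete rule in
# one project, so a team can burn down a single bug type at a time.
from urllib.parse import urlencode

def issues_for_rule(base_url, project_key, rule_key,
                    statuses="OPEN,CONFIRMED,REOPENED"):
    """Return the api/issues/search URL listing unresolved issues
    raised by a single rule in a single project."""
    params = {
        "componentKeys": project_key,  # project to query
        "rules": rule_key,             # e.g. "java:S2095" (resources should be closed)
        "statuses": statuses,          # skip issues already resolved/closed
        "ps": 100,                     # page size
    }
    return f"{base_url}/api/issues/search?{urlencode(params)}"

url = issues_for_rule("https://sonar.example.com", "my-legacy-app", "java:S2095")
print(url)
```

Fetching that URL (with an authenticated client) yields exactly the issue list a reviewer needs to confirm the cleanup touched only one rule.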
