- Refactoring touches 500+ legacy files with 50% coverage
- SonarCloud requires 80% coverage on changed lines
- Writing tests for all touched code would take months of work
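To make the arithmetic behind the blockage concrete, here is a minimal sketch (hypothetical numbers, not actual scanner output) of how "coverage on new code" behaves when it is computed only over changed lines and a refactoring touches many low-coverage legacy files:

```python
# Sketch: "coverage on new code" is computed only over lines changed
# since the baseline. All numbers below are hypothetical.

def new_code_coverage(changed_files):
    """changed_files: list of (covered_changed_lines, total_changed_lines)."""
    covered = sum(c for c, _ in changed_files)
    total = sum(t for _, t in changed_files)
    return 100.0 * covered / total

# Normal development: a few well-tested new files easily clear 80%.
feature_work = [(90, 100), (45, 50)]
print(new_code_coverage(feature_work))  # 90.0

# Mass refactoring: 500 legacy files at ~50% coverage drag the
# changed-lines average far below the 80% gate.
refactoring = [(10, 20)] * 500  # each file: 20 changed lines, 10 covered
print(new_code_coverage(feature_work + refactoring))  # ≈ 50.6
```

Even though the feature work itself is well above the bar, the sheer volume of touched legacy lines dominates the ratio.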
Context:
Master has no “Previous version” to key on. We use a fixed baseline to enforce Clean as You Code. After a year, “New Code” ends up covering most of the codebase once it gets touched.
Core issue:
Clean as You Code says “bring old code up to standard when touched” - correct for incremental changes, but impossible for systematic improvements (migrations, linter rollouts, security updates).
Question:
How do other organizations handle large-scale refactoring under strict Quality Gates? Is there a recommended approach or exception mechanism we’re missing? Any recommendation from Sonar?
Our SOP recommendation works - as you noted - for normal development. When you’re doing a massive refactoring - let’s posit the renaming of a widely used method - it stops working.
In this case you would just have to do the manual override.
I think we may need to talk about this part though. The underlying assumption on our part was that what’s “new” would be reset with some regularity, presumably with every production release. This was formulated in the pre-cloud, pre-micro-release days, so the assumption was a reset every couple of weeks to every few months - not a single fixed point from initial SonarQube setup onward.
We don’t reset the master baseline regularly because master is our continuous development branch. We branch off release branches from master, each with their own baseline at the branch point.
However, I’m not sure how resetting the master baseline would solve our issue. A baseline reset would accept all issues currently existing on “New Code” and move them to “Overall Code”. This would hide quality problems rather than address them, which contradicts the clean code approach. We built a large codebase over the years and started with SonarCloud last year with a zero-bugs policy to achieve clean code incrementally by ensuring master only receives new code that passes our Quality Gates.
However, refactoring is needed from time to time, e.g. after a Java update to adopt new syntax and new APIs we want to use. This leads to bigger code changes within legacy code. That’s the problem we’re facing: the refactoring touches 500+ legacy files with low coverage (written before our quality standards were set). That is what’s blocking us.
Yes, and this is the problem people face when they mistakenly use a combination of “previous version” as the New Code setting with their build number as the sonar.projectVersion. (Every analysis resets New Code.)
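For the record, that problematic combination looks roughly like this (illustrative sonar-project.properties fragment; the build-number value is hypothetical):

```properties
# New Code definition set to "Previous version" in the UI, combined
# with a CI build number as the project version (hypothetical value):
sonar.projectVersion=1.4.2-build.8731

# Every analysis then looks like a new version, so New Code resets on
# each run and the gate only ever inspects the latest commit's lines.
# A stable, release-aligned version avoids the constant reset:
# sonar.projectVersion=1.4
```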
However, what are you achieving with a multi-year “new” code setting?
This is admirable and exactly what we suggest (it’s still admirable to see it in practice, since not everyone follows our suggestions). And that means that either you currently have 0 issues, or that you have months-old “new” issues that - let’s face it - you’ve effectively accepted if they haven’t been fixed by now.
So now, faced with these refactoring changes that will bomb your coverage, it seems like a good time to wipe the slate clean by resetting your New Code definition (after the refactoring, not before).
If there are particular issues that you still want to prioritize, you might tag them so they’re easily findable, or use the Jira integration to push them directly into your development backlog.
That… will never change? Not a good idea either. You really need to reach an equilibrium of “if we haven’t fixed it by X, then it’s clearly not important in the short term.”
Well, the branch name stays, but the baseline must be changed after a refactoring, from what I’ve learned here. When you say “Not a good idea either,” what exactly do you mean? What is the problem with using a branch name that never changes?
And by the way, we have 0 open issues - our “issue” Quality Gates are green. The problem is solely about coverage.
Well, it’s that you do eventually want a reset. Again, the idea here is to keep new code clean. But code that’s even 6 months old isn’t really “new” anymore. If it’s gone this long without being fixed, then the defect - whether it’s issues or missing coverage - has effectively been accepted. So keep SonarQube in line with the on-the-ground reality by resetting every once in a while. How often do you release to production? Production release is a good natural break here.
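For anyone scripting this: on SonarQube the reset can be pinned to a specific analysis via the Web API (SonarCloud’s API may differ). A sketch, assuming the `api/new_code_periods/set` endpoint; the server URL, project key, and analysis UUID are placeholders:

```shell
# Pin master's New Code baseline to the analysis that landed right
# after the refactoring. Requires a token with admin permission.
curl -u "${SONAR_TOKEN}:" -X POST \
  "https://sonarqube.example.com/api/new_code_periods/set" \
  -d "project=my-project" \
  -d "branch=master" \
  -d "type=SPECIFIC_ANALYSIS" \
  -d "value=<analysis-uuid>"
```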
And what does your coverage on new code look like without this refactoring? Was it good enough?
Thanks, Ann, for your support - this will definitely help us. But we’re maintaining our strict approach and will only reset the New Code period when technically necessary, such as during major refactoring operations.
Yes, absolutely - we have more than 80% coverage on new code. Our Quality Gates require a minimum of 80% coverage on new code. Having everything “green” on new code will be a prerequisite for a reset, so we can stay with our strict strategy.