Do you Clean as You Code?

At SonarSource, we believe strongly in the power of Cleaning as You Code, i.e. that it’s enough to make sure all of today’s changes are clean (covered by tests, issue-free, etc.) without worrying about legacy problems. That’s why the built-in Quality Gate focuses solely on New Code.

What about in your company? Team? Personally? Do you adhere to a policy of making sure all new and changed code is clean? Or do production pressures mean you have to let that slide sometimes? (Often?)


When we introduce changes in a class/method, we try to clean the existing code around it, especially when the scope of regression testing won’t be increased.
This follows the Boy Scout principle: always leave the campground cleaner than you found it.


Hi @lrozenblyum

I love this! Was it a hard sell with your team/management? Or did it seem to come naturally? Do y’all feel like you are seeing a benefit over time in the overall code base?



Hi @ganncamp.
As the tech lead, I promoted this idea of continuous code improvement, and it works rather smoothly.
SonarCloud helps manage this (PRs are blocked if code smells are left in the legacy code being edited).

The codebase definitely improves over time (it’s a long-term project), and if we look back and compare a year ago to now, maintainability has improved drastically.

Our main constraint: try not to execute refactorings that would increase the scope of regression testing for the QA department. It’s not a strict rule, but it’s important to take into account when pressure on the team is high, e.g. closer to an expected release date.


My impression is that cleaning as you code works well for simple issues that can easily be resolved while you’re changing those lines anyway. However, some rules are harder to resolve with the Clean as You Code mindset; I’m thinking of things like cyclomatic complexity and naming conventions. Rules of that kind are less suited to cleaning as you code in a legacy code base, especially one that used a different naming convention than the one we adopted as a company standard.

Hi @torgeir.skogen,

I’m curious whether you’re working in a language / IDE that has rename-the-method-throughout-the-codebase functionality? Or is this more about the difficulty of renaming a public API? (For which choosing to “Resolve as Won’t Fix” is probably an entirely valid way of “cleaning”…)

Similarly for complexity, couldn’t you just (have the IDE) extract a subset of code to a helper method? I’m asking this partly because that’s been part of my standard refactoring advice for high Cognitive Complexity.
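To make the extract-method suggestion concrete, here’s a minimal Python sketch with hypothetical names: the nested decisions that drive up cognitive complexity are moved into small helpers, so the top-level function stays flat.

```python
# Before: all the branching lived in one function, inflating complexity.
# After: each decision is extracted into a small, flat helper.

def shipping_cost(weight_kg: float, express: bool, international: bool) -> float:
    """Flat top-level function: the complexity now lives in the helpers."""
    return _base_rate(weight_kg) * _delivery_multiplier(express, international)

def _base_rate(weight_kg: float) -> float:
    # Extracted helper: one small decision, easy to test in isolation.
    if weight_kg <= 1.0:
        return 5.0
    if weight_kg <= 10.0:
        return 12.0
    return 25.0

def _delivery_multiplier(express: bool, international: bool) -> float:
    # Extracted helper: combines the two flags without nesting in the caller.
    if express and international:
        return 3.0
    if express:
        return 1.5
    if international:
        return 2.0
    return 1.0
```

Each helper is now small enough to sit comfortably under a typical complexity threshold and can be unit-tested on its own.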


I do have access to those things. It’s just that in some of the projects I work on, these topics haven’t been a focus. When I’m submitting a merge request to fix some isolated bug, it’s not hugely relevant to pull in changes to reduce cognitive complexity from, say, 72 to below the threshold of 25, or something similar. And when the project uses snake_case instead of the company-standard camelCase, it feels wrong to start adopting the new naming convention on a change-by-change basis. It would be much more relevant to do big sweeping changes.

As far as I can tell, resolving these issues as “Won’t Fix” will remove these code smells from the report once the branch gets merged into the main branch. That’s not what I want.


Yeah, that makes total sense. In retrospect, I don’t know what I was thinking to even ask.

And BTW, for the project that uses snake_case you might consider using an inherited Quality Profile with a different value for the naming convention rule’s regex.
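By way of illustration (the exact regexes below are assumptions modeled on typical defaults, not pulled from any particular SonarQube rule), the idea is that the inherited profile overrides the naming rule’s pattern so snake_case names pass instead of camelCase ones:

```python
import re

# Hypothetical patterns: a camelCase default and the snake_case
# override an inherited Quality Profile might substitute for it.
CAMEL_CASE = re.compile(r"^[a-z][a-zA-Z0-9]*$")
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def matches(pattern: re.Pattern, name: str) -> bool:
    # A name is compliant only if the whole identifier fits the pattern.
    return pattern.fullmatch(name) is not None
```

With the override in place, `total_count` would be compliant in that project while the parent profile elsewhere keeps enforcing `totalCount`.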


I actually have to restrain myself from cleaning as I code. I do believe in refactoring often, but it is usually much safer to separate cleaning from the introduction of new behavior.

Ideally, the workflow would be:

  • identify code to be cleaned;
  • add tests that pass with the existing code;
  • refactor the original code, without changing the expected results in the tests;
  • merge the refactored code.

Only then would I look into fixing the bug or adding the new behavior.
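The steps above amount to characterization testing. A minimal Python sketch of the first two steps, with a hypothetical function standing in for the real code:

```python
# Step 1: identify code to be cleaned (a slightly messy implementation).
def normalize_name(raw: str) -> str:
    s = ""
    for ch in raw.strip():
        if ch.isalnum() or ch == " ":
            s = s + ch
    return " ".join(s.split()).lower()

# Step 2: characterization tests, written against the EXISTING behavior.
# They pass before any change, so a failure later points squarely at
# the refactoring rather than at an intended behavior change.
def test_normalize_name():
    assert normalize_name("  Ada  Lovelace ") == "ada lovelace"
    assert normalize_name("Grace-Hopper!") == "gracehopper"

# Step 3 would then refactor the body (e.g. into a comprehension)
# while these tests stay green; only afterwards touch the behavior.
```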

This ensures that if there is a regression, we can more easily determine whether it came from the refactoring or from the intended behavior change.

This is also easier on the reviewers: a refactoring can span many lines, and it is easier to confirm that a big refactoring introduces no behavioral change than to confirm that new behavior has no unexpected consequences.

The idea of clean-as-you-code is great, but the implementation in SonarQube actually discourages this approach!

There are two main problems that I see:

  1. If you change a line of code at all, you are blamed for all issues on that line.
    Typically this might be a method signature. You could easily change a parameter that accepts a concrete List to an IList, for example, but then you get blamed for two other issues with the method signature which are a) unrelated to your changes and b) much harder to resolve. Let’s say one of them is a “too many parameters” issue in some shared code: changing the method signature to have fewer parameters, and changing all callers of this method across all projects that use it, could be very risky and take significant time when all you really wanted to do was change a label from red to green. Yet changing the List => IList was a trivial change you could make as you coded.

  2. If you touch a method that has method-level violations, e.g. cyclomatic complexity, you are then blamed for that complexity. Let’s say the threshold is 10 and the code is currently at 15. Your change either leaves the complexity unchanged or reduces it to 14: still over the threshold, but better than it was.
    You are now blamed for the complexity of the method and, again, refactoring it may be far too big a task to tackle (particularly in a mature codebase), so you are reluctant to make the change at all.
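The List => IList example in point 1 is C#, but the same low-risk, program-to-the-interface clean-up exists in most languages. A hypothetical Python analog widens a parameter’s type hint from the concrete `list` to the abstract `Sequence`:

```python
from collections.abc import Sequence

# Before (hypothetical): the concrete type over-constrains callers.
def total_before(values: list) -> int:
    return sum(values)

# After: accepting the abstract Sequence lets callers pass tuples,
# ranges, etc. A one-line clean-up made in passing, with no behavior
# change, which is exactly the kind of edit that triggers the blame.
def total_after(values: Sequence[int]) -> int:
    return sum(values)
```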

A number of developers have expressed concerns about this and are now less inclined to “clean as they code” than before we introduced Sonar!

In theory perhaps you “should” fix all the issues because you are touching the code, but in practice, in the commercial world, with pressure on developers to deliver the features/stories assigned to them, they may not be able to justify spending this time to fix something that “ain’t broke”.

I think if SonarQube did not attach blame to developers when the change they make does not make the codebase any worse (or improves it, but does not fix everything), then our developers would be more inclined to “clean as they code”.


@sodul and @tbutler, y’all make great points that play well off each other.

@tbutler you mention method-level issues. Are you saying that if you touch the method signature, old issues are reassigned? I’d have expected them to be unchanged.

@ganncamp I would expect that too, but it is not the case 🙂
I raised a support issue about this in the past, about 6 months ago; there is a long discussion in SUPPORT-33763 where Antoine confirms the issue but does not provide a timeline for resolving it. I have not seen any updates to suggest it has been fixed.


Appreciate the detailed feedback on Clean as You Code and expectations.

To add to @tbutler’s points: the quality gates are a problem because, when refactoring, the issues (lack of code coverage, complexity) prevent folks from merging. So their only way to merge is to mark genuine issues as false positives. We ended up building a pipeline to ‘reset’ false-positive claims on code coverage.

We want Sonar to be an incentive to improve code, and that’s where quality gates blocking a merge are great, because they do force our developers to review the issues. If there were a way in Sonar to acknowledge an issue but ‘ignore it for this PR’ so the merge could move forward, it would avoid the abuse of marking things as False Positive.