How do organisations typically decide on the next course of action once projects are on Sonar?

We are at a stage in our organisation where we will be setting up all critical projects (repos) with SonarCloud for PR analysis as well as baseline analysis.

All our repos are existing codebases with commits going back 5-10 years. This means that there are coverage and quality issues in the baseline (code on the main branch).

As an initiative, we are setting up guidelines and recommendations for product teams to follow to improve the current state of their codebases. At a high level, these are the guidelines:

Over the next year, all teams should aim to incrementally improve their baseline (code on the main branch) code coverage and quality to an acceptable state.

  1. Services which have > 0 security vulnerabilities & hotspots
    Review all the vulnerabilities and weed out the false positives. There is a good chance that some of these vulnerabilities are false positives, so this is the right moment to mark them as such (a web-API sketch for this follows the list). Once the false positives are cleared, plan to fix the legitimate vulnerabilities.
    Acceptable state: 0 security vulnerabilities and hotspots.

  2. Services which have code coverage <= 50%
    Identify whether these services have files that should not be tracked under code coverage but are being tracked anyway. Update the exclusion criteria to get more accurate code coverage data (a configuration sketch follows the list).
    Identify whether these services have critical paths or business logic that are uncovered by tests. If so, use this low code coverage as a foundation to plan improvements to the functional coverage; that will consequently increase the code coverage as well.
    Acceptable state: Although there is no “ideal code coverage number,” we would like to offer the general guidelines of 50% as “acceptable”, 65% as “commendable” and 80% as “exemplary.”

  3. Services which have reliability rating C or D
    Focus on prioritising the blocker and critical issues (if there are any). If they are false positives, mark them as such. If they are legitimate reliability issues, plan to fix the bugs as part of the improvement plan.
    Acceptable state: 0 critical/blocker bugs.
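
For marking confirmed false positives (point 1), the web UI is the usual route, but it can also be scripted. A minimal sketch against the standard web API (the issue key and server URL are placeholder assumptions; the token is passed as the username, per the API docs):

    # Mark a single issue as a false positive via POST api/issues/do_transition.
    # ISSUE_KEY is a hypothetical placeholder; list keys first with api/issues/search.
    curl -s -u "$SONAR_TOKEN:" -X POST \
      "https://sonarcloud.io/api/issues/do_transition" \
      -d "issue=ISSUE_KEY" \
      -d "transition=falsepositive"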
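
For the coverage exclusions (point 2), the change usually lives in the project's sonar-project.properties (or the equivalent scanner parameters). A minimal sketch; the glob patterns are hypothetical examples to adapt to your layout:

    # Exclude generated code and migrations from coverage measurement only:
    sonar.coverage.exclusions=**/generated/**,**/migrations/**

    # Exclude vendored code and build output from analysis entirely:
    sonar.exclusions=**/vendor/**,**/build/**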

My questions

  1. Does this sound reasonable to you? Any suggestions?
  2. I’d like to hear from SonarSource folks, as well as the community here: how do other organisations go about improving the state of coverage and quality? Do you look at existing issues and try to fix them, or focus only on preventing new issues via PR analysis?
  3. In general I feel that setting up the integration with code quality tools is the easiest part. What is complex is getting people to adopt it and follow it religiously. How do you get around the culture problem? Do you mandate adoption from the top via leadership?

Hi,

From a SonarSource perspective, what we advise is focusing on the quality of New Code (added or edited). We call it the Clean as You Code methodology. The idea is that by making sure the code you commit today is clean, you will gradually, naturally and automatically improve the quality of the overall code base.

The Git of Theseus helps visualize this. It shows how much of the code added in a given year remains today. I’ve lifted and annotated a chart from that blog post:

[annotated Git of Theseus stack plot: cohorts of code by year added, with an arrow at the beginning of 2001]
So imagine that the project shown in the graph had started enforcing Clean as You Code where my arrow is, at the beginning of 2001. Only the yellow and purple code from the early years would be “dirty”. That means that by the end of the graph, roughly 2017, only… a tenth? of the code would have bugs, vulnerabilities, poor coverage, high duplications, etc.
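
If you want the same picture for your own repos, the tool behind that chart is open source. A minimal sketch, assuming the command names from the git-of-theseus README (the repo path is a placeholder):

    pip install git-of-theseus

    # Walk the repo history and write JSON stats (cohorts.json, survival.json, ...)
    git-of-theseus-analyze /path/to/your/repo

    # Render the stack plot of how much code from each year survives
    git-of-theseus-stack-plot cohorts.json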

Now specifically to your proposed acceptable states:

On your first one (0 security vulnerabilities and hotspots): this is a good plan if you can devote the bandwidth to it. Be aware that for large / older projects, the volume here may be high. I would focus on Blockers to start with.

On updating the coverage exclusions: yes. Absolutely!

As for the uncovered critical paths and business logic: rather than pursuing this proactively, I would recognize that these are the parts of the code most likely to be worked on over and over again. In other words, just by following Clean as You Code and requiring a high level of coverage on all New Code, these parts of the code will gain coverage without explicitly devoting resources to “Cleaning the Code.” Business as usual will get the job done.

Explicitly, we recommend a minimum of 80% coverage on all New Code, with no recommendation for overall coverage. It will follow naturally from covering New Code.

On your third acceptable state (0 critical/blocker bugs): this sounds reasonable if you have the resources to devote (many don’t). At the same time, if there are bugs in old code that lies outside the critical paths and business logic, then users likely aren’t being impacted by them. So if you’re budgeting time, maybe you don’t need to spend the budget here. You can just let Clean as You Code handle the existing bugs in the critical paths and business logic as those parts of the code get worked on in the normal course of satisfying business requests.


And your culture question… woo. This is a big one.

And the biggest part of this is support from the top. In my experience, good developers want to write good code. Give them the tools to understand what needs fixing and the time to fix it, and they will happily do the best job possible. The stumbling blocks I’ve seen personally are:

  • “The business doesn’t care if there are tests, just ship the feature”
  • “We don’t have time for you to ‘fix’ that. The business doesn’t think it’s broken and we have a deadline to meet”
  • and so on

So having top-down support, from the beginning, for “We will not ship code that doesn’t meet the standards” is crucial.

That said, a plan of “over the next year, find time to clean this up” is going to be hard on everyone. Where do you find the time? And which features / releases do you delay in order to make that time?

If, instead, you focus purely on the quality of New Code, everything gets simpler. The default Quality Gate is all - and only - about the quality of New Code. So if you - and management - say: we don’t ship until and unless the New Code is clean, i.e.

  • Coverage on New Code >= 80%
  • Duplicated lines in New Code < 3%
  • Maintainability Rating of New Code = A
  • Reliability Rating of New Code = A
  • Security Rating of New Code = A
  • Security Hotspots Reviewed = 100%

Then

  • you will know you’re not introducing any new problems into the codebase
  • you will naturally clean up the overall code over time in the course of “business as usual”
  • whether or not the Quality Gate is green / passing becomes the only criterion for whether or not you’re making progress toward your goals, and it’s clear, simple and easy to enforce mechanically (see the sketch below)
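
To make that mechanical enforcement concrete: most CI setups simply fail the build when the Quality Gate is red. A minimal sketch using the SonarScanner CLI and its sonar.qualitygate.wait analysis parameter (the project key is a placeholder; your scanner invocation may differ by version):

    # Run analysis and have the scanner poll the server for the
    # Quality Gate result, exiting non-zero if the gate fails.
    sonar-scanner \
      -Dsonar.projectKey=my-org_my-service \
      -Dsonar.qualitygate.wait=true \
      -Dsonar.token="$SONAR_TOKEN"

    # A non-zero exit fails the pipeline, so code that breaks the
    # New Code conditions above never merges or ships.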

And… okay. I’ll get off my soapbox now.

 
:smile:
Ann
