Best practice on initial and subsequent scans for legacy code bases

We’re new to SonarQube and trying to wrap our heads around the best practice/recommendation for setting up a large legacy code base for scanning. Since the code goes back several years, we’d like to limit the scans so we’re not penalized in the reports for inherited code, for how standards have changed over the years, etc. — so the thought is we’d leverage the date limit capability within the scans.

We also use a pull request-based workflow: all changes are made in temporary hotfix or feature branches and then merged via pull request into a permanent branch (develop, master, etc.). We have the integration set up so that pull requests are scanned and decorated in our SDLC system as expected.

Given this background, what we’re a little fuzzy on is whether these are the proper steps, in the proper order, to get everything going:

  1. A full initial scan of all permanent branches with the desired date limit.
  2. Pull request-based scans to detect newly created issues.
  3. Periodic re-scans of the permanent branches (if so, how often?), or possibly commit-based triggers that scan a permanent branch when a pull request is completed and the changes are actually merged in.
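For what it's worth, the three steps above might look roughly like the following scanner invocations. This is only a sketch: the project key and branch names are placeholders, branch and pull request analysis parameters require a commercial edition of SonarQube, and the "date limit" itself is configured server-side as the project's New Code definition rather than as a scanner flag.

```shell
# 1. Initial baseline scan of a permanent branch (the date limit lives in
#    the server-side New Code definition, e.g. "Number of days").
sonar-scanner \
  -Dsonar.projectKey=my-legacy-app \
  -Dsonar.branch.name=develop

# 2. Pull request scan, typically run by CI on each PR update.
sonar-scanner \
  -Dsonar.projectKey=my-legacy-app \
  -Dsonar.pullrequest.key=123 \
  -Dsonar.pullrequest.branch=feature/my-change \
  -Dsonar.pullrequest.base=develop

# 3. Post-merge re-scan of the permanent branch, triggered by the merge commit.
sonar-scanner \
  -Dsonar.projectKey=my-legacy-app \
  -Dsonar.branch.name=develop
```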

Does that approach make sense or are there things we’re not considering? Any recommendations would be appreciated.

Hi,

Welcome to the community!

In fact, we formulated the Clean as You Code methodology for just your situation and crafted the SonarQube UX to facilitate it. This blog post should help.

I’ve done a little editing of your steps. See what you think:

For branches, you can analyze everything all the time, because you’re only paying attention to changes on New Code.
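The knob behind "only paying attention to New Code" is the project's New Code definition, which can be set in the UI or via the Web API. A hedged sketch — `SONAR_HOST`, `SONAR_TOKEN`, and the project key are placeholders for your own server, a user token, and your project:

```shell
# Hypothetical example: define New Code for the project as the last 30 days,
# so older legacy issues don't fail the Quality Gate.
curl -u "${SONAR_TOKEN}:" \
  -X POST "${SONAR_HOST}/api/new_code_periods/set" \
  -d "project=my-legacy-app" \
  -d "type=NUMBER_OF_DAYS" \
  -d "value=30"
```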

Hopefully this makes sense,
Ann