But… why? When I run a PR analysis, I don’t care about issues in files that I didn’t touch. I only care about the issues that are actually in files changed in that PR.
We do not currently support incremental analysis as it can result in less accurate scanning results. This has come up quite regularly, especially in the context of branches and pull requests, and internally there is a lot of discussion around this, but I don’t think we have anything to share at this time.
But sometimes I do see incremental results about only the changes in my branch, and sometimes I get the full set of errors across the entire repo. Do you happen to know why that would be? Having this weirdly inconsistent behavior seems worse than either option.
I couldn’t agree more with Melissa. I hope you decide to introduce incremental analysis.
As a new (and willing to pay) user of SonarCloud, I have to onboard my existing code repositories. Some of these repositories host vast code bases and some are legacy. If I were to implement SonarCloud today, it would report an enormous number of bugs, vulnerabilities, and code smells per PR. In fact, there would be so many of these historical issues that it would be impossible to identify any new ones. This effectively renders SonarCloud useless and provides no benefit to my organization.
Is there a recommended way to solve this? Going back and fixing historical issues is simply not an option.
To my mind, your post really presents two different issues:
1. It takes a long time to analyze your entire code base
2. It’s not feasible to fix old issues; you only want to see the new ones
Right now, I can’t help you with #1, but #2 is exactly why we introduced the New Code period / Leak Metaphor and the Clean as You Code methodology. The idea is that SonarQube helps you focus on your new code, so that the changes going into production are clean even if the legacy code isn’t.
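For anyone onboarding a legacy repository, a minimal sketch of how this can be wired up at analysis time (assuming the SonarScanner CLI and a hypothetical project key; `sonar.newCode.referenceBranch` is the parameter that defines new code relative to a branch on recent SonarQube versions, so older issues on `main` stay out of the new-code view):

```
# sonar-project.properties — minimal sketch, names are illustrative
sonar.projectKey=my_org_my_project    # hypothetical project key
sonar.sources=.

# Treat only changes relative to this branch as "new code";
# pre-existing issues on main won't fail the new-code quality gate.
sonar.newCode.referenceBranch=main
```

With a setup like this, the quality gate evaluates only conditions on new code, which is what lets you adopt Clean as You Code without first fixing the historical backlog.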