I have a question about scanning Pull Requests and generating downloadable reports.
Our current situation:
Our Quality Gates run during Pull Requests, so scans are performed on PRs, not on the master branch.
The “Overall Code” view in SonarQube reflects the master branch and changes over time, so it does not point to a fixed scan/report for a given PR or release.
We do not scan master regularly, so “Overall Code” is not useful.
We need a way to produce a fixed, per-version report for each new code version (ideally per PR or per merge commit) that can be downloaded as an artifact and referenced later.
It should still analyze the pull request and report issues on the PR before it is merged.
Questions:
Is it possible to run a full SonarQube analysis for a Pull Request that produces a complete report/artifact (e.g. HTML/PDF or other downloadable format) tied to that PR or a specific commit/version?
If SonarQube cannot produce such fixed downloadable reports per PR natively, what is the recommended integration pattern to achieve this requirement? Are there any best practices for keeping a stable reference to a specific scan/report for later audits?
Environment details (if relevant):
SonarQube Server v7.4.2
PR-based Quality Gates with Azure DevOps Integration
Goal: stable, downloadable scan reports covering the whole codebase for each PR
This simply isn’t available for PRs, only for branches.
Why? It’s best practice to re-analyze the branch after every merge. Unfortunately, there are issues that simply can’t be picked up in PR analysis. You should really consider adjusting your practices.
The closest you could come here would be an analysis of the PR’s underlying branch. But again, what you should be analyzing is the target branch after merge.
Seriously?
7.4 was released in October 2018, and it’s not even an LTA. You are missing so much functionality, so many upgrades, bug fixes and security patches.
Thank you for the quick response and for the clarification.
Let me clarify our core requirement, as I may not have explained it well…
Our process is designed to have a strict Quality Gate within the PR pipeline. However, it may happen that an analyzer is updated or some rules are changed in SonarQube, and new issues suddenly appear in old, unchanged code.
Your suggestion to analyze master after the merge would catch these issues, but for our workflow, that is too late. The PR would have already been approved, and potentially problematic code would be on the main branch. Our goal is to ensure the entire codebase passes the rulesets before the merge occurs, and to empower developers to fix these issues within the PR itself.
You mentioned that downloadable reports are only for branches and that some issues can’t be detected in a PR analysis. With that in mind, could you advise on the best practice for our specific scenario?
Triggering a Branch Analysis from a PR: Is there a recommended way to configure our PR pipeline to trigger a full branch analysis on the PR’s source branch? This would give us the complete analysis and the downloadable report you mentioned.
Understanding PR Limitations: Could you give an example of an issue that cannot be detected in a PR analysis? Understanding the specific limitations would help us assess our risks.
Essentially, we want our PR analysis to be the single source of truth for quality, creating a versioned report we can archive. Re-running a full scan after the merge seems redundant for our use case.
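To make question 1 concrete, here is roughly what we imagine switching between in the pipeline. The parameter names are the standard scanner analysis parameters; the project key and branch names are just placeholders for illustration:

```properties
# sonar-project.properties (placeholder project key and branch names)

sonar.projectKey=my-project

# What we run today: PR analysis (issues reported on changed lines only)
# sonar.pullrequest.key=123
# sonar.pullrequest.branch=feature/my-change
# sonar.pullrequest.base=master

# What we are asking about: a full branch analysis of the PR's source
# branch, which would give a complete result set for that code version
sonar.branch.name=feature/my-change
```

The idea would be to pass `sonar.branch.name` instead of the `sonar.pullrequest.*` parameters in the PR pipeline, so the source branch gets a full analysis before merge.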
I’m glad to hear you’re not running a 6-year-old version.
I’m not an expert in pipeline syntax. Probably the easiest thing is just to analyze every branch and not allow PR creation until you have a clean quality gate. Just make sure your housekeeping settings don’t keep too many branches for too long, to avoid DB bloat.
PR analysis only reports issues raised on changed lines. A trivial example: in a PR, I delete the only use of a variable. The declaration itself is untouched, so no unused-variable issue is raised on it. A more significant example: a method is changed so that it can now return null. Every value returned from that method should now be null-checked, but the call sites weren’t edited in the PR, so those issues aren’t raised.
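To illustrate that second blind spot with hypothetical code: suppose a PR changes only `findUser()` so it can now return null. The call site in `main()` is on unchanged lines, so PR analysis raises nothing there, even though it is now the risky code:

```java
import java.util.Map;

public class UserLookup {

    private static final Map<String, String> USERS = Map.of("alice", "Alice A.");

    // BEFORE the PR, this method always returned a non-null default.
    // AFTER the PR (the only changed lines), it can return null:
    static String findUser(String id) {
        return USERS.get(id); // may now be null for unknown ids
    }

    public static void main(String[] args) {
        // Unchanged line: PR analysis raises no issue here, yet calling
        // name.toUpperCase() without a null check would now throw an NPE.
        String name = findUser("bob");
        System.out.println(name == null ? "unknown user" : name.toUpperCase());
    }
}
```

A full branch analysis sees the whole file, so it can flag the unguarded call site; the PR diff alone cannot.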
What actually gets deployed into production? The PR? Or post-merge main? IMO, what you need a record of is what gets deployed, not the ingredients that go into it.