Must-share information (formatted with Markdown):
- which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension): SonarQube 7.5.5
- how is SonarQube deployed: zip, Docker, Helm: Unsure
- what are you trying to achieve: PR Code Coverage Diff reports with intelligent test selection
- what have you tried so far to achieve this: Unsure how to proceed
Do not share screenshots of logs – share the text itself (bonus points for being well-formatted)!
I’m trying to set up Code Coverage Diff reports for PRs (with PR decorations). The catch is that I’d like to use intelligent test selection. That is, when you open a PR and it kicks off a suite of pre-merge CI tests, it runs only the tests relevant to the changes in that PR.
This creates an issue for generating Code Coverage Diff reports, because in order to compare apples-to-apples, you’d need the baseline Code Coverage report (from `main`) to run the same suite of tests as the PR branch, and it’s impossible to know which tests will be needed until the PR branch is pushed to the repo.
I am able to do this. When I open a new PR, I can figure out which tests need to be run for that PR, and I can run those tests against the `main` branch to get baseline code coverage metrics to compare against. So theoretically it should be possible to generate a Code Coverage Diff report even for that tailored test selection.
My question is “How does SonarQube handle such a thing?”
For example, suppose 5 different engineers open 5 different PRs around the same time, each building off of the same latest commit from `main`. Each of the 5 PRs changes a different area of the code base, so each PR requires an entirely different test suite to validate. I can run duplicate test suites for all 5 against the `main` branch, in order to get the baseline Code Coverage data to compare against. But how can I let SonarQube know which baseline data corresponds to which PR?
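To make the question concrete, here is a sketch of what I imagine the two scans for a single PR would look like, using the standard SonarQube pull-request analysis parameters. The project key, PR number, and branch names are placeholders, and the idea of publishing each tailored baseline as its own short-lived branch is just my guess at a workaround, not something I know SonarQube supports:

```shell
# Baseline scan: the tailored test suite run against main, once per PR.
# Assumption: publishing it under a per-PR branch name so it doesn't
# overwrite the real analysis of main.
sonar-scanner \
  -Dsonar.projectKey=my-project \
  -Dsonar.branch.name=baseline-pr-1234 \
  -Dsonar.coverageReportPaths=coverage-baseline.xml

# PR scan: the same tailored test suite run on the PR branch.
sonar-scanner \
  -Dsonar.projectKey=my-project \
  -Dsonar.pullrequest.key=1234 \
  -Dsonar.pullrequest.branch=feature/my-change \
  -Dsonar.pullrequest.base=main \
  -Dsonar.coverageReportPaths=coverage-pr.xml
```

The open question is essentially whether `sonar.pullrequest.base` (or anything else) can be pointed at a per-PR baseline like `baseline-pr-1234` instead of `main`.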
Is this functionality supported? Is there a viable workaround? Or does the Code Coverage Diff report feature really require the same suite of tests to be run for all PRs (and therefore isn’t able to accommodate intelligent test selection)?