Running on GitHub Actions, Pull Request decoration enabled
Background
We have recently configured SonarQube for one of our existing projects. We have enabled Pull Request decoration for GitHub Pull Requests (with checks and tests running in GitHub Actions workflows). The Pull Request decoration is working fine. Specifically, the decorator application/bot will add a report comment to the PR but will not actually cause the build to fail (via either a failed GHA step or a failed check on the PR) if the Quality Gate does not pass.
The Issue
The pull request application/bot adds a failing check to the commit on the master branch in GitHub when the quality gate fails. This results in a red "X" next to the commit in the commit history, which has caused some confusion (panic) among our developers, especially when the Pull Request was all-green and seemed fine. Is there a way to not have the failing check added?
Additional
I should note that even when a Pull Request fails the Quality Gate, SonarQube doesn't add a failing check to the PR (which is the desired behavior in our case).
Additionally, the failing check doesn't trigger any alerts or break any workflows. As far as I can tell, the only way to find the failed check is to view the commit history at github.com///commits/master.
The thing that keeps triggering the failure is the New Code metrics (specifically new code coverage) for the last 30 days of commits. My understanding is that this threshold is used for both the PR status (which, even if it's not strictly enforced, we do still want) and the master branch, so changing the thresholds for master would affect the PR threshold as well. So I don't think I can just configure away the Quality Gate until the master branch passes, right?
Can you share your full pipeline? From what I'm reading, it looks like you have analysis enabled for both push and pull_request, with the Quality Gate status enabled only for push, and that's what you want to remove...?
Yes, the same Quality Gate is applied to both PRs and branches, although conditions on overall code aren't applied to PRs.
Hi Ann, I don't think I can share the full pipeline, but you are correct that we have an analysis that runs on both push and on pull_request. The Quality Gate status is apparently enabled for push, but we don't want that enabled right now. Is that something that can be enabled/disabled, or is it only ever enabled? To be clear, we want the analysis to run and be available via our SonarQube server, but don't want SonarQube to add the failing checks to GitHub.
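To sketch the triggers (placeholder names here, not our actual file), the relevant part of our workflow looks roughly like:

name: build-and-analyze
on:
  push:
    branches: [master]
  pull_request: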
Thanks,
Ben
Hi Ann,
Is "remove or comment out the check" the same as running or not running the scan via sonar-scanner? Or is there something like a -Dsonar.checks.qualityGate=False option that can be set/unset? Because if we don't run the scan on push to master, then we would also lose the analysis reports for the master branch on the SonarQube web server, right?
Thanks,
Ben
Hi Ann,
What do you mean by "pipeline"? Do you mean the GitHub Actions pipeline, or is there another pipeline somewhere that I'm unaware of?
Our GitHub Actions pipeline has a final step of "run-sonarqube-analysis", which executes the sonar-scanner ... command with a number of different arguments, none of which appear to be related to the Quality Gate. And as far as I can tell, we're not making any explicit calls to a Quality Gate status step.
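For illustration, that final step looks roughly like this (a sketch; the project key and secret names are placeholders, not our real values):

      - name: run-sonarqube-analysis
        run: >
          sonar-scanner
          -Dsonar.projectKey=our-project
          -Dsonar.host.url=${{ secrets.SONAR_HOST_URL }}
          -Dsonar.login=${{ secrets.SONAR_TOKEN }}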
I don't know if it's relevant, but the failure doesn't show up as part of our normal GitHub Actions workflow/pipeline. It gets added to the pushed commit's check list separately, as a step called "SonarQube Code Analysis" which runs under a workflow(*) that shares a name with the decorator application/bot. So I have no idea where this step/job/workflow is coming from. As far as I can tell it isn't running on GitHub, so does it get run on the SonarQube server and then added as an external step via some method I'm not familiar with?
(*) I call it a "workflow" because it's listed in the same pane as our specified workflow (the one which runs our tests, the SonarQube analysis, etc.), but I'm not sure if that's how it's being run or if that's what it is. Our specified workflow includes an on: push subtitle, while the workflow named after our decorator application/bot has no subtitle at all.
If the call was happening in GitHub, wouldn't that cause a failure in the workflow/pipeline in which the analysis was called? Our "failure" is a failed check which happens outside of the workflow that calls the analysis.
Relating to the above and your previous comment, I grepped for "sonarqube-quality-gate-check" in our repository and it returns no matches. This leads me to believe that this is getting called either by the SonarQube web instance or by the sonar-scanner under the hood.
The failing check happens under a "workflow" called "SonarQube Pull Request Decorator". This is the name of our decoration application/bot. The only "step" in this "workflow" is called "SonarQube Code Analysis". Neither of these strings is found by grep either.
The "SonarQube Code Analysis" step (the one with the failing check) does not appear anywhere under the "All workflows" section at github.com///actions. So I again can't find where this workflow is defined/called/run.
In the SonarQube web instance's Compute Engine logs, I see the following:
... INFO ce[][o.s.c.t.CeWorkerImpl] Execute task | project=... | type=REPORT | branch=master | branchType=BRANCH | ...
...
... INFO ce[...][o.s.c.t.p.a.p.PostProjectAnalysisTasksExecutor] Pull Request decoration | status=SUCCESS | time=0ms
... INFO ce[...][o.s.c.t.p.a.p.PostProjectAnalysisTasksExecutor] Report branch Quality Gate status to devops platforms | status=SUCCESS | time=2441ms
... INFO ce[...][o.s.c.t.CeWorkerImpl] Executed task | project=... | type=REPORT | branch=master | branchType=BRANCH | ... | status=SUCCESS | time=31330ms
I notice that for Pull Request branches the "Pull Request decoration" step has non-zero time and the "Report branch Quality Gate status to devops platforms" step has a time of 0ms. For runs on the master branch that's reversed, and the devops report time is non-zero. I don't know if this is relevant to the failures/decorations we're seeing.
Hi Ann,
This is an edited screenshot of what I'm seeing.
As you can see, there are 3 workflows showing up for this commit. The first two (names redacted in red) are expected workflows, with definitions in the .github/workflows folder. The second one calls the "run-sonarqube-analysis" step, which, as previously discussed, runs the Sonar analysis (but does not use a sonarqube-quality-gate-action).
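For reference, my understanding from that action's README is that an explicit gate step would look something like the following; we have nothing like this anywhere, and the exact inputs shown are my assumption:

      - name: Check Quality Gate
        uses: sonarsource/sonarqube-quality-gate-action@master
        timeout-minutes: 5
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}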
The third workflow, titled "SonarQube Pull Request Decorator", does not have a definition in .github/workflows. It also does not have a workflow history anywhere in the "Actions" tab. The only way we see it is by going to the commit history and following the red X.
The red X only appears on commits which trigger the second workflow (the one with the run-sonarqube-analysis step), so they are clearly related, but it's not something we can control.
So what you're objecting to is the Quality Gate decoration, which is a core feature of the integration.
And the reason you object is that it panics the developers and that you don't strictly enforce the conditions that are failing, even if you do want them checked.
Based on that understanding, I'm pretty confident in saying we're probably not going to change anything related to this in the direction you would want.
I'm nonetheless going to refer it to the Product Managers.
Hi Ann,
That seems largely correct, with two follow-ups:
I guess part of our objection is that the commit gets marked as broken even when everything for the commit itself works fine. It seems like an odd inversion in terms of ordering a rollout. For example, since we're introducing/rolling out SonarQube, the normal order of operations would seem to be to limit things to the per-commit/PR level first (e.g. start with PR decoration, then move to PR decoration with Quality Gate checks to enforce each PR), then move to whole-branch new code analysis and enforcement, and then to overall code enforcement. Jumping straight to failing the build without first failing the PR seems counterintuitive.
Ah, but according to your own standards (the Quality Gate) it's not fine. That's why we mark it broken.
Yeah, this is a fair point, and to me it's the real question: why didn't the PR fail its Quality Gate? Causes that I can think of off-hand:
1) coverage isn't being reported at all on PRs (IIRC, if you fail to pass in any coverage reports, we figure you just don't care and don't sound the alarm about "null < 80% Coverage on New Code")
2) it's not a failure in any single PR, but a cumulative one. The math on that is a little head-scratching, but just wait...
3) by default, we don't enforce coverage and duplication conditions on new code when fewer than 20 lines are changed. So each PR limboed under the wire, but the branch was too big. Add up multiple small PRs and you can end up with a big hit to the target branch; there's a worked example below. BTW, there's a global setting to turn this off for the instance.
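To make the cumulative math concrete (hypothetical numbers, not from your project): suppose three PRs each touch 15 lines, none of them covered by tests. Each PR changes fewer than 20 lines, so the coverage condition is skipped and each PR's gate passes. Once all three merge, the branch's new code contains 45 uncovered lines, which can easily pull Coverage on New Code below an 80% threshold, so the branch analysis fails even though every PR looked green.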
⌠you mean the Quality Gate decoration? Yeah. We donât just document it, we brag about it:
Hi Ann,
Firstly, it sounds like this is desired behavior, there's no way to turn it off (short of disabling the application, which also does Pull Request decoration), and there won't be one in the foreseeable future. (And there's no way to have one Quality Gate for PRs and a separate one for the branch analysis, correct?) In that case, I think you've answered the initial question/issue, and we can proceed on our end. Thank you for your help and patience on this topic.
I do, however, want to add to the ongoing discussion here, just for whatever additional context it might provide.
On the topic of marking the build as broken, I think it's important to note that not everyone would define "broken" that way. My company and the project have been around for 10+ years. For all that time, a red X on a commit has meant "there is an issue with the current commit that requires immediate attention". Now maybe that's a meaning that should change internally, but:
1) I didn't know that SonarQube would add the failing check. I had seen the section of the documentation that you highlighted, but incorrectly assumed it was referring to the sonarqube-quality-gate-action. I still can't find anything specific that describes the failing-check behavior.
2) Because I didn't realize SonarQube would add a failing check to the master branch, we did not have a discussion about the meaning of the failing check.
This is all relevant because I'm pushing for SonarQube adoption, but I don't have absolute authority within my organization. It's a lot easier to get others on board (both for deeper integration and for adoption into more projects) if SonarQube is "that tool that helps me with ______" and not "that thing that always marks the build as broken". Additionally, the people most likely to be entrenched and resistant to new tooling are the ones who have been on a project the longest - the most senior and influential. So I'm trying to ease people into it, focusing on Pull Requests (the individual level) rather than whole-branch issues (the cross-team level) that are out of scope. That way people build familiarity and positive experiences, and I can leverage that positive sentiment into more forceful integrations.
Anyways, thanks again for your help clarifying the issue, and I'll mark this topic as resolved.
Ben
Unfortunately, there's no way to decouple the behavior.
I appreciate your delicate position, having been the SonarQube pioneer and advocate at my previous company, and I would love to help you find a way to make this work. Since the sticking point is the failure on the main branch, can you create a new thread where we can explore that? I still feel that's the crux.