SonarQube GitHub decoration is unexpectedly adding a failing check to master commits

Must-share information:

  • SonarQube version: 9.9, Developer Edition
  • SonarScanner version: sonarsource/sonar-scanner-cli image
  • Running on GitHub Actions, with Pull Request decoration enabled

Background
We recently configured SonarQube for one of our existing projects and enabled Pull Request decoration for GitHub Pull Requests (with checks and tests running in GitHub Actions workflows). The Pull Request decoration is working fine: the decorator application/bot adds a report comment to the PR but does not actually cause the build to fail (via either a failed GHA step or a failed check on the PR) if the Quality Gate does not pass.
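For context, the relevant parts of the workflow look roughly like this (a simplified sketch; the trigger list, job/step names, and scanner invocation are representative placeholders rather than our exact configuration):

    # .github/workflows/ci.yml (simplified sketch)
    on:
      push:
        branches: [master]
      pull_request:

    jobs:
      build-test-analyze:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
            with:
              fetch-depth: 0  # full history so SonarQube can compute New Code accurately
          # ... build and test steps ...
          - name: run-sonarqube-analysis
            run: sonar-scanner ...  # arguments elided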

The Issue
The pull request application/bot adds a failing check to the commit on the master branch in GitHub when the Quality Gate fails. This results in a red “X” next to the commit in the commit history, which has caused some confusion (panic) among our developers, especially when the Pull Request itself was all-green and seemed fine. Is there a way to prevent the failing check from being added?

Additional
I should note that even when a Pull Request fails the Quality Gate, no failing check is added to the PR (which is the desired behavior in our case).
Additionally, the failing check doesn’t trigger any alerts or break any workflows. As far as I can tell, the only way to find the failed check is to view the commit history at github.com///commits/master.

The thing that keeps triggering the failure is the New Code metrics (specifically, coverage on new code) for the last 30 days of commits. My understanding is that the same threshold is used for both the PR status (which we still want, even if we don’t strictly enforce it) and the master branch, so changing the threshold for master would affect the PR threshold as well. So I don’t think I can just configure away the Quality Gate until the master branch passes, right?

Hi,

Can you share your full pipeline? From what I’m reading, it looks like you have analysis enabled for both push and pull_request, with the Quality Gate status enabled only for push, and that’s what you want to remove…?

Yes, the same Quality Gate is applied to both PRs and branches, although conditions on overall code aren’t applied to PRs.

 
HTH,
Ann

Hi Ann, I don’t think I can share the full pipeline, but you are correct that we have an analysis that runs on both push and on pull_request. The Quality Gate status is apparently enabled for push, but we don’t want that enabled right now. Is that something that can be enabled/disabled, or is it only ever enabled? To be clear, we want the analysis to run and be available via our SonarQube server, but don’t want SonarQube to add the failing checks to GitHub.
Thanks,
Ben

Hi Ben,

It’s simply a matter of editing your pipeline to remove or comment out the check.

 
HTH,
Ann

Hi Ann,
Is “remove or comment out the check” the same as running or not running the scan via sonar-scanner? Or is there something like a -Dsonar.checks.qualityGate=False option that can be set/unset? Because if we don’t run the scan on push to master, then we would also lose the analysis reports for the master branch on the SonarQube web server, right?
Thanks,
Ben

Hi Ben,

Take a look at your pipeline. You should see that analysis and Quality Gate status are two separate entries.

 
HTH,
Ann

Hi Ann,
What do you mean by “pipeline”? Do you mean the GitHub Actions pipeline or is there another pipeline somewhere that I’m unaware of?
Our GitHub Actions pipeline has a final step of “run-sonarqube-analysis”, which executes the sonar-scanner ... command with a number of different arguments, none of which appear to be related to the Quality Gate. And as far as I can tell we’re not making any explicit calls to a Quality Gate status step.
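For reference, the step looks roughly like this (a sketch; the project key, host URL, and source path are placeholders for our redacted values):

    - name: run-sonarqube-analysis
      env:
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
      run: |
        sonar-scanner \
          -Dsonar.projectKey=our-project \
          -Dsonar.host.url=https://sonarqube.example.com \
          -Dsonar.sources=src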
I don’t know if it’s relevant, but the failure doesn’t show up as part of our normal GitHub Actions workflow/pipeline. It gets added to the pushed commit’s check list separately, as a step called “SonarQube Code Analysis” that runs under a workflow(*) sharing a name with the decorator application/bot. So I have no idea where this step/job/workflow is coming from. As far as I can tell it isn’t running on GitHub, so does it get run on the SonarQube server and then added as an external check via some method I’m not familiar with?

(*) I call it a “workflow” because it’s listed in the same pane as our specified workflow (the one which runs our tests, the SonarQube analysis, etc.), but I’m not sure if that’s how it’s being run or what it actually is. Our specified workflow includes an on: push subtitle, while the workflow named after our decorator application/bot has no subtitle at all.

Thanks,
Ben

Hi,

Sorry, in GHActions it’s a workflow. In other places it’s called a pipeline.

If you’re seeing a :x: in GitHub… the Quality Gate check is happening in GitHub.

You’re looking for something like this:

    # Check the Quality Gate status.
    - name: SonarQube Quality Gate check
      id: sonarqube-quality-gate-check
      uses: sonarsource/sonarqube-quality-gate-action@master
      # Force to fail step after specific time.
      timeout-minutes: 5
      env:
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }} # OPTIONAL

From here and/or here

 
HTH,
Ann

Hi Ann,
A few things to follow-up:

  1. If the call was happening in GitHub, wouldn’t that cause a failure in the workflow/pipeline from which the analysis was called? Our ‘failure’ is a failed check that happens outside of the workflow that calls the analysis.
  2. Related to the above and to your previous comment, I grepped for “sonarqube-quality-gate-check” in our repository and it returns no matches. This leads me to believe that this is getting called either by the SonarQube web instance or by the sonar-scanner under the hood.
  3. The failing check happens under a “workflow” called “SonarQube Pull Request Decorator”. This is the name of our decoration application/bot. The only “step” in this “workflow” is called “SonarQube Code Analysis”. Neither of these strings is found by grep either.
  4. The “SonarQube Code Analysis” step (the one with the failing check) does not appear anywhere under the “All workflows” section at github.com///actions. So I again can’t find where this workflow is defined/called/run.
  5. In the SonarQube web instance’s Compute Engine logs, I see the following:

    ... INFO  ce[][o.s.c.t.CeWorkerImpl] Execute task | project=... | type=REPORT | branch=master | branchType=BRANCH | ...
    ...
    ... INFO  ce[...][o.s.c.t.p.a.p.PostProjectAnalysisTasksExecutor] Pull Request decoration | status=SUCCESS | time=0ms
    ... INFO  ce[...][o.s.c.t.p.a.p.PostProjectAnalysisTasksExecutor] Report branch Quality Gate status to devops platforms | status=SUCCESS | time=2441ms
    ... INFO  ce[...][o.s.c.t.CeWorkerImpl] Executed task | project=... | type=REPORT | branch=master | branchType=BRANCH | ... | status=SUCCESS | time=31330ms

I notice that for Pull Request branches the “Pull Request decoration” step has a non-zero time and the “Report branch Quality Gate status to devops platforms” step has a time of 0ms. For runs on the master branch it’s reversed: the decoration time is 0ms and the devops report time is non-zero. I don’t know if this is relevant to the failures/decorations we’re seeing.
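For what it’s worth, the failing check is also visible when querying GitHub’s Checks API directly, and it’s attributed to a GitHub App rather than to any of our Actions workflows, which fits my guess in point 2 above. A sketch using the gh CLI (OWNER/REPO/SHA stand in for our redacted values):

    # List the check runs attached to a commit on master; the SonarQube
    # check reports an app as its source, not one of our workflows.
    gh api repos/OWNER/REPO/commits/SHA/check-runs \
      --jq '.check_runs[] | {name, app: .app.name, conclusion}'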

Thanks,
Ben

Hi Ben,

How about a screenshot of what you’re seeing?

 
Ann


Hi Ann,
This is an edited screenshot of what I’m seeing.
As you can see, there are 3 workflows showing up for this commit. The first two (names redacted in red) are expected workflows, with definitions in the .github/workflows folder. The second one calls the “run-sonarqube-analysis” step, which as previously discussed runs the SonarQube analysis (but does not use a sonarqube-quality-gate-action).
The third workflow, titled “SonarQube Pull Request Decorator”, does not have a definition in .github/workflows. It also has no workflow history anywhere in the “Actions” tab. The only way we see it is by going to the commit history and following the red X.
The red X only appears on commits that trigger the second workflow (the one with the run-sonarqube-analysis step), so they are clearly related, but it’s not something we can control.

Thank you,
Ben

Hi Ben,

Thanks for the screenshot and for your patience.

So what you’re objecting to is the Quality Gate decoration, which is a core feature of the integration.

And the reason you object is that it panics the developers and that you don’t strictly enforce the conditions that are failing, even if you do want them checked.

Based on that understanding, I’m pretty confident in saying we’re probably not going to change anything related to this in the direction you would want it to go.

I’m going to nonetheless refer it to the Product Managers.

 
Ann

Hi Ann,
That seems largely correct, with two follow-ups:

  • I guess part of our objection is that the commit gets marked as broken even when everything in the commit itself works fine. It also seems like an odd ordering for a rollout. Since we’re introducing SonarQube, the natural order of operations would be to limit things to the per-commit/PR level first (e.g. start with PR decoration, then move to PR decoration with Quality Gate checks that enforce each PR), then move to whole-branch new code analysis and enforcement, and finally to overall code enforcement. Jumping straight to failing the branch build without first failing the PR seems counterintuitive.
  • Is this feature documented anywhere?

Thanks,
Ben

Hi Ben,

Ah, but according to your own standards (the Quality Gate) it’s not fine. That’s why we mark it broken. :woman_shrugging:

Yeah, this is a fair point, and to me it’s the real question: why didn’t the PR fail its Quality Gate? Causes I can think of off-hand:

  • coverage isn’t being reported at all on PRs (IIRC, if you fail to pass in any coverage reports, we figure you just don’t care & don’t sound the alarm about “null < 80% Coverage on New Code”). If that’s the cause, see the sketch after this list.
  • it’s not a failure in a single PR, but a cumulative one. The math on that is a little head-scratching, but just wait…
  • by default, we don’t enforce coverage and duplications conditions on new code when fewer than 20 lines are changed. So the PR limbo-ed under the wire, but the branch was too big. Add up multiple small PRs and you can end up with a big hit to the target branch. BTW, there’s a global setting to turn this off for the instance.
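If the first cause is in play, the fix is simply to generate a coverage report in the PR job and point the analysis at it. A minimal sketch, assuming a Python project using coverage.py (the exact property name depends on your language and coverage tool):

    # Produce an XML coverage report, then hand it to the scanner so PR
    # analyses get a real "Coverage on New Code" value instead of null.
    coverage xml -o coverage.xml
    sonar-scanner -Dsonar.python.coverage.reportPaths=coverage.xml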

… you mean the Quality Gate decoration? Yeah. We don’t just document it, we brag about it:

 
Ann

Hi Ann,
Firstly, it sounds like this is desired behavior, that there’s no way to disable it (short of disabling the application, which also does the pull request decoration), and that this won’t change in the foreseeable future. (And there’s no way to have one Quality Gate for PRs and a separate one for the branch analysis, correct?) In that case, I think you’ve answered the initial question/issue, and we can proceed on our end. Thank you for your help and patience on this topic.

I do however want to add to the ongoing discussion here, just for whatever additional context it might provide.
On the topic of marking the build as broken, I think it’s important to note that not everyone would define “broken” that way. My company and the project have been around for 10+ years. For all that time, a red X on a commit has meant “there is an issue with the current commit that requires immediate attention”. Maybe that’s a meaning that should change internally, but:
1) I didn’t know that SonarQube would add the failing check. I had seen the section of the documentation that you highlighted, but incorrectly assumed that was referring to the sonarqube-quality-gate-action. I still can’t find anything specific that describes the failing check behavior.
2) Because I didn’t realize SonarQube would add a failing check to the master branch, we did not have a discussion about the meaning of the failing check.
This is all relevant because I’m pushing for SonarQube adoption, but I don’t have absolute authority within my organization. It’s a lot easier to get others on board (both for deeper integration and for adoption into more projects) if SonarQube is “that tool that helps me with ______” and not “that thing that always marks the build as broken”. Additionally, the people most likely to be entrenched and resistant to new tooling are the ones who have been on a project the longest: the most senior and influential. So I’m trying to ease people into it, focusing on Pull Requests (the individual level) rather than whole-branch issues (the cross-team level) that are out of scope. That way people build familiarity and positive experiences, and I can leverage the positive sentiment into more forceful integrations.

Anyways, thanks again for your help clarifying the issue, and I’ll mark this topic as resolved.
Ben


Hi Ben,

Unfortunately, there’s no way to decouple the behavior.

I appreciate your delicate position, having been the SonarQube pioneer and advocate in my previous company, and I would love to help you find a way to make this work. Since the sticking point is the failure on the main branch, can you create a new thread where we can explore that? I still feel that’s the crux.

 
Ann
