Do you fail your build / pipeline when the Quality Gate fails?

Is a failing Quality Gate a stop-the-world event for you or just another day at the office? Do you fail your build / pipeline when the Quality Gate fails, or let it go all the way through? And why?

5 Likes

Not yet, as we’re still testing our Quality Gate. We allow the Quality Gate to fail on branches, but we require pipelines in our CI/CD to pass before an MR can be merged.

Once we resolve any existing bugs, we will be blocking merges between our environments based on a scan of the main branch.

2 Likes

We will always deploy to DEV regardless of the result. However, if the Quality Gate fails, we prevent the code from moving up the environment stack into the higher environments. There are exceptions to this rule for some applications, though.
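
For illustration, a minimal shell sketch of how such a promotion check might look, assuming a shell-based pipeline step and SonarQube’s standard api/qualitygates/project_status Web API; SONAR_HOST, SONAR_TOKEN and PROJECT_KEY are placeholders for your own setup:

```
# Query the Quality Gate status of the main branch and block promotion if it is not OK.
STATUS=$(curl -s -u "${SONAR_TOKEN}:" \
  "${SONAR_HOST}/api/qualitygates/project_status?projectKey=${PROJECT_KEY}&branch=main" \
  | jq -r '.projectStatus.status')

if [ "$STATUS" != "OK" ]; then
  echo "Quality Gate status is ${STATUS} - blocking promotion to higher environments"
  exit 1
fi
```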

2 Likes

No, most of our projects do not fail the build for Quality Gate failures. Instead, we use the GitHub integration to decorate the Pull Request with Quality Gate failures. This allows us to separate build failures from quality failures, and the project admin can see and override the quality failures if necessary. For example, we had a PR that was 99% documentation and failed our Quality Gate on test coverage. The human admin reviewed it, deemed the quality failure acceptable, and merged the code. Keeping the failures separate speeds up investigation because it narrows the scope of what needs to be looked at.

3 Likes

Yes, and with immediate email feedback to the developer who committed the code change that triggered the build (through Azure DevOps). This ensures that they can fix any quality issues right away while the relevant code is still fresh in their mind, and prevents “ah, we’ll fix that later” issues from creeping through.

If a build doesn’t pass the Quality Gate I’ll barely even review the PR - it won’t be merged anyway and I don’t consider the code as delivered until the QG shows green. Once that’s the case I’ll review the PR and any other issues that SonarQube may show, but that are too trivial to trigger a QG failure. Most of the time I’ll ask the developer to fix those issues as well, but I may choose to move a PR ahead instead.

While this was painful at first, it’s led to some really clean code and measurably fewer bugs over time. Vulnerability scans also come back much cleaner.

4 Likes

Our pipeline is “build - sonar - ui tests - release”. If the Quality Gate fails we mark the build as unstable, and the same goes for test failures. If the build is unstable we fail at the end and never release, no exceptions. Otherwise the “we’ll fix this later” mindset kicks in, which is bad for everyone.

3 Likes

Failing the build on Sonar failure means waiting for Sonar itself to analyse the results it has been sent - which can add 10 minutes to a build of a large project in our experience.
This is not really desirable, so we send the analysis to Sonar and continue with other build steps such as creating packages or producing other reports.
We would then like to rely on GitHub blocking the merge if the Sonar analysis fails - which minimises the impact of Sonar on the build time and allows us to override if we really want to merge anyway.

Unfortunately this does not work in a mono repo, because we only build the application(s) that have changed but would have to wait on all applications reporting their Sonar analysis results… which some of them, of course, never will, since they weren’t triggered for that PR.
So currently in our mono repo we cannot prevent merging on Sonar failure and would have to take the hit on the additional build time to wait on Sonar to report the pass/fail status to the build itself.

1 Like

Is there documentation on how to fail/not fail the build on a failed Quality Gate? I can’t find how to do this for the different pipelines (GitLab, BitBucket, AWS, Azure…)

Here: CI integration overview (sonarqube.org)

In our case we are using TeamCity for CI, so we pass sonar.qualitygate.wait=true on the command line.
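
For anyone looking for a concrete example, here is a minimal sketch of a shell-based scanner invocation with that parameter; the timeout (in seconds) is just an illustrative value:

```
# Run the analysis and make the scanner poll the server for the Quality Gate result;
# the step exits with a non-zero code if the gate is red or the timeout is reached.
sonar-scanner \
  -Dsonar.qualitygate.wait=true \
  -Dsonar.qualitygate.timeout=300
```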

2 Likes

As a DevOps coach, one of my duties is to review application delivery processes for teams that want ZERO manual actions, including the creation of production change tickets, automated go-live… You name it.

One of my golden rules is: applications must MANDATORILY have a blocking Sonar scan on the critical path to production.
I usually suggest adapting the Quality Gate depending on the criticality / complexity of the application, the maturity of the dev team… However, the presence of a blocking Quality Gate is essential to prevent the quality of the code from tanking without anybody knowing it.

I explain to the teams that don’t yet have this blocking process that this is for the safety of their users as much as it is for themselves and the health of their application. Maintaining a consistent level of quality ensures a consistent behaviour in production.

A blocking Quality Gate is actually a friend!

5 Likes

Absolutely, we fail our build/pipeline when the Quality Gate fails. I remember a project where we didn’t enforce this initially. One day, a minor issue slipped through, causing major bugs in production. After that, we decided to always fail the build if the Quality Gate fails. It was a turning point that improved our code quality and saved us from future headaches.

3 Likes