Pull Requests and "Conditions on Overall Code"

Using SonarQube 8.6 EE
PR Decoration configured and working, using GitHub.

I have a small collection of “commons” maven projects that are used as dependencies in our 80 or so “production” projects. Thus, I take the view that a commons vulnerability or security hotspot can have a big impact downstream and have adjusted the quality gates used by these specific projects to add “Conditions on overall code”.

The result is that I see builds of main and branches failing the Quality Gate. However, the Quality Gate is not failing on the PR itself:

The problem is that the PR decoration is so pretty that the developers only look at things in GitHub and address the specific items that are failing there. That’s great in one respect… the end result is that PRs get all their code smells fixed even when the QG is passing. But it means that the devs are not seeing in GitHub that there is an underlying problem that must be looked at.

Is there anything that can be done about this? Is this a shortfall in PR decoration implementation, or have I maybe done something wrong?

Hi Mark,

As you can see in your screenshot, the Overall Code conditions in the quality gate apply to BRANCHES only, not to pull requests.

You need to duplicate the same conditions in the ‘New Code’ section to have them apply to pull requests as well.
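For illustration, a condition can also be duplicated via the SonarQube Web API (`api/qualitygates/create_condition`) rather than through the UI. The sketch below is hypothetical: the server URL and gate id are placeholders, parameter names can vary between SonarQube versions (check your instance’s Web API docs), and the command is only printed as a dry run so you can review it before executing it.

```shell
# Hypothetical sketch: adding a "Vulnerabilities on New Code" condition to an
# existing quality gate via the SonarQube Web API, so the same check applied
# to Overall Code also applies to pull requests.
# SONAR_URL and GATE_ID are assumptions, not values from this thread.
SONAR_URL="https://sonarqube.example.com"   # assumption: your server URL
GATE_ID="42"                                # assumption: your quality gate id

# Build the command and print it (dry run) instead of executing it.
DRY_RUN_CMD="curl -u \$SONAR_TOKEN: -X POST \
  ${SONAR_URL}/api/qualitygates/create_condition \
  -d gateId=${GATE_ID} \
  -d metric=new_vulnerabilities \
  -d op=GT \
  -d error=0"
echo "$DRY_RUN_CMD"
```

Dropping the `echo` (and supplying a real token in `SONAR_TOKEN`) would actually create the condition; verify the metric key and parameters against your server’s built-in Web API documentation first.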

Hi Mark,

To add to the reply from @Rouke.Broersma.IS: by definition, pull requests are only checked against the conditions on new code.

Carine

Hi @Carine_Bayon

Is it possible to set up so that pull requests use a quality gate on overall code?

Regards,
Tim

Hi @timothyconnolly5
I’m not sure I get your point here. Pull requests are new code by definition; why would you use a quality gate on Overall Code for PR decoration?

Carine

Hi @Carine_Bayon

I suppose it’s a question about the flexibility of the SonarQube product: why strictly allow only a New Code quality gate on pull requests?

An Overall Code quality gate could be useful for legacy applications moving to newer repository or CI/CD tooling.

I also find that a New Code quality gate on PRs seems to visually hide problems, as @msymons describes: the quality gate passes with all As, but underlying issues that have not yet been resolved still exist.

Regards,
Tim


Hello @Carine_Bayon,

I think a PR has an impact on the entire code and could use some “Conditions on Overall Code”.

For example, if some new code makes unit tests fail, that should be reported in the quality gate of the PR.
One could also require 80% code coverage on Overall Code, not just New Code.

Kind Regards,
Brice

Hi Brice,

Welcome to the community!

Our take is that if the unit tests fail, the pipeline / job shouldn’t proceed to analysis; the build should fail immediately.

Would you mind explaining your use case? Our take is that only enforcing e.g. 80% Coverage on New Code allows everyone to meet the same high standard: good coverage on new code, whether they’re in a greenfield project (which probably started with a good coverage level) or in legacy code whose coverage may be in the single digits.

 
Ann

Hello Ann,

Thanks for the greetings.

I’m currently working on a big legacy monolith whose parts are being extracted. I say unit tests, but they are mixed with integration tests.

We have long pipelines (~1 h), and a developer usually comes back to a PR after a few hours or days, so I am experimenting with having everything in the same place.
GitLab lets me display the test results in the Merge Request, but it is too discreet.
This is why I think adding it to the quality gate / Sonar approval would be a big help.

As for the coverage, we are implementing tests on the parts that are stable and likely to stay that way for some time. On the other hand, we have some black swamps that still need fixing, or that we can extract and rewrite as new microservices. I treat the coverage of this monolith as a general indicator of its health, rather than antagonizing other developers into spending 90% of their time writing tests just because they had to correct a few lines somewhere in the swamp.
(Currently at 17% coverage, compared to 2% last year and none two years ago, before I joined the company :tada:)

Best Regards,
Brice
