Failed quality gate historical report

Must-share information (formatted with Markdown):

  • which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension)
    SonarQube Version 8.3.1 (build 34397)
  • what are you trying to achieve
    Generate a report for evaluating the evolution of code quality, based on the number of failed quality gates per release
  • what have you tried so far to achieve this
    Through the Web API, I have tried several endpoints (e.g., api/qualitygates/project_status, api/measures/search_history, api/issues/search, etc.), roughly along the lines of the sketch below
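For reference, this is roughly how I have been calling those endpoints so far (the server URL, token, project key and pull request id below are placeholders):

```python
import requests

# Placeholder values: adjust to your own server, token, project key and PR id.
SONARQUBE_URL = "https://sonarqube.example.com"
TOKEN = "squ_xxxxxxxxxxxx"  # user token, sent as the basic-auth username

def quality_gate_status(project_key: str, pull_request: str) -> str:
    """Return the current Quality Gate status (OK / ERROR) for one pull request."""
    response = requests.get(
        f"{SONARQUBE_URL}/api/qualitygates/project_status",
        params={"projectKey": project_key, "pullRequest": pull_request},
        auth=(TOKEN, ""),  # SonarQube tokens go in the username field, password stays empty
    )
    response.raise_for_status()
    return response.json()["projectStatus"]["status"]

print(quality_gate_status("SOME_PROJECT_KEY", "42"))
```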

Hi!,

I’m trying to pull a report of the number of failed quality gates over a period of time, across all pull requests, to get an indicator of how the quality of the code our developers produce is evolving.

I have gone through the above options in the Web API, but I haven’t come across anything that suits my needs. Perhaps something along the lines of (I know this doesn’t work):
api/qualitygates/search?projectKey=SOME_PROJECT_KEY&status=failed&from=2020-07-01&to=2020-07-29

Is there any way to pull information of that sort in any way?
Thanks in advance.

Regards,
JP

Hi JP,

You’re having a hard time finding this data because SonarQube wasn’t built to support this approach. In our view, what’s important is the current state and Releasability of the code, not the journey to get there.

In fact, if you measure developers on how many times the Quality Gate failed before merge, you might just discourage them from committing (and thus having analysis run on the PR), when really everyone should be committing early and often. And in the end, all that matters is that the PR Quality Gate is green when you finally do merge.

Part of the point of SonarQube is learning to be a better coder - that’s one reason we put so much effort into writing good rule descriptions - so we haven’t built anything around measuring the journey to get there.

HTH,
Ann

Hi Ann,

A pity. This metric would be used to evaluate whether our training scheme is useful, or whether we need to change it. It is not intended to evaluate or blame developers individually. That is why it is historical over a period of time: we are interested in evaluating the impact of our policies and efforts on the quality of the code.

Is there any alternative you could think of that would help us in that regard? Alternatively, we could consider the number of identified issues (perhaps focusing on the ones above some criticality), but I’m not totally confident about that one.

Thanks,
JP

Hi JP,

One thing I forgot to mention in my initial response was housekeeping: analyses without events are cleaned out of the database on a regular basis, which is going to skew any attempt at historical reconstruction.

If you’re looking for long-term trends, then the activity graphs might help you. Note that they’re going to be subject to the same housekeeping limitations I already mentioned, so over time the lines will tend to smooth out but should still have the same general shape.

What we recommend is that Quality Gates use ‘on New Code’ values. You won’t be able to graph those values because we don’t store old ‘on New Code’ measures, only overall measures. But by making sure your New Code is clean, you will gradually increase the overall quality. Details on the philosophy here.
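If it helps, the overall (main-branch) measure history that does get stored can be pulled with api/measures/search_history. Here is a minimal sketch, assuming a placeholder server URL and token, and keeping in mind that housekeeping will thin the results out over time:

```python
import requests

SONARQUBE_URL = "https://sonarqube.example.com"  # placeholder
TOKEN = "squ_xxxxxxxxxxxx"                       # placeholder user token

def measure_history(project_key: str, metrics: str, date_from: str, date_to: str) -> list:
    """Fetch the stored history of overall measures, e.g. alert_status (the Quality Gate status)."""
    response = requests.get(
        f"{SONARQUBE_URL}/api/measures/search_history",
        params={
            "component": project_key,
            "metrics": metrics,    # comma-separated metric keys
            "from": date_from,     # e.g. "2020-07-01"
            "to": date_to,         # e.g. "2020-07-29"
            "ps": 500,             # page size; page with "p" if more analyses survive housekeeping
        },
        auth=(TOKEN, ""),
    )
    response.raise_for_status()
    return response.json()["measures"]

for measure in measure_history("SOME_PROJECT_KEY", "alert_status,bugs", "2020-07-01", "2020-07-29"):
    print(measure["metric"], [(h["date"], h.get("value")) for h in measure["history"]])
```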

Maybe this will help.

Ann

Thanks, Ann, for your tips. We can periodically pull the metrics ourselves and keep our own record, then.
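Something along these lines, run on a schedule (cron or similar), is what I have in mind; the server URL, token, project list and file path are placeholders I’d adapt:

```python
import csv
import datetime

import requests

SONARQUBE_URL = "https://sonarqube.example.com"   # placeholder
TOKEN = "squ_xxxxxxxxxxxx"                        # placeholder user token
PROJECT_KEYS = ["SOME_PROJECT_KEY"]               # projects to track
RECORD_FILE = "quality_gate_history.csv"          # local record: one row per project per run

def current_status(project_key: str) -> str:
    """Current Quality Gate status (OK / ERROR) of the project's main branch."""
    response = requests.get(
        f"{SONARQUBE_URL}/api/qualitygates/project_status",
        params={"projectKey": project_key},
        auth=(TOKEN, ""),
    )
    response.raise_for_status()
    return response.json()["projectStatus"]["status"]

# Run on a schedule (e.g. nightly) so the history survives SonarQube housekeeping.
with open(RECORD_FILE, "a", newline="") as f:
    writer = csv.writer(f)
    timestamp = datetime.datetime.now().isoformat(timespec="seconds")
    for key in PROJECT_KEYS:
        writer.writerow([timestamp, key, current_status(key)])
```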

Appreciate your help once more.