SonarCloud Reporting Data

We are using SonarCloud in our organization and want to show management the impact it has on developers and code quality. Specifically, we want to answer questions like:
• How many issues did Sonar raise?
• How many of those issues were fixed by developers (especially during PRs before merge)?
• How do we demonstrate that Sonar is actively helping developers improve their code, not just reporting numbers?

We are planning to use Power BI for reporting and would like to know how other companies approach this.

👉 How are other organizations measuring and reporting the impact of Sonar?
• Do you track issues raised vs issues fixed?
• Do you capture how developers respond to Sonar’s PR feedback?
• Do you connect Sonar activity to higher-level business outcomes (fewer bugs, better releases, etc.)?

Any guidance, examples, or best practices would be greatly appreciated.

Thanks!

@G Ann Campbell @Wouter Admiraal @Olivier Korach, can you please help us here?

Hi,

Welcome to the community!

I invite you to familiarize yourself with the FAQ, and in particular this section:

I created a topic, when can I expect a response?

This is an open community with people volunteering their free time to provide assistance. We’re eager to contribute to the community, but you are not guaranteed a fast response.

Be patient

  • Wait a few days before bumping a topic that hasn’t received a response.
  • Do not @name mention individuals not involved in the topic.

The waiting part you’ve done, and bumping this thread makes perfect sense. But you’ve @-ed people not already involved in your thread. And worse, you’ve @-ed only SonarSourcers, while your question seems targeted at non-SonarSourcers (emphasis mine):

  How are *other organizations* measuring and reporting the impact of Sonar?

It’s unclear to me what you expect in this context or what answers you expect SonarSourcers to give. We cannot answer for other companies. But please do not @ people not already involved in your thread.

 
Thx,
Ann

Hi Ann,

Thanks for the clarification — that helps, and I understand now why tagging staff directly isn’t the right approach here.

To reframe our question more clearly for the community:

We’re trying to understand how to capture issues that are raised in a Pull Request and then fixed before the PR is merged.
We already use APIs like /api/measures/component and /api/qualitygates/project_status, but these don’t seem to expose that “fixed within PR” view.
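For context, this is roughly the kind of call we make today (a minimal sketch; the token, project key, and PR number are placeholders):

```python
import requests

SONAR_URL = "https://sonarcloud.io"
TOKEN = "..."                   # user token (placeholder)
PROJECT = "my-org_my-project"   # hypothetical project key

# Quality gate status for a specific pull request
resp = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"projectKey": PROJECT, "pullRequest": "123"},
    auth=(TOKEN, ""),           # token goes in the username field
)
resp.raise_for_status()
status = resp.json()["projectStatus"]
print(status["status"])         # "OK" or "ERROR"
for cond in status.get("conditions", []):
    print(cond["metricKey"], cond["status"], cond.get("actualValue"))
```

This gives us the pass/fail per gate condition for the PR, but nothing about issues that were raised and then fixed before merge.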

Is there a recommended API, approach, or ETL method for this? Or has anyone here been able to report on “fixed in PR” issues successfully?

Thanks in advance for any guidance!

Hi @Dinesh_Deva

We sort of have a similar need - we want to capture the issues that caused a PR or branch analysis to fail. Sometimes engineers will hit a problem with an analysis, report it to us, but then find a workaround and fix the problem themselves.

By the time we look at it, there has been a new analysis, and Sonar does not provide a way to “look back” at historic analyses to see what the problems were, so we had to come up with our own workaround, which is this:

If an analysis fails, capture all of the new issues and all hotspots that contributed to the failure.

We use the qualitygates/project_status endpoint to determine which gate conditions have failed, and then retrieve what we want from the issues/list and hotspots/list endpoints, depending on which conditions failed.

We use the list rather than the search endpoints because the search endpoints might fail with a 503 if issue indexing is in progress, whereas the list ones will not. The downside is they do not have as rich a feature set for filtering, but they are good enough for our needs.
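Sketching that flow in Python (the token, project key, and the project/pullRequest parameters on the list endpoints are placeholders/assumptions on my part - check them against your server’s Web API docs):

```python
import requests

SONAR_URL = "https://sonarcloud.io"
TOKEN = "..."                   # token (placeholder)
PROJECT = "my-org_my-project"   # hypothetical project key

def get(path, **params):
    """Authenticated GET against the Sonar Web API."""
    resp = requests.get(f"{SONAR_URL}/api/{path}",
                        params=params, auth=(TOKEN, ""))
    resp.raise_for_status()
    return resp.json()

def capture_failed_analysis(pull_request):
    """Snapshot what made a PR's quality gate fail, while the data
    still exists (a later analysis will overwrite this view)."""
    gate = get("qualitygates/project_status",
               projectKey=PROJECT, pullRequest=pull_request)["projectStatus"]
    if gate["status"] != "ERROR":
        return None  # gate passed, nothing to capture
    failed = [c for c in gate["conditions"] if c["status"] == "ERROR"]
    # NOTE: the project/pullRequest parameters on the list endpoints
    # are assumptions - confirm against your server's Web API docs.
    issues = get("issues/list", project=PROJECT, pullRequest=pull_request)
    hotspots = get("hotspots/list", project=PROJECT, pullRequest=pull_request)
    return {"failed_conditions": failed,
            "issues": issues.get("issues", []),
            "hotspots": hotspots.get("hotspots", [])}
```

We call this from the build right after the analysis completes and persist the result wherever our ETL can pick it up.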

You may be able to get what you need, e.g. from issues/list with inNewPeriod=true and resolved=true - I believe that should give you all issues that were created and resolved in the new code period, which is essentially “the PR” for PR analyses. I have not tested this though, so you would need to check whether my reasoning is sound.

Or if you want all resolved issues, new or not (perhaps a better indicator of improving quality), then remove inNewPeriod=true and keep just the resolved=true filter.
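Untested, as I said, but the query would look something like this (inNewPeriod and resolved are the parameter names as I described them above, and the pullRequest parameter is an assumption - verify both against the issues/list docs):

```python
import requests

SONAR_URL = "https://sonarcloud.io"
TOKEN = "..."                   # token (placeholder)
PROJECT = "my-org_my-project"   # hypothetical project key

# Issues both raised and resolved within the PR's new code period.
# Drop inNewPeriod to count everything resolved in the PR, new or not.
resp = requests.get(
    f"{SONAR_URL}/api/issues/list",
    params={"project": PROJECT, "pullRequest": "123",
            "inNewPeriod": "true", "resolved": "true"},
    auth=(TOKEN, ""),
)
resp.raise_for_status()
fixed_in_pr = resp.json().get("issues", [])
print(f"{len(fixed_in_pr)} issues raised and fixed in this PR")
```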

One limitation of this approach is that you need to do it immediately after the analysis ends, because it can only capture the issues at a point in time, so your builds will need to wait for the analysis to complete rather than letting it run in the background.

Finally, though, consider whether this is giving you any real value at all - you want to report only on the issues that were raised and fixed in a PR. I could “game” the system by deliberately introducing loads of issues in my PR and then fixing them - yay for me, I caused and fixed 100 issues in my PR, aren’t I great, and isn’t Sonar adding lots of value? What does that even mean - that I am a terrible coder, or good for me for fixing them? If the number goes down over time and in my next PR I only cause and fix 50 issues, have I improved as an engineer? Or is the PR simply smaller than the last one? Maybe it is just less complex?

Why not simply show management the activity graphs, which show the number of issues going down over time as existing issues are resolved, and show that the number of issues is not continuing to increase? You may then be able to tie that in with a decrease in build (or test) failures, or in hotfixes, as the devs produce better-quality code.

That way you can show the trend over time towards quality…

Anyway, just a few ideas - hth.
