Can't seem to score less than an A in a PHP Application

  • ALM used - GitHub
  • Languages of the repository - PHP (Symfony)

I recently started evaluating SonarCloud to see if it would satisfy my organization’s request for a tool that scans for various code quality metrics when a PR is opened. I love the way the flow works, but for the life of me I can’t seem to put up a PR against the very simple repo I spun up just for this and get SonarCloud to actually give me a failing score.

I’ve created a custom Quality Profile based on “Sonar way”, adding one single Code Smell rule on top of the defaults for my project: “PHP parser failure”. The default severity assigned to this rule is “Major”, and I went ahead and increased it to “Blocker” for testing purposes.

I put up a PR, and SonarCloud scans it and certainly reveals the issue as a Code Smell. However, it still scores me with an “A” and lets me pass. I may be missing some documentation, but how can I flag a rule as being so severe that it doesn’t allow me to proceed? In my opinion, a parsing error should stop things right then and there, and should by no means be considered “passing” or worth a grade of an A.

Thanks very much for any clarification that anyone can provide!

I sort of answered my own question… purposefully triggering some code flagged as a “Bug” rather than a “Code Smell” immediately shut my PR down. Is this just inherent to the way a “Bug” is treated differently from a “Code Smell”? To me this seems pretty arbitrary, since some of the things you can turn on as Code Smells are pretty big issues.


Welcome to the community!

There are a few different things going on here.

First, your “A”:

For maintainability, the rating is determined by the estimated time to fix the code smells versus the size of the code base (converted into hours to write). So one code smell in a project of any size is still going to retain an A rating.
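As a rough sketch of why that happens: the maintainability rating boils down to a technical-debt ratio. The per-line development cost and the letter-grade thresholds below are the defaults as I understand them, not something stated in this thread, so verify them against your own server’s settings:

```python
# Hypothetical sketch of the maintainability rating calculation.
# Assumed defaults: ~30 minutes of development cost per line of code,
# and "Sonar way" debt-ratio thresholds (A <= 5%, B <= 10%,
# C <= 20%, D <= 50%, else E).

def maintainability_rating(remediation_minutes: float,
                           lines_of_code: int,
                           cost_per_line_minutes: float = 30.0) -> str:
    """Map total code-smell remediation effort to a letter rating."""
    development_cost = lines_of_code * cost_per_line_minutes
    debt_ratio = remediation_minutes / development_cost
    if debt_ratio <= 0.05:
        return "A"
    if debt_ratio <= 0.10:
        return "B"
    if debt_ratio <= 0.20:
        return "C"
    if debt_ratio <= 0.50:
        return "D"
    return "E"

# One 10-minute code smell in even a tiny 1,000-line project:
print(maintainability_rating(10, 1000))  # debt ratio 10/30000 ≈ 0.03% -> "A"
```

So a single smell, even a Blocker-severity one, barely moves the ratio on any real code base.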

For reliability (bugs), the rating is based on the severity of the worst issue. It’s far easier to get a non-A rating there.
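That “worst issue wins” behavior is why one deliberate Bug tanked the rating immediately. The severity-to-grade mapping below is my recollection of the defaults, not something documented in this thread:

```python
# Hypothetical sketch: reliability rating from the worst open bug.
# Assumed mapping (verify against your server): no bugs -> A,
# worst is Minor -> B, Major -> C, Critical -> D, Blocker -> E.

SEVERITY_TO_RATING = {"MINOR": "B", "MAJOR": "C",
                      "CRITICAL": "D", "BLOCKER": "E"}
ORDER = ["MINOR", "MAJOR", "CRITICAL", "BLOCKER"]

def reliability_rating(bug_severities: list[str]) -> str:
    """A single worst-severity bug sets the whole rating."""
    if not bug_severities:
        return "A"
    worst = max(bug_severities, key=ORDER.index)
    return SEVERITY_TO_RATING[worst]

print(reliability_rating([]))                    # "A"
print(reliability_rating(["MINOR", "BLOCKER"]))  # "E"
```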

Now for “failing” your PR:

What we’re talking about is failing the Quality Gate, which in turn should cascade to blocking the PR.*

The built-in Quality Gate focuses on new code. When you first start using SonarCloud that’s the one that will be applied by default. But you can certainly craft your own Quality Gates. Hopefully, you’ll retain a focus on new code, but you can also add conditions on overall code.
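Conceptually, a Quality Gate is just a set of pass/fail conditions evaluated against the analysis metrics, and any failed condition fails the gate. The metric names and thresholds in this sketch are illustrative, not SonarCloud’s actual API:

```python
# Hypothetical sketch of quality gate evaluation. Each condition is a
# (metric, operator, threshold) triple over new-code metrics; the gate
# fails if any condition is violated. Names/thresholds are made up.

conditions = [
    ("new_bugs", "<=", 0),
    ("new_blocker_code_smells", "<=", 0),
    ("new_coverage_percent", ">=", 80.0),
]

def gate_passes(metrics: dict, conditions) -> bool:
    """Return True only if every condition holds."""
    ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    return all(ops[op](metrics[name], threshold)
               for name, op, threshold in conditions)

pr_metrics = {"new_bugs": 0,
              "new_blocker_code_smells": 1,  # e.g. the parser-failure smell
              "new_coverage_percent": 95.0}
print(gate_passes(pr_metrics, conditions))  # False: one condition fails
```

Adding a custom condition like the second one is how you could make a Blocker-severity Code Smell on new code fail the gate, rather than relying on the maintainability rating alone.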

Handling parsing errors:

Two of the assumptions analysis proceeds under are:

  • the code being analyzed is compilable for compiled languages, or runnable for interpreted languages
  • (therefore) any parsing errors must be our fault, not the user’s

So under those two assumptions, we’ve provided rules in some languages to help you find and report parsing errors. But they’re intended as “findings” rather than as issues because the assumption is that you haven’t done anything wrong; we have.


* assuming you’ve configured your project to block the merge when a check is failing

Thanks very much for this fantastic explanation, I greatly appreciate it. The assumption of fault is what made this all come together for me, and suddenly it makes a lot more sense. Of course it’s reasonable to assume that linters and many IDE-side tools will catch these things before they ever reach Sonar’s scan; I was just trying to test some extreme cases.
