Code Coverage on a Pull Request/Merge at 0%

Hi,

I’ve just started using SonarCloud, integrated with Bitbucket pipelines.

The pipeline initiates a scan on a push to the master branch and/or a pull request.

I’ve done some test commits to familiarise myself with how SonarCloud works and reports.

In one test commit, I amended 13 lines across 5 files, and then created a pull request for the commit.

This initiated a scan, which failed the quality gate because the Coverage on New Code was 0%.

I don’t understand this! What is “Coverage on New Code”? I read it as how much of the new code (13 line changes in 5 files) the scanner has reviewed. Surely it should scan all of the changes in this pull request?

This is no doubt due to my lack of understanding/experience with what this metric is, but it’s a strange roadblock: the pull request contains clean code (no new bugs, vulnerabilities, etc. in the code changes), so I would expect the scanner to see that the contents of the pull request/merge are clean and thus pass the Quality Gate.

Note: the codebase as a whole (when scanning the whole master branch) fails the quality gate (for code quality problems), but, specifically for this pull request/merge, I’m expecting that to pass, as it itself is clean.

What am I missing / not understanding?

Thanks
Rob

Hello @Rob_Bathgate,

Coverage on New Code is about test coverage.

On pull requests we only report on issues we find in the newly added code. If the Coverage on New Code is 0%, that means there are no tests covering the newly added code. The standard quality gate fails when less than 80% of the new code is covered by tests.

You could change this behaviour by navigating to your organization on SonarCloud and creating your own custom Quality Gate. You can then assign this custom Quality Gate to your project.

Hope that clears it up,
Tom

Thanks for the reply.

Why would there be no tests covering the new code?

Surely all the new code should be scanned, tested and assessed? Isn’t that the point? I need to be sure that the new code in the pull request has been scanned, tested and is “safe”.

I’m obviously still missing something.

Thanks
Rob

The coverage on new code being 0% indicates that the newly added lines were not covered according to the coverage report.

It could be that the importing of the coverage report was not configured correctly. You can find the documentation to do that here.

All new code that is added in a PR is always scanned. The scanner, however, does not run any of your tests. You are responsible for running the tests before invoking the scanner, and for supplying the coverage report to the scanner.
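
For illustration, here is a minimal sketch of what the build/test part of your pipeline step could look like for a PHP project using PHPUnit. The Xdebug version, the coverage.xml path, and the assumption that Composer and PHPUnit are available are mine, not taken from your setup:

script:
  - pecl install xdebug-2.9.8 && docker-php-ext-enable xdebug      # PHPUnit needs a coverage driver such as Xdebug
  - composer install                                               # assumes Composer is available in the image
  - vendor/bin/phpunit --coverage-clover=coverage.xml              # run your unit tests and write a Clover coverage report
  - pipe: sonarsource/sonarcloud-scan:1.0.1
    variables:
      EXTRA_ARGS: '-Dsonar.php.coverage.reportPaths=coverage.xml'  # tell the scanner where to find the report

The important part is that the tests run and produce a coverage report before the sonarcloud-scan pipe is invoked, and that the scanner is told where to find that report.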

If the repository is public I can have a look to check the configuration.

Thanks for the reply, I appreciate your help.

So to clarify, the scanner (which scans all new code) - is this doing assessments for bugs, vulnerabilities and code smells?

Or is that all done by ‘code tests’?

I’m unclear as to the difference between ‘scanning’ and ‘testing’ as to me they sound like they should be one and the same, just different terms.

If the scanner is not running any checks for bugs, vulnerabilities etc (because that’s done by the code tests), what’s the point of the scanner process?

Thanks for the offer to review my config - the repo isn’t public, but I’m happy to share the pipeline, shown below. If this isn’t enough, I can create a test public repo and share that.

image: php:7.1.1               # Choose an image matching your project needs

clone:
  depth: full                  # SonarCloud scanner needs the full history to assign issues properly

options:
  docker: true
  size: 2x

definitions:
  caches:
    sonar: ~/.sonar/cache      # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - composer           # See https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html
          - sonar
        script:
          #- **************************           # Build your project and run
          - pipe: sonarsource/sonarcloud-scan:1.0.1
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.3
  services:
    docker:
      memory: 4096

pipelines:                     # More info here: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
  branches:
    master:
      - step: *build-test-sonarcloud
      - step: *check-quality-gate-sonarcloud
  pull-requests:
    '**':
      - step: *build-test-sonarcloud
      - step: *check-quality-gate-sonarcloud

I sense a misunderstanding about the word “tests”. We mean unit tests, written by you. As you write code, you also write unit tests that execute units of your main implementation and verify that they work correctly, for example that they produce the outputs they should for given inputs. Unit testing libraries can be configured to produce a “coverage report”, which describes the lines of the main implementation files that were executed during the unit test runs. Your SonarCloud analysis can then be configured to import these test coverage reports, to display this very important metric for your project.
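
To make that concrete, here is a tiny, entirely hypothetical example for a PHP project using PHPUnit. The Calculator class and its add() method are invented for illustration; they are not from your code:

<?php

use PHPUnit\Framework\TestCase;

// Hypothetical unit test for an imagined Calculator class in the main implementation.
// Running something like `vendor/bin/phpunit --coverage-clover=coverage.xml` executes
// this test and records which lines of Calculator were exercised in the coverage report.
class CalculatorTest extends TestCase
{
    public function testAddReturnsSumOfTwoIntegers(): void
    {
        $calculator = new Calculator();

        // Verify the unit produces the expected output for given inputs.
        $this->assertSame(5, $calculator->add(2, 3));
    }
}

SonarCloud never runs this test itself; your pipeline runs it, and the analysis only imports the resulting coverage report.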

I hope this clears things up! See the links included in Tom’s posts to learn how to import the coverage reports, if you have them. (If you don’t have them, or don’t know if you have them, you need to look into the documentation of your unit testing framework to find out how to generate coverage reports, and find a format that is supported by SonarCloud.)

Ah ha, thanks!

So, essentially, if I just want to use SonarCloud to scan my code for vulnerabilities, bugs and code smells (and not actually test the running/execution of the code), I can ignore this metric (and set it to 0 in the quality gate)?

Thanks
Rob

If you don’t care about test coverage, then yes, you can do that. But test coverage is extremely important to keep at a reasonable level, to ensure your code works, and as a safety net to verify it still works as intended as you make changes to the implementation.