We have been experiencing a persistent issue with a subset of our merge request pipelines for the past few months. Occasionally, a merge request containing C++ changes will pass its quality gate (we can verify in SonarCloud that it passed), but once the MR is merged, the main branch analysis fails on the exact same code.
We are using the GitLab CI integration. This seems to occur only with C++ code. We have tried everything we could find on this forum, including:
- Setting GIT_DEPTH to 0
- Removing the Sonar cache
- Ensuring that MR pipelines are serialized (so that they cannot be run concurrently)
We are using the SonarScanner CLI, and our CI definition is set exactly as specified here: GitLab integration
When this situation occurs, we can see that Sonar does in fact “see” the failing code in the merge request; for example, we can see the coverage attributes for the code, confirming that it was included in the analysis. Despite this, Sonar does not fail the MR analysis. When merged to main, the analysis immediately fails, and it will continue to fail until we fix the issues with another MR.
Does anyone have any idea what is going on here? It’s very odd that this only happens for C++ code (the repository also contains Java, Python, and TypeScript code, which never experiences this issue). It feels like a bug in Sonar to us, but we are hopeful that someone else has experienced this before and can help out.
I’m going to guess that your C++ developers are working in smaller increments than the other devs. There’s a feature that disables failing the Quality Gate for coverage or duplications when the PR is smaller than 20 lines.
You can turn that off server-wide in Administration → General → Quality Gate (at the bottom of the page).
Thanks for the response. Does that setting apply to SonarCloud as well? I can’t seem to find it; I’ve looked in both the organization settings and the project settings. We’ve seen this on some larger changes as well, so I don’t think this is the root cause, but I’d at least like to give it a try if possible.
It’s definitely issues, and I believe duplications as well. To my knowledge, we have not seen it related to coverage.
Interestingly, I was able to push a tiny (3-line) change that I am sure should fail the gate (it caused main to fail last week), and the analysis did not fail. So at least now I have a consistent way to reproduce the problem. Do you know whether the small-change behavior you originally referenced also applies to SonarCloud?
There’s actually no LOC limit on when issues are raised. Instead, it’s quite possible that what you’re running into is expected behavior with new issues on old code.
In PRs, we only report issues raised on changed lines. It’s a mechanism to reduce FPs and other “noise” in PR reporting. Unfortunately, it has the side effect of suppressing reporting of issues newly raised on untouched code by changes in the PR. For instance, if you delete the only use of a variable, the variable is now unused, but its declaration remains unchanged. That means no issue will be reported. There are other scenarios that can result in new issues in old code, but this is (IMO) the most emblematic.
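Here’s a minimal C++ sketch of that scenario (the function and variable names are invented purely for illustration):

```cpp
#include <cstdio>

// Before the PR: the variable is declared and used.
void log_startup_before() {
    const char* banner = "starting up";  // declaration; this line is never edited
    std::puts(banner);                   // the only use of `banner`
}

// After the PR removes the puts() call, `banner` becomes unused, but its
// declaration line is untouched, so the new "unused variable" issue lands
// on an unchanged line and is not shown in the PR decoration.
void log_startup_after() {
    const char* banner = "starting up";  // unchanged line, newly unused
}
```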
No, unfortunately not. The change I have that reproduces the issue adds the lines that should cause a failure (they’re extraneous case statements in a switch), so this PR should be flagged since it adds the offending code directly.
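For reference, here is a rough sketch of the kind of change I mean (not the actual code; the names are illustrative, and I’m assuming a rule along the lines of “branches should not have identical implementations” is what fires), where the MR itself adds the duplicated case branches:

```cpp
#include <string>

// Hypothetical reproducer: the MR adds the extra `case` labels directly,
// so the offending lines are part of the diff and should be reported.
std::string status_name(int code) {
    switch (code) {
        case 0:
            return "ok";
        case 1:
            return "warning";
        case 2:                  // added by the MR
            return "warning";    // duplicates the `case 1` branch
        case 3:                  // added by the MR
            return "warning";    // duplicates the `case 1` branch
        default:
            return "error";
    }
}
```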
The analysis / scanner log is the output of the analysis command. Hopefully, the log you provide (redacted as necessary) will include that command as well.