Hello SonarQube Community,
We are using SonarQube Data Center 9.9 along with Jenkins to automate our CI/CD pipelines. For confidentiality, we’ll refer to our pipelines as Pipeline A and Pipeline B. While both pipelines fetch the code from the same Git repository, they use different checkout configurations. Recently, we’ve encountered issues with Pipeline B that don’t occur with Pipeline A, and we’re trying to understand whether the way we check out the code might be influencing these issues.
Background:
- Pipeline A (Simple Checkout with `checkout scm`):
  - In Pipeline A, we use the default `checkout scm` step, which is based on Jenkins' GitSCM plugin. This step checks out the code from the configured branch (e.g., `develop`) and does not create a local branch; instead, it checks out the remote-tracking branch (e.g., `origin/develop`).
  - In the Jenkins build summary, the checkout information appears as `origin/develop`.
  - Jenkins checkout info:
    - Checkout step: `checkout scm`
  - Pipeline A has not shown any issues related to SonarQube analysis. The analysis works fine, with no new issues being flagged on old code (e.g., code from 2013).
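For reference, the workspace state that `checkout scm` leaves behind can be reproduced locally. This is a minimal sketch using throwaway repositories; the paths and branch names are placeholders, not our real setup:

```shell
# Sketch: reproduce the workspace state `checkout scm` leaves behind --
# a detached HEAD at the remote-tracking ref, with no local branch created.
set -e
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for the Git server: a repo with a develop branch
git init -q upstream
git -C upstream -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m initial
git -C upstream branch -m develop

# Simulate the Jenkins checkout: clone, then check out the remote-tracking
# ref directly instead of creating a local branch
git clone -q --no-checkout upstream build
cd build
git checkout -q origin/develop    # detached HEAD, as in Pipeline A

git rev-parse --abbrev-ref HEAD   # prints: HEAD (detached)
git branch --show-current         # prints nothing: no local branch exists
```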
- Pipeline B (Complex Checkout with GitSCM Configuration):
  - In Pipeline B, we use a more customized checkout configuration, specifying GitSCM with additional options such as the branch (`*/develop`), credentials, and other extensions. This configuration fetches the code from the remote branch (`origin/develop`), but the branch information in the Jenkins build summary appears as `refs/remotes/origin/develop`.
  - Pipeline B has shown intermittent issues, where new SonarQube issues are flagged on old code (e.g., from 2013), even though there have been no changes to this code or to any SonarQube rules. These issues appear after running Pipeline B and disappear when we run Pipeline A.
  - Jenkins checkout info:
    - Checkout step:

```groovy
checkout scm: [
    $class: 'GitSCM',
    branches: [[name: '*/develop']],
    doGenerateSubmoduleConfigurations: false,
    extensions: [[$class: 'CloneOption', noTags: false, reference: '', timeout: 60]],
    submoduleCfg: [],
    userRemoteConfigs: [[
        credentialsId: "${credId}",
        url: 'confidential_git_link.git'
    ]]
]
```
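One detail worth noting (and easy to verify locally): in Git itself, `origin/develop` and `refs/remotes/origin/develop` are two spellings of the same ref, so both pipelines should end up on the same commit; what differs is only the name Jenkins reports. A quick sketch with a throwaway repository:

```shell
# Sketch: the short and fully-qualified remote-tracking names resolve to the
# same object, so the two pipelines differ only in how the checkout is
# *labelled*, not in which commit is checked out. (Throwaway repo; names
# are placeholders.)
set -e
tmp=$(mktemp -d); cd "$tmp"

git init -q upstream
git -C upstream -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m initial
git -C upstream branch -m develop

git clone -q upstream build
cd build

git rev-parse origin/develop               # same SHA...
git rev-parse refs/remotes/origin/develop  # ...as this
```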
The Problem:
- When Pipeline B runs and completes the SonarQube analysis, new issues are sometimes flagged on old, untouched code (e.g., from 2013). Importantly, no changes were made to these files or to the SonarQube rules.
- After Pipeline B runs, these new issues appear, but when we rerun Pipeline A, the issues disappear, even though there have been no code changes or rule modifications.
We are trying to determine if the differences in how the code is checked out in Pipeline A and Pipeline B could be influencing this behavior, particularly in relation to SonarQube’s analysis process.
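One thing we plan to rule out (an assumption on our side, not something we have confirmed): SonarQube relies on `git blame` data from the workspace to date issues, and an incomplete history (e.g., a shallow clone produced by a checkout extension) is a known way for issue attribution to shift between analyses. The workspace can be checked like this:

```shell
# Check whether a workspace clone is shallow -- shallow history can break the
# SCM blame data SonarQube uses to date issues. In Jenkins this would be run
# inside the build workspace; a throwaway repo is used here to illustrate.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q repo
git -C repo -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m initial
cd repo

git rev-parse --is-shallow-repository   # prints: false (full history present)
```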
Questions:
- Could the difference in how we check out the code (`checkout scm` in Pipeline A vs. an explicit `GitSCM` checkout in Pipeline B) influence SonarQube's analysis, particularly with regard to flagging new issues on old, untouched code?
- Does SonarQube depend on the branch structure or specific reference types (such as remote-tracking vs. local branches) for its analysis, and could this influence how it handles old code that hasn't been touched?
Our Setup:
- SonarQube Version: Data Center 9.9.7 LTS
- CI/CD Tool: Jenkins
- Pipeline A: Uses `checkout scm` (checks out `origin/develop` as a remote-tracking branch).
- Pipeline B: Uses a more customized checkout configuration (checks out `refs/remotes/origin/develop`).
- Both pipelines check out the latest version of the branch, but in different Git "structures".