SonarQube Security Hotspot resolution status is not retained correctly

Used software:
SonarQube Developer Edition 9.9.1 LTS
Sonar-scanner 4.8.0.2856
Jenkins 2.387.3 LTS
SonarQube Scanner for Jenkins plugin 2.15
SonarQube deployed inside Kubernetes with Docker image

Hi Community. We are using SonarQube in our CI infrastructure and have run into some weird behaviour in how Security Hotspot resolution status is retained. Please help us understand the root cause of this SonarQube behaviour.
In the project, the new code definition for branches is set to Reference Branch, and it points to master.
An example of how the issue workflow looks is below:
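For reference, the Reference Branch new code definition can be set per branch through the Web API (POST api/new_code_periods/set); here is a minimal Python sketch of the request parameters (the project key and branch names below are placeholders, not values from our setup):

```python
# Sketch: set a branch's new code definition to "Reference Branch" via the
# SonarQube Web API (POST api/new_code_periods/set). Project key, branch name,
# server URL, and token are placeholders.
from urllib.parse import urlencode

def new_code_reference_branch_params(project: str, branch: str, reference: str) -> dict:
    """Build the form parameters for POST api/new_code_periods/set."""
    return {
        "project": project,
        "branch": branch,
        "type": "REFERENCE_BRANCH",
        "value": reference,
    }

if __name__ == "__main__":
    params = new_code_reference_branch_params("my_project", "feature/Feature_Branch", "master")
    # POST this form body with an analysis token (basic auth, token as username)
    # to https://<sonarqube-host>/api/new_code_periods/set
    print(urlencode(params))
```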

  1. A security hotspot was merged into master from a feature/Previous_Feature_Brach branch.
  2. During SonarQube analysis of the master branch, SonarQube detected the security hotspot “Make sure using this hardcoded IP address “1000::70” is safe here”.
  3. Next, we marked the security hotspot as Safe.
  4. Next, we created a new feature branch feature/Feature_Branch from master.
  5. During the analysis of feature/Feature_Branch, SonarQube detected the security hotspot already marked as Safe, as seen in the activity log below (sensitive data has been changed):
## Recent activity:

* May 4, 2023 at 11:14 AM

The issue has been copied from branch 'master' to branch 'feature/Feature_Branch'


**user1** - April 29, 2023 at 12:29 PM

Resolution changed to SAFE

Status changed to REVIEWED (was TO_REVIEW)

* April 29, 2023 at 1:47 AM

The issue has been copied from branch 'feature/Previous_Feature_Brach' to branch 'master'


**user2@users.noreply.github.com** created Security Hotspot - April 28, 2023 at 1:42 PM
  6. Next, another feature branch feature/Another_Feature_Branch was created from master, but on a different date.
  7. During SonarQube analysis of feature/Another_Feature_Branch, SonarQube detected the same security hotspot “Make sure using this hardcoded IP address “1000::70” is safe here”, but with status TO_REVIEW:
## Recent activity:

**user2@users.noreply.github.com** created Security Hotspot - June 8, 2023 at 10:35 AM
  8. We marked the security hotspot as Safe and merged feature/Another_Feature_Branch into master.
  9. During the analysis of the master branch, SonarQube found the same security hotspot “Make sure using this hardcoded IP address “1000::70” is safe here”, even though we had already set its status to Safe, so we had to change the resolution to Safe again:
## Recent activity:

**user3** - June 10, 2023 at 1:56 PM

Resolution changed to SAFE

Status changed to REVIEWED (was TO_REVIEW)

* June 9, 2023 at 1:47 AM

The issue has been copied from branch 'feature/Another_Feature_Branch' to branch 'master'

**user2@users.noreply.github.com** created Security Hotspot - June 8, 2023 at 10:35 AM

What we expect:

  1. If a security hotspot was marked as Safe, it should retain that status in other branches created from master.
  2. If a security hotspot was already marked as Safe in the master branch, SonarQube shouldn’t raise the same security hotspot again.
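To double-check the expectations above, a script like the following can diff hotspot review states between master and a feature branch. This is only a sketch: it assumes hotspot dicts shaped roughly like the GET api/hotspots/search response, and the fetching itself is omitted since it needs a live server and token:

```python
# Sketch: find Security Hotspots reviewed as SAFE on master that are back to
# TO_REVIEW on a feature branch. The dict fields (ruleKey, component, line,
# status, resolution) are an assumption about the api/hotspots/search shape.

def hotspot_identity(h: dict) -> tuple:
    """Identify a hotspot by rule, file and line rather than by its key,
    since keys differ between branches."""
    return (h["ruleKey"], h["component"], h.get("line"))

def lost_resolutions(master_hotspots: list, branch_hotspots: list) -> list:
    """Hotspots that are SAFE on master but TO_REVIEW again on the branch."""
    safe_on_master = {
        hotspot_identity(h)
        for h in master_hotspots
        if h.get("status") == "REVIEWED" and h.get("resolution") == "SAFE"
    }
    return [
        h for h in branch_hotspots
        if hotspot_identity(h) in safe_on_master and h.get("status") == "TO_REVIEW"
    ]
```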

Please help us find the root cause of this weird SonarQube behaviour.

Many thanks for any advice!

Used software:
SonarQube Developer Edition 9.9.1 LTS
Sonar-scanner 4.8.0.2856
Jenkins 2.387.3 LTS
SonarQube Scanner for Jenkins plugin 2.15
SonarQube deployed inside Kubernetes with Docker image

Hi Community, we have run into some weird SonarQube behaviour; could you please help us understand why this issue appears?
How the workflow looks:
We are using both pull request and branch analysis for our CI purposes. During PR analysis, SonarQube finds the security hotspot:
Make sure using this hardcoded IP address "1000::70" is safe here.
Then we marked the security hotspot as Safe and merged the feature branch into master.
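For reference, the "mark as Safe" step corresponds to POST api/hotspots/change_status in the Web API; a minimal sketch of the parameters (the hotspot key and comment are placeholders):

```python
# Sketch: build the form parameters for POST api/hotspots/change_status,
# which is what "marking a hotspot as Safe" does under the hood.
# The hotspot key below is a placeholder, not a real key from our instance.

def change_status_params(hotspot_key: str, resolution: str = "SAFE", comment: str = "") -> dict:
    """Review a hotspot with the given resolution (e.g. SAFE or FIXED)."""
    params = {
        "hotspot": hotspot_key,
        "status": "REVIEWED",
        "resolution": resolution,
    }
    if comment:
        params["comment"] = comment
    return params
```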

Recent activity:

* May 8, 2023 at 1:00 PM

The issue has been copied from branch 'feature/BRANCH_NAME' to branch '#792'

**user@users.noreply.github.com** created Security Hotspot - May 8, 2023 at 12:53 PM

* May 4, 2023 at 11:14 AM

The issue has been copied from branch 'master' to branch 'feature/BRANCH_NAME'

**user2** - April 29, 2023 at 12:29 PM

Resolution changed to SAFE

Status changed to REVIEWED (was TO_REVIEW)

* April 29, 2023 at 1:47 AM

The issue has been copied from branch 'SOME_BRANCH' to branch 'master'

But after that, during the SonarQube master branch analysis, the same security hotspot appeared again and we had to mark it as Safe again.

Recent activity:

**user3** - June 10, 2023 at 1:56 PM

Resolution changed to SAFE

Status changed to REVIEWED (was TO_REVIEW)

* June 9, 2023 at 1:47 AM

The issue has been copied from branch 'feature/OTHER_BRANCH' to branch 'master'

**user@users.noreply.github.com** created Security Hotspot - June 8, 2023 at 10:35 AM

As I understand it, issues marked as Safe during pull request analysis should retain that status after the merge into master, and should not appear again during the master scan. Have I missed something?
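To see exactly when and where a hotspot loses its reviewed state, the changelog from GET api/hotspots/show can be inspected. A small sketch, assuming changelog entries shaped like the API's diffs array (user, creationDate, and a list of key/oldValue/newValue diffs):

```python
# Sketch: extract status/resolution transitions from a hotspot changelog,
# as returned in the "changelog" array of GET api/hotspots/show.
# The entry shape here is an assumption about the API response.

def review_transitions(changelog: list) -> list:
    """Return (date, field, old, new) tuples for status/resolution changes."""
    out = []
    for entry in changelog:
        for diff in entry.get("diffs", []):
            if diff.get("key") in ("status", "resolution"):
                out.append((
                    entry.get("creationDate"),
                    diff["key"],
                    diff.get("oldValue"),
                    diff.get("newValue"),
                ))
    return out
```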

Thank you for any advice!

The issue happened again on our SonarQube. Are there any updates on it?
Thanks!

SonarQube Developer Edition 9.9.1 LTS
Sonar-scanner 4.8.0.2856
Jenkins 2.387.3 LTS
SonarQube Scanner for Jenkins plugin 2.15
SonarQube deployed inside Kubernetes with Docker image

Hi Community.
In our CI infrastructure we use SonarQube to analyse C++ code, but time after time we face Security Hotspot resolution issues. I have already mentioned this in another topic: SonarQube Security Hotspot resolution status is not retained correctly. In short: Security Hotspots appear again even though they were reviewed during the PR scan.
Could you please clarify with which status a Security Hotspot should be reviewed (Acknowledged, Fixed, or Safe) during the pull request scan, so that it retains its status after merging into the master branch?
Or maybe the review status doesn’t matter, and we are facing these issues due to a misconfiguration of the SonarQube server itself?
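For clarity, here is how we understand the possible review states, expressed as a small validator. This encodes our assumption based on the 9.9 UI (statuses TO_REVIEW/REVIEWED, resolutions Fixed, Safe, Acknowledged), not an authoritative list:

```python
# Sketch: the review states a Security Hotspot can take, as we understand them.
# The sets below are an assumption drawn from the SonarQube 9.9 UI.
from typing import Optional

VALID_STATUSES = {"TO_REVIEW", "REVIEWED"}
VALID_RESOLUTIONS = {"FIXED", "SAFE", "ACKNOWLEDGED"}

def is_valid_review(status: str, resolution: Optional[str]) -> bool:
    """A hotspot is TO_REVIEW with no resolution, or REVIEWED with one."""
    if status == "TO_REVIEW":
        return resolution is None
    return status == "REVIEWED" and resolution in VALID_RESOLUTIONS
```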

Thank you for your help

Hello @yevhenhnes,

Thanks for reaching out and for the details you shared!
Unfortunately, we have not been able to reproduce the issue yet.

Similar issues have been reported by other users and I’ve opened a ticket to gather all the relevant information.
You can track the progress there.

If we need more information during the investigation, we’ll reach out to you in this thread.