Details on Housekeeping DbCleaner properties with description and range

Hello Everyone,

I’m checking the Housekeeping settings under Project Settings > General Settings > Housekeeping. These are currently set to their default values.

To change the default values at an organisation level, the change needs to be made under “Administration > General > Housekeeping”.

I’m using the below version and documentation for the above info:

Version: Enterprise Edition Version 9.9.3 (build 79811)
Documentation: SonarQube housekeeping

I don’t have access to the Administration settings, so I need help understanding the points below to complete my analysis and request changes from the admin. The analysis snapshots for the same day are getting deleted (when there is more than one), and I want to prevent that:

Quoted from docs: “Only one snapshot per day is kept after 1 day. Snapshots marked by an event are not deleted.”

  • Can this housekeeping rule be simply disabled?
  • What is a snapshot marked by an event?
  • Can I mark a SonarQube analysis run against any commit SHA in the main/release branch with an event so it’s not deleted?

I’d also appreciate it if someone could share the description and acceptable range of values for the following properties:

  • sonar.dbcleaner.hoursBeforeKeepingOnlyOneSnapshotByDay
  • sonar.dbcleaner.weeksBeforeKeepingOnlyOneSnapshotByWeek
  • sonar.dbcleaner.weeksBeforeKeepingOnlyOneSnapshotByMonth

Any documentation reference for the descriptions or ranges, or any use-case examples of the above, would also be helpful; I couldn’t find any. All I found was the following, but I’m not sure of its accuracy.

Source: sonar-tools · PyPI

INFO:

  • DB Cleaner:
    • Delay to delete inactive SLB (7.9) or branches (8.x) between 10 and 60 days
    • Delay to delete closed issues between 10 and 60 days
    • sonar.dbcleaner.hoursBeforeKeepingOnlyOneSnapshotByDay between 12 and 240 hours (0.5 to 10 days)
    • sonar.dbcleaner.weeksBeforeKeepingOnlyOneSnapshotByWeek between 2 and 12 weeks (0.5 to 3 months)
    • sonar.dbcleaner.weeksBeforeKeepingOnlyOneSnapshotByMonth between 26 and 104 weeks (0.5 year to 2 years)
    • sonar.dbcleaner.weeksBeforeDeletingAllSnapshots between 104 and 260 weeks (2 to 5 years)

Thank you for your time.

Hi @Akash_Dwivedi,
Why do you want to keep all analyses for the main/release branch?

You may find some explanation in this topic: Keep analyses and everything else forever (turn off dbcleaner) possible? - #4 by ganncamp

And here are the descriptions of these parameters from the SonarQube user interface:
Keep only one analysis a day after
After this number of hours, if there are several analyses during the same day, the DbCleaner keeps the most recent one and fully deletes the other ones.
Key: sonar.dbcleaner.hoursBeforeKeepingOnlyOneSnapshotByDay
Default: 24

Keep only one analysis a week after
After this number of weeks, if there are several analyses during the same week, the DbCleaner keeps the most recent one and fully deletes the other ones.
Key: sonar.dbcleaner.weeksBeforeKeepingOnlyOneSnapshotByWeek
Default: 4

Keep only one analysis a month after
After this number of weeks, if there are several analyses during the same month, the DbCleaner keeps the most recent one and fully deletes the other ones.
Key: sonar.dbcleaner.weeksBeforeKeepingOnlyOneSnapshotByMonth
Default: 52

Keep only analyses with a version event after
After this number of weeks, the DbCleaner keeps only analyses with a version event associated.
Key: sonar.dbcleaner.weeksBeforeKeepingOnlyAnalysesWithVersion
Default: 104

Delete all analyses after
After this number of weeks, all analyses are fully deleted.
Key: sonar.dbcleaner.weeksBeforeDeletingAllSnapshots
Default: 260

Delete closed issues after
Issues that have been closed for more than this number of days will be deleted.
Key: sonar.dbcleaner.daysBeforeDeletingClosedIssues
Default: 30
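
If your admin wants to change one of these at the global level without going through the UI, the settings Web API can also be used. A minimal sketch in Python, assuming a requests-based script; the instance URL, token, and value are placeholders, and the token needs global administration permission:

```python
# Minimal sketch only: update a global housekeeping property via the
# settings Web API. SONAR_URL and ADMIN_TOKEN are placeholders.
import requests

SONAR_URL = "https://sonarqube.example.com"
ADMIN_TOKEN = "squ_..."  # placeholder admin token

resp = requests.post(
    f"{SONAR_URL}/api/settings/set",
    auth=(ADMIN_TOKEN, ""),  # token goes in as the username, empty password
    data={
        "key": "sonar.dbcleaner.hoursBeforeKeepingOnlyOneSnapshotByDay",
        "value": "72",  # example value only; check the accepted range for your version
    },
)
resp.raise_for_status()
```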

Hi Bachri Abdel,

Thanks for looking into this. I’ll share some more context for the community.

I’m using pre-promotion checks, and that use case requires SonarQube result data for every individual commit to the main branch.

A pre-promotion check is basically a set of validations performed right before moving the code from a lower environment to a higher environment.

Say a PR is merged into the main branch and the latest commit hash is “be6c75b85da526349c44e3978374c95e0b80a96d”. SonarQube will run for this commit, and say the data is stored under analysis ID “A2”.

In the pre-promotion check, it’ll make a GET request to api/project_analyses/search.

The check then extracts and stores the revision and analysis key from the JSON response.

It’ll then compare the revision with the commit being promoted and, once matched, make a GET request to api/qualitygates/project_status, passing the analysis ID “A2” as a parameter, to fetch the status.

This gives me a response that tells me the SonarQube quality gate status for that revision.
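
For clarity, here’s a rough sketch of that flow as I’ve described it. It’s only an illustration: the instance URL, token, and project key are placeholders, and the field names (revision, key, projectStatus.status) reflect what I see in the api/project_analyses/search and api/qualitygates/project_status responses on our instance:

```python
# Rough sketch of the pre-promotion check described above.
# SONAR_URL, TOKEN and PROJECT_KEY are placeholders for our real setup.
import requests

SONAR_URL = "https://sonarqube.example.com"
TOKEN = "squ_..."                 # placeholder token
PROJECT_KEY = "my-microservice"   # hypothetical project key
COMMIT_SHA = "be6c75b85da526349c44e3978374c95e0b80a96d"

auth = (TOKEN, "")  # SonarQube tokens go in as the username, empty password

# 1. List recent analyses on main and find the one whose revision matches
#    the commit being promoted.
analyses = requests.get(
    f"{SONAR_URL}/api/project_analyses/search",
    params={"project": PROJECT_KEY, "branch": "main", "ps": 100},
    auth=auth,
).json()["analyses"]

match = next((a for a in analyses if a.get("revision") == COMMIT_SHA), None)
if match is None:
    raise SystemExit("No analysis found for this commit -- promotion blocked")

# 2. Fetch the quality gate status for that specific analysis key (e.g. "A2").
status = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"analysisId": match["key"]},
    auth=auth,
).json()["projectStatus"]["status"]

print(f"Quality gate for {COMMIT_SHA}: {status}")  # e.g. OK / ERROR
```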

The above validation system works perfectly even if 20 PRs are merged in a day and the pre-promotion checks happen immediately. But if the check happens on any later day (say, a code-freeze scenario), only the last analysis is present and the other 19 analyses are missed.

That’s why I asked the following questions:

If I disable the “Keep only one analysis a day after” property, I’ll be able to retain all analyses for 4 weeks, after which “Keep only one analysis a week after” will take care of the clean-up. We’d also be able to increase or decrease this retention period by changing the latter property.

If disabling “Keep only one analysis a day after” is not an option, I wanted to know the range it can be set to; it defaults to 24 hours. Can I change it to 672 to retain all analyses for 4 weeks (28 days × 24 hours = 672 hours)? Is there an upper limit to this property, or could I even set 10000000000000000 hours and keep all analyses for that period?

My third option was to create a snapshot with an event, but I’m not finding a good roadmap in the documentation to achieve that.
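
The only lead I’ve found so far is the api/project_analyses/create_event endpoint. If it works the way I read it, marking an existing analysis with a VERSION event would look roughly like the sketch below; the analysis key and token are placeholders, and I haven’t verified that this actually protects the snapshot from the same-day cleanup:

```python
# Rough idea only, not verified: attach a VERSION event to an existing
# analysis so the "keep only analyses with a version event" rule applies.
# TOKEN and ANALYSIS_KEY are placeholders; the token needs 'Administer'
# permission on the project.
import requests

SONAR_URL = "https://sonarqube.example.com"
TOKEN = "squ_..."    # placeholder token
ANALYSIS_KEY = "A2"  # analysis key returned by api/project_analyses/search

resp = requests.post(
    f"{SONAR_URL}/api/project_analyses/create_event",
    auth=(TOKEN, ""),
    data={
        "analysis": ANALYSIS_KEY,
        "category": "VERSION",
        "name": "be6c75b",  # e.g. the short commit SHA as the event name
    },
)
resp.raise_for_status()
```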

I can’t change the underlying logic as it’s aligned with our CI/CD setup. This issue pops up once in a while, and we work around it by just running SonarQube for that commit again, but I’d like to mitigate its occurrence by tweaking the housekeeping rules.

Lastly, the descriptions provided in the previous answer are available under Project Settings > General Settings > Housekeeping, and I had already checked those. Any other input on the above three queries, or info on how Administration > General > Housekeeping can configure these properties, would be helpful.

Thank you for your time. Have an awesome day!

Hi @Akash_Dwivedi,
Out of curiosity, I don’t quite understand the pre-promotion checks. Is this a specific mechanism that you have put in place? Why does a blocking quality gate not meet your needs?

I would be interested to learn more about your approach.

Hi Bachri Abdel,

Yes, pre-promotion checks are a specific mechanism put in place in our delivery pipeline by the DevOps team (my team). They are a set of validations we perform while moving code from a lower environment to a higher one. Basically, all these checks should pass before code is promoted from development to higher environments.

SonarQube validation is part of the pre-promotion check. We manage thousands of microservices, and they all have a SonarQube check running on the main and release branches of their respective repositories. Every microservice is a different project in SonarQube.

Now, promotion to a higher environment might occur on the same day or on a different day. If a developer tries to promote the code to a higher environment, the pre-promotion check uses the two API calls mentioned in my previous answer to determine whether the quality gate standards are met for that particular commit.

Scenario based example:

Say there’s a code freeze for a week due to some maintenance activity and code isn’t moved from dev to production during that week. For the entire week, the dev team works on 3 features in parallel. After the week, two features are completed and one is still under development. In that case, the dev team would want to promote the two ready features. To achieve that, they’d create a release from an older commit that doesn’t include any changes from the 3rd feature.

When we try to promote this older commit, the validation (pre-promotion check) will say there’s no analysis for that commit in SonarQube, as only the analysis for the last commit of each day is kept, due to sonar.dbcleaner.hoursBeforeKeepingOnlyOneSnapshotByDay.

So we perform the workaround I mentioned in my previous reply, i.e. run the SonarQube analysis for the older commit again in that microservice’s repo. But this isn’t sustainable as there are 1000+ repositories, so I need an organisation-level standard housekeeping rule that can help mitigate this problem.

Hope this clears it up so you can check on the 3 questions I listed in my previous reply.

Thank you for taking an interest in this problem statement.

Hi @Akash_Dwivedi
As far as I’m concerned, quality control is a matter for continuous integration, not continuous deployment. In a shift-left approach, if the microservice doesn’t pass the quality gate, it doesn’t even get published to the binaries manager. It’s as if it hasn’t been compiled.

Assuming you’re using GitFlow, only feature branches that pass the quality gate are mergeable and releasable.

SonarQube offers many ways to optimise analysis time and performance, based on new code detection and branch management. It is fully compatible with GitFlow.

Finally, working at the commit level on thousands of microservices must be quite a challenge. I think you can feel it.

As you think about your GitFlow and release management, you will find that using out-of-the-box features will make your life easier without compromising your goals.