SonarQube upgrade strategies

We are upgrading SQ (on Amazon Linux) from 6.7.2 LTS to 7.9.3 LTS.
What would be your choice for the SQ upgrade:

  1. Parallel: get new servers for DB/app, restore the DB and app onto them, do an in-place upgrade on the new servers, expose them under a new URL like sonarqubeprod7, point new projects at it, then slowly migrate the existing production projects to sonarqubeprod7 on their own timelines.
  2. In place: force every prod project to follow the upgraded jenkins-shared-library (with new versions of the Maven Sonar plugin, etc.).
  3. Parallel, but with a phase 2 to swap the URLs. For that we’d ideally “teach” new projects to take the URL as a variable/parameter, so that when sonarqubeprod7 is swapped back to the plain old sonarqubeprod name, projects already using sonarqubeprod7 can easily be reverted to the original generic sonarqubeprod URL (see the sketch after this list).
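
For option 3, here is a minimal sketch of what “teach new projects to take the URL as a parameter” could look like in a declarative Jenkinsfile. The parameter name SONARQUBE_URL and the hostnames are illustrative assumptions, not anything the shared library defines today:

```groovy
// Hypothetical Jenkinsfile fragment: the SonarQube URL is read from a pipeline
// parameter instead of being hard-coded, so swapping sonarqubeprod7 back to the
// generic sonarqubeprod name later is a one-line default change, not an edit in
// every project.
pipeline {
    agent any
    parameters {
        string(name: 'SONARQUBE_URL',
               defaultValue: 'https://sonarqubeprod7.example.com',   // assumed hostname
               description: 'SonarQube server this build publishes analysis to')
    }
    stages {
        stage('Sonar analysis') {
            steps {
                // sonar.host.url is the standard scanner property
                sh "mvn -B verify sonar:sonar -Dsonar.host.url=${params.SONARQUBE_URL}"
            }
        }
    }
}
```

At phase 2 only the default value (or the corresponding Jenkins global property) changes, and an individual project can still override it back to the old server.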

We did an in-place upgrade in dev and it is fine (after we upgraded the test pipelines to version 3.7 of the Maven Sonar plugin, because our current 3.2 has a bug where it reports incompatibility with SQ 7). But doing it in prod without an easy way back or an “interim period” is a bit scary: we are a conservative bank serving other member banks, and to state the obvious, users could not care less which version we use, but they very much care if production and/or projects are disrupted.
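
For the plugin pin, one option is to invoke the scanner by its fully qualified coordinates from the pipeline so an old pom can’t pull 3.2 back in. A sketch, assuming the usual org.sonarsource.scanner.maven coordinates; the exact 3.7 patch number and the SONARQUBE_URL environment variable are assumptions to adjust to your setup:

```groovy
// Fragment of a scripted pipeline / shared-library step. Calling the scanner goal by
// its fully qualified coordinates pins the plugin version from the pipeline side, so
// projects whose poms still reference 3.2 pick up 3.7 without a pom change.
def sonarMavenPlugin = 'org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746'  // verify against your artifact repository

stage('Sonar analysis') {
    sh "mvn -B ${sonarMavenPlugin}:sonar -Dsonar.host.url=${env.SONARQUBE_URL}"
}
```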

Any thoughts appreciated. Thanks in advance!

Hi Dan,

It sounds like you are going down the right path by practicing your upgrade in a test environment and preparing.

Regarding parallel systems, I’d avoid them if possible. They double your administration effort and lead to a situation where you eventually have to force a few stragglers to move. Your #2 above is the recommended way to go.

One thing to consider: if you have build-breaking in place (e.g. the Jenkins plugin’s waitForQualityGate() step), you may want to disable it temporarily to avoid unforeseen disruption to your pipelines.
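
For example, a sketch of a switchable gate in a scripted pipeline stage; the ENFORCE_QUALITY_GATE flag name is an assumption, and waitForQualityGate() still requires the analysis to have run inside withSonarQubeEnv with a webhook configured from SonarQube back to Jenkins:

```groovy
// Switchable quality gate for the upgrade window: the gate status is always reported,
// but the build is only failed when ENFORCE_QUALITY_GATE is set to 'true'.
stage('Quality Gate') {
    timeout(time: 10, unit: 'MINUTES') {
        def qg = waitForQualityGate()
        if (qg.status != 'OK') {
            if (env.ENFORCE_QUALITY_GATE == 'true') {
                error "Pipeline aborted: quality gate status is ${qg.status}"
            } else {
                echo "Quality gate status is ${qg.status} (not enforced during the upgrade window)"
            }
        }
    }
}
```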

Good luck on your upgrade!

Brian


Thank you, Brian @bcipollone!
We are leaning towards combined solution #3 for the following reasons:

  • It gives us time to upgrade asynchronously: the to-be-prod servers can be configured ahead of time (at least the file system can be), we can upgrade the DB ahead of time too, and we can test the entire system.
  • It gives us access to the to-be-prod servers, so we can tune nofile/nproc limits, JVM parameters, etc. ourselves, or work closely with the admins without pressure, and then declare the system live and remove our own access.
  • It lets us back out easily to a system we haven’t touched, so we know it should still work.
  • It allows us to set a deadline and a phase 2 schedule that is comfortable for everyone yet well defined.
  • It lets us handle one-off exceptions by redirecting troubled projects back to the old system, even if they have to modify the URL in their Jenkinsfile.
  • It makes it easier to get approvals.
  • It doesn’t regress any projects from prod to dev (as would happen if we used dev as an interim or backout environment) and doesn’t violate any security policy or procedure.
  • It sets a precedent for future upgrades and encourages us to externalize volatile strings such as URLs and versions (see the sketch below).
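
As a sketch of that externalization, a hypothetical shared-library step (vars/sonarAnalysis.groovy is an assumed name) that keeps the URL and the scanner version in one place:

```groovy
// vars/sonarAnalysis.groovy -- hypothetical shared-library step that centralizes the
// volatile strings (server URL, scanner version). Projects call sonarAnalysis() with
// no arguments; at phase 2 only the defaults below change, and a troubled project can
// still override the URL back to the old server. Hostnames and version are assumptions.
def call(Map args = [:]) {
    def hostUrl       = args.hostUrl       ?: 'https://sonarqubeprod7.example.com'
    def scannerPlugin = args.scannerPlugin ?: 'org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746'

    sh "mvn -B ${scannerPlugin}:sonar -Dsonar.host.url=${hostUrl}"
}
```

A one-off exception would then look like `sonarAnalysis(hostUrl: 'https://sonarqubeprod.example.com')` in that project’s Jenkinsfile.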

Any issues with this approach?

Thank you!

Hi Dan,

It makes sense to plan for what works for the teams and organization you support. You’re testing the upgrade and planning the transition, so you’re on the right path. You’re also laying the groundwork for future upgrades to ensure the next one is smoother.

A problem with “parallel” instances is that any analyses run against the “old” instance will not be reflected in the new one. Make sure your users understand this gap during the transition.

Good luck on your upgrade.

Brian


Thank you. About the gap: is there a way to easily migrate the results of the scans done during that gap (say a week of running in parallel)? And/or, what is the best way to export the history so we can put it in our documentation (a Confluence wiki, for example) for future reference? Finally, are the scans stored in the file system on the app side, in the database, or split between both locations?

Hi Dan,

All data other than configuration (i.e. conf/sonar.properties) is stored in the database. This includes scan results.

When you run a scan, the results are stored in the database of the instance indicated by the sonar.host.url parameter. There is no way to move them to another instance other than running the scans again.

In your case the scans performed on the old instance during the gap would be lost when you fully transition.
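
If you only need the gap’s numbers for documentation rather than migration, the measure history can be pulled over the Web API before the old instance is retired. A rough sketch, assuming api/measures/search_history is available on your version and a user token with Browse permission on the project; the host, token, and project key below are placeholders:

```groovy
// Standalone Groovy sketch: pull measure history over the Web API so the "gap" numbers
// can be pasted into Confluence before the old instance is retired.
import groovy.json.JsonSlurper

def host       = 'https://sonarqubeprod.example.com'   // old instance
def token      = 'REPLACE_WITH_USER_TOKEN'
def projectKey = 'com.example:my-service'

def url  = "${host}/api/measures/search_history" +
           "?component=${URLEncoder.encode(projectKey, 'UTF-8')}" +
           "&metrics=bugs,vulnerabilities,coverage&ps=500"
def conn = new URL(url).openConnection()
// SonarQube token auth: token as the Basic-auth username with an empty password
conn.setRequestProperty('Authorization', 'Basic ' + (token + ':').bytes.encodeBase64().toString())

def history = new JsonSlurper().parse(conn.inputStream)
history.measures.each { m ->
    println m.metric
    m.history.each { point -> println "  ${point.date}  ${point.value}" }
}
```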

Brian

