Post-migration SonarQube reindex taking 20+ hours


We are migrating a large production database (roughly 640 GB) from AWS RDS MySQL to AWS Aurora PostgreSQL using pgloader. The data transfer takes 8.5 hours, and before the instance comes back up it performs an Elasticsearch re-index that is still running at 20+ hours and has grown to over 100 GB on disk. My team wanted to reach out to the community for advice.
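For reference, a minimal pgloader load file for this kind of MySQL-to-PostgreSQL move looks something like the sketch below. The hostnames, credentials, and worker counts here are placeholders, not our real settings:

```shell
# Hypothetical pgloader load file; hostnames and credentials are placeholders.
# pgloader streams the MySQL schema and data into PostgreSQL in one pass.
cat > migrate.load <<'EOF'
LOAD DATABASE
     FROM mysql://sonar:secret@rds-mysql.example.com/sonar
     INTO postgresql://sonar:secret@aurora-pg.example.com/sonar
WITH workers = 8, concurrency = 2
ALTER SCHEMA 'sonar' RENAME TO 'public';
EOF

pgloader migrate.load
```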

  1. Is there an easy way to shrink our database using the SonarQube admin panel options? It looks like the majority of the space is packed into just four tables: file_sources, project_measures, issues, and projects.
  2. Is there some way to skip this initial re-index or move it into the background so we can limit our outage window?
  3. Alternatively, if there isn’t a way for us to skip the index, is there some way we can greatly improve performance?
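For reference on question 1, we sized the tables with a standard PostgreSQL catalog query along these lines (connection details are placeholders):

```shell
# Report the ten largest tables by total on-disk size;
# pg_total_relation_size includes indexes and TOAST data.
psql -h aurora-pg.example.com -U sonar -d sonar -c "
  SELECT relname,
         pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;"
```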

SonarQube version: 6.7.5 (we are upgrading to the latest LTS after this migration)
Instance type: c5.9xlarge (36 vCPU, 72 GB memory)
Database type: db.r5.2xlarge


Welcome to the community. I can’t answer all your questions, but I do have some answers on specific points:

We just did exactly that: move the indexation to the background to limit the outage. That feature is included starting with SonarQube 8.4, which was just released. Here is the MMF with the history and details.

Indexation speed depends almost entirely on CPU speed and disk I/O. The c5.9xlarge looks like a powerful CPU instance, so I would guess the limitation is disk I/O. Some users mount a dedicated disk with faster I/O on the sonarqube/data/ folder, which holds the Elasticsearch indexes.
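As a sketch of the dedicated-disk idea, assuming a zip install under /opt/sonarqube and an attached provisioned-IOPS EBS volume (the device name, paths, and service name are assumptions; adapt them to your layout):

```shell
# Hypothetical: move sonarqube/data/ onto a faster io1/io2 EBS volume.
# Stop SonarQube before touching the data folder.
sudo systemctl stop sonarqube
sudo mkfs.xfs /dev/nvme1n1                        # format the new volume
sudo mv /opt/sonarqube/data /opt/sonarqube/data.old
sudo mkdir /opt/sonarqube/data
sudo mount /dev/nvme1n1 /opt/sonarqube/data       # mount it on data/
sudo cp -a /opt/sonarqube/data.old/. /opt/sonarqube/data/
sudo chown -R sonarqube: /opt/sonarqube/data
sudo systemctl start sonarqube
```

Add a matching /etc/fstab entry so the mount survives reboots.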

Just to clarify your situation: your plan is to move from MySQL to Aurora PostgreSQL as a first step, and then to upgrade to the latest LTS as a second step? What is the time window between the two steps?

Hi Pierre,

Thanks for getting back to us so quickly; I had to confer with my team about your suggestion. I am going to try greatly increasing our IOPS to see if that improves index time. Our current plan is to migrate to PostgreSQL for the increased performance, then upgrade during the next startup, back-to-back.

Can we upgrade directly from 6.7.5 to 8.4? Based on the documentation, it seems we would have to go through 7.9.1 LTS either way.

Thanks for your help so far,

You do indeed need to follow the upgrade path 6.7 -> 7.9 -> 8.4. But when upgrading to 7.9, you don’t need to wait for the Elasticsearch rebuild before proceeding with the 8.4 upgrade.

So I would suggest:

  • migrate to PostgreSQL
  • upgrade to 7.9.3, and stop SonarQube as soon as the database migration is done
  • upgrade to 8.4, and benefit from the asynchronous index rebuild

Please note that we will release an 8.4.1 to fix some migration bugs, so you should migrate directly to 8.4.1 rather than 8.4.
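A rough sketch of that sequence for a zip install (paths, archive names, and the service layout are assumptions; your sonar.properties must be carried forward at each step):

```shell
# Hypothetical upgrade sequence; adjust paths, versions, and JDBC settings.
cd /opt
/opt/sonarqube-6.7.5/bin/linux-x86-64/sonar.sh stop

# Step 1: 6.7.5 -> 7.9.3. Start, trigger the DB migration at
# http://<host>:9000/setup, then stop as soon as it finishes.
unzip sonarqube-7.9.3.zip
# ...copy conf/sonar.properties (PostgreSQL JDBC URL) into sonarqube-7.9.3/conf/
sonarqube-7.9.3/bin/linux-x86-64/sonar.sh start
# ...wait for the /setup migration to complete, then:
sonarqube-7.9.3/bin/linux-x86-64/sonar.sh stop

# Step 2: 7.9.3 -> 8.4.1; the index rebuild now runs asynchronously.
unzip sonarqube-8.4.1.zip
# ...copy conf again, then:
sonarqube-8.4.1/bin/linux-x86-64/sonar.sh start
```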

Keep me posted about the performance impact of the I/O change.

Hi again Pierre,

We started work on the 7.9 LTS portion of our upgrade, and the database migration has now run for over 5 hours on a test instance whose database is only 6.5 GB. Is this process known to scale with database size, or would it be expected to take this long regardless of the amount of data? If runtime scales with data size, this could be an even larger issue than the re-indexing.


It’s expected that the db-migration time scales with the database size. From what I remember, our migration benchmark gave an estimated time of 14h for a 500 GB dataset.

Hi Pierre,

Can you tell us more about your experience migrating/upgrading that 500 GB database? How big were your provisioned instances, and what was your SonarQube configuration? I ask because our 7 GB database is still running its migration/upgrade at 14 hours.

We also noticed this JIRA ticket, which you seem to have been the main driver of. Can we ask you some questions about it?

  1. Your comment says the dataset contained 5.8k projects; how was this number derived? Based on API calls, our production instance appears to have 78k, which I find unbelievable.
  2. During our non-prod upgrade testing we are not seeing I/O, memory, or CPU being heavily utilized on the SonarQube instance. Is this because all the work is performed on the database? If so, did you make any database tweaks that noticeably sped up this process?

Thanks again,
Brandon and Team

We used an AWS m5.xlarge EC2 instance for SonarQube and a db.m5.xlarge for the database, in the same availability zone to reduce network latency. SonarQube had the standard configuration, out of the box from the zip.

That seems like a lot, given your dataset size. Are your database and SonarQube instance on the same, fast network? How many issues do you have in total?

What do you find unbelievable, your number of projects? What API call did you make to arrive at that number? You should just browse /about with an admin account: the number shown there is “how many projects I can see”, so as an admin you see them all.
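If you do want the number from the Web API, something like the call below should match what /about shows for an admin. The hostname is a placeholder, and jq is used only to pull out the total:

```shell
# Hypothetical call against api/projects/search (requires admin credentials,
# passed here as a token with an empty password); paging.total is the total
# number of projects visible to that user.
curl -s -u "$SONAR_TOKEN:" \
  "https://sonar.example.com/api/projects/search?ps=1" | jq '.paging.total'
```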

A note about project count: your database size will vary mostly with the number of files and issues. You could have 1 gigantic project with billions of issues, which would result in a “big” database, or 40k almost-empty projects, which would result in a small one.

The database migration is intensive only for the database. The indexation that follows is intensive for both SonarQube and the database, but more so on the SonarQube side: CPU and IOPS on the /data/ folder. We typically used 3000 provisioned IOPS on this folder during our tests. Note that we used SonarQube Data Center Edition for the indexation, which allowed us to share the indexation load across 3 Elasticsearch nodes.

Yes, we do have a recommendation for PostgreSQL: run a VACUUM operation before the migration. The documentation here advises this after the migration is done, but doing it beforehand helps as well.
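A minimal sketch of that pre-migration step (connection details are placeholders):

```shell
# Pre-migration maintenance run: VACUUM reclaims dead tuples and
# ANALYZE refreshes planner statistics before the heavy migration queries.
psql -h aurora-pg.example.com -U sonar -d sonar -c "VACUUM (VERBOSE, ANALYZE);"
```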

Thank you so much, Pierre! Your information and assistance are extremely valuable to my team and me. I confirmed on that page that we have 78k projects and 99M issues in our production instance; it was the size of our instance that I found hard to believe. We have some strategies we are going to apply prior to migrating/upgrading, and we will let you know how they go!