Slow analysis after SonarQube upgrade

Hello,

We recently upgraded from SonarQube 10.3 to 10.6, and the SonarQube analysis now takes much longer than before the upgrade.

There are huge variations in the “Project Analysis” duration.

Before the upgrade the worst case was up to ~6 minutes; after the upgrade it is up to 60 minutes.

It seems to take far longer when analyzing a new branch.

We currently have no idea what to do. We tried tweaking the VM and ran VACUUM on the database.

We are now downgrading back to 10.3.

We found this issue:
SonarQube interface is very slow after updating to 10.4.1 - SonarQube - Sonar Community (sonarsource.com)

Is there anything we can do?

Thanks, and kind regards
Armin Sczuka

Here is a strange finding from our SonarQube Docker logs:

2024.07.03 15:56:54 INFO ce[6adffde4-fdbe-4061-8594-e349d4a61a68][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=575506 | status=SUCCESS | time=2276427ms
2024.07.03 15:56:55 INFO ce[6adffde4-fdbe-4061-8594-e349d4a61a68][o.s.c.t.s.ComputationStepExecutor] Persist duplication data | insertsOrUpdates=202 | status=SUCCESS | time=533ms
2024.07.03 15:56:55 INFO ce[6adffde4-fdbe-4061-8594-e349d4a61a68][o.s.c.t.s.ComputationStepExecutor] Persist new ad hoc Rules | status=SUCCESS | time=0ms

Persist live measures took 2,276,427 ms (roughly 38 minutes) :thinking:

Lines of code: 350k
Lines: 600k

Hey there.

What database are you using behind the scenes? Is it Postgres?

Good morning,

We are using postgres:16.2-alpine as a Docker container running on the same host as SonarQube.
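
In case it helps narrow things down, the exact Postgres version and the memory settings it actually runs with can be checked directly from psql. These are standard Postgres commands, nothing specific to our setup:

-- Version and effective memory settings of the Postgres instance
SELECT version();
SHOW shared_buffers;
SHOW work_mem;
SHOW maintenance_work_mem;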

I’d suggest making sure you run VACUUM FULL ANALYZE (you mentioned VACUUM, but not ANALYZE).
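
Something along these lines (a minimal sketch; live_measures is the table name from the standard SonarQube schema, and VACUUM FULL takes an exclusive lock, so run it outside analysis hours):

-- Rebuild and re-gather statistics for the table behind the slow "Persist live measures" step
VACUUM FULL ANALYZE live_measures;

-- Or rebuild and analyze every table in the SonarQube database
VACUUM FULL ANALYZE;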

We are currently moving the 10.6 installation onto a separate (non-production) VM so that we can dig into the delays.

I will post further data when the migration is done.

Here is our initial finding for the analysis time distribution between 10.3 and 10.6:

We will perform a VACUUM FULL ANALYZE on the Postgres database.

Then we will try to swap the production and test instances for a few days (probably next week) in order to have more data.

Hi there. We did a similar thing. We are on MS SQL Server. I checked index fragmentation and it was necessary to defragment, so I set up a job that does it every week. It helps a lot, but we still sometimes see big lags in the UI.
After installing 10.6, the issue is still the same.
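
For reference, the weekly check looks roughly like this (a sketch, not our exact job; the database name and the 30% threshold are assumptions):

-- List indexes with more than 30% fragmentation in the SonarQube database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('sonarqube'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Rebuild all indexes on a heavily fragmented table (table name is just an example)
ALTER INDEX ALL ON live_measures REBUILD;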

Hi all, same problem here after moving to 10.6

We are persisting a lot more live measures than before, and it takes a long time. The live_measures table is 18 GB in size.

2024.07.31 07:07:46 INFO  ce[97073282-63ce-4df0-9aa3-7c7ae32f6bbf][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=51276 | status=SUCCESS | time=12554ms
2024.07.31 07:09:00 INFO  ce[8be70975-7e87-4129-a368-063a271a28c1][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=68805 | status=SUCCESS | time=18748ms
2024.07.31 07:14:21 INFO  ce[630d9792-2b93-4ddd-83ce-8301ed5adac2][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=94270 | status=SUCCESS | time=294199ms
2024.07.31 07:15:21 INFO  ce[af166fb6-d2d5-49ec-816c-bbb090726887][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=10635 | status=SUCCESS | time=5003ms
2024.07.31 07:16:56 INFO  ce[a2a1aa2e-f7eb-4c14-ac0d-20db552d6c70][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=75338 | status=SUCCESS | time=79511ms
2024.07.31 07:17:38 INFO  ce[75de9cc1-a0c8-4443-8dce-1db2fc705bc5][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=75338 | status=SUCCESS | time=20396ms
2024.07.31 07:18:18 INFO  ce[eda3db76-caea-4602-a41b-d7b6e81f11e8][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=12033 | status=SUCCESS | time=24066ms
2024.07.31 07:19:14 INFO  ce[d413238f-4089-4d13-bb63-a4e0d2a3add2][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=12027 | status=SUCCESS | time=37082ms
2024.07.31 07:22:14 INFO  ce[237d2f2d-d914-4c1d-9ba6-8ecfd03abe3c][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=48696 | status=SUCCESS | time=146917ms
2024.07.31 07:22:58 INFO  ce[87133501-eb76-42df-b8a6-13410b03c47f][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=12131 | status=SUCCESS | time=5260ms
2024.07.31 07:24:27 INFO  ce[090fc164-79a6-4f87-937f-15c3850699fc][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=68753 | status=SUCCESS | time=72003ms
2024.07.31 07:24:58 INFO  ce[5df9d389-aaef-457e-9424-156021977093][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=8232 | status=SUCCESS | time=18598ms
2024.07.31 07:25:34 INFO  ce[a88659fb-bde7-4b13-a920-1b4c06a52cff][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=35871 | status=SUCCESS | time=20691ms
2024.07.31 07:25:46 INFO  ce[52d5372c-ef4c-4b9d-92e1-aac097cb62c4][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=24711 | status=SUCCESS | time=5391ms
2024.07.31 07:26:01 INFO  ce[2a576e9f-c5ce-4cc4-90b1-48a0fb149dee][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=24711 | status=SUCCESS | time=6704ms
2024.07.31 07:26:21 INFO  ce[00df7f28-eceb-438d-bf20-2689c2ac932b][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=19002 | status=SUCCESS | time=10635ms
2024.07.31 07:27:09 INFO  ce[b67a558b-e2e7-4794-bacb-41770b8a66ab][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=100347 | status=SUCCESS | time=23872ms
2024.07.31 07:31:23 INFO  ce[806f4e8b-904e-485e-bd6f-3061bda0d2fc][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=9852 | status=SUCCESS | time=2684ms
2024.07.31 07:32:30 INFO  ce[3a21a8a8-fcc5-44cf-88ac-4c0981dcff5e][o.s.c.t.s.ComputationStepExecutor] Persist live measures | insertsOrUpdates=8202 | status=SUCCESS | time=1746ms
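
For anyone who wants to check the same thing on their side, the size and approximate row count of the table can be read directly from Postgres (standard queries; live_measures is the table mentioned above):

-- Total on-disk size of live_measures, including indexes and TOAST
SELECT pg_size_pretty(pg_total_relation_size('live_measures'));

-- Approximate row count from planner statistics (an exact COUNT(*) on a table this size is slow)
SELECT reltuples::bigint AS approx_rows FROM pg_class WHERE relname = 'live_measures';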

Hey @justoaguilar

What version did you upgrade from, and what database backend are you using?

@sczuka Did your investigations turn up anything else?

Colleague of sczuka here. We have now retested 10.6.0 after VACUUM FULL ANALYZE, unfortunately to no avail, or perhaps with a tiny improvement. It is still the case that the maximum analysis time goes from around 15 minutes to 50 minutes and more, and in general the average analysis time across all builds and projects seems to go up by a factor of 3 to 10 :frowning_face:

We also retried the whole upgrade, this time going just from 10.3.0 to 10.4.1, and already encountered the problem there too. Again, vacuuming did not really help.

So to sum it up a bit:

  • SonarQube 10.3.0 plus Postgres 16.2 or 16.3: okay
  • SonarQube 10.4.1 plus Postgres 16.3: unbearably slow
  • SonarQube 10.6.0 plus Postgres 16.2: unbearably slow

Hello SonarQube team,

Any news on this topic? We are facing the same issue after upgrading from the LTS 9.9.5 to 10.6.0.

Any workaround or solution?

Thanks in advance

Hey @user_user @morco @justoaguilar @Cyril035

I don’t have any more advice right now. However, I am collecting a few different data points (across the community and our enterprise support channels) to provide a consolidated summary of these reports to our PMs/devs. Stay tuned.

There is one more hint we can provide. After more testing, we can now say that the critical slowdown also happens with version 10.4.0, so the breaking point seems to be the minor version jump from 10.3 to 10.4.

Hey again @user_user @morco @justoaguilar @Cyril035

While I haven’t been able to reproduce your exact performance degradation, I did some investigation.

I decided to take a big project (~4000 files, 850k LoC, just a modified version of what’s in GitHub - SonarSource/sonar-scanning-examples: Shows how to use the Scanners), run my own tests, and measure the time for:

  • First analysis of the main branch
  • Second analysis of the main branch
  • First analysis of a non-main branch (after the first two analyses)

This was all done on brand new SQ servers (no upgrading) against a local Postgres server. Admittedly, a fairly optimal setup.

I chose these versions to compare the last LTA against 10.3 (where we did a lot of refactoring of the DB model), 10.4 (new measures, as noted in SONAR-21949), and 10.7 (the latest).

These are the results:

Version        1st Scan   2nd Scan   1st Scan, New Non-Main Branch
9.9.7          48s        39s        28s
10.3           49s        35s        70s
10.4           67s        50s        66s
10.7           58s        56s        97s

It definitely looks like something we should investigate, and I’ve passed this on to the right folks.

I would be curious to know if your performance issues also start only at 10.3/10.4 (@morco made clear this is the case), but I understand that it’s quite some work to reproduce.

I’ll keep you posted if something comes out of it, on top of the performance improvements we already expect in 10.8 with SONAR-22870.

We are also running into this issue after upgrading SonarQube to 10.6 and are waiting on a fix.

@Colin Do you have any idea when we can expect these performance improvements? I see the ticket you linked is flagged as “Done,” but other posts mention we shouldn’t expect 10.8 until sometime after January 2025.

At the moment, our largest project has a 3-30 hour build queue, which is frankly unacceptable.

Hey @chrism

The release is expected in 2-3 weeks (first week of December), not January 2025.

I re-ran my test with a pre-release build of 10.8:

Version        1st Scan   2nd Scan   1st Scan, New Non-Main Branch
10.7           58s        43s        79s
10.8 (97674)   35s        28s        69s

It seems we definitely captured some performance back, although there is still something to investigate (why did it drop in the first place?).

@Colin That is some excellent news! I am also curious to hear why the degradation occurred, as well as how we’re going to make sure it doesn’t happen again!

@Colin I see that 10.8 dropped on Dec 1st, but the release notes make no mention of any performance fixes. We're going to test this out in staging anyway, but I'm curious why it wouldn't have made it into the list.

Hi,

We did tests with Community Build 24.12.0.100206 and also see no performance improvement. It's still at the same level as SQ 10.7.

Regards,
Günter