SonarQube will only run one job per project despite multiple workers

Must-share information (formatted with Markdown):

  • SonarQube 10.6
  • Docker
  • Trying to run jobs for different branches of the same project in parallel
  • The CE option that should enable this is turned on

Basically the same issue as the one mentioned here: 10.5 Enterprise only analyses one tasks at a time despite multiple workers configured


Hi,

Parallel processing within a project is available but off by default.

Note that even once you enable it, there are still some limits on processing branches from the same project in parallel (what if one thread is processing a main-branch analysis while another is processing a branch that uses main as its reference?).
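If you want to see what the Compute Engine is actually doing while you test this, the Web API can help. A minimal sketch, assuming you have an admin token; SONAR_URL and SONAR_TOKEN are placeholders for your server URL and token:

# Snapshot of the CE queue: pending vs. in-progress task counts.
curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/activity_status"

# List queued and running tasks with the branch/PR each one belongs to.
curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/activity?status=PENDING,IN_PROGRESS" \
  | jq '.tasks[] | {componentKey, branch, pullRequest, status}'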

 
HTH,
Ann

Hi Ann,

The option to perform multiple runs in parallel is already enabled. Additionally, none of the builds being run are for the main branch. Currently, one PR build is being run for a branch, and four are queued for other branches on the same project. We have two workers sitting idle and not picking up the queued jobs.

Hi,

I just gave that as an illustration of why we don’t run branches in parallel with each other. Per the docs I linked to:

Once enabled, SonarQube can analyze one branch and several pull requests together at any given time.

So it's expected that your branch analyses would be queued.

 
Ann

I'm not sure I understand. Are you saying that SonarQube will not run analyses for different branches in parallel?

What will it run in parallel?

Hi,

CI-side you can analyze everything in parallel. (You knew this already, but for posterity’s sake…) On the server side, branches from the same project will not be processed in parallel. What will be processed in parallel is PRs from the same project and branches from different projects. So, these can all run at the same time:

projectA-main
projectB-main
projectC-develop
projectA-PR#34
projectA-PR#35
projectA-PR#36
projectB-PR#302
projectB-PR#303
projectB-PR#298
projectC-PR#9
&etc

Notice that we have one branch from each project and multiple PRs. Now, I doubt many people have enough CE workers to actually run all those^ in parallel, but if all of those things were queued at (approximately) the same time, there would be no idle workers.

What won’t run together is this:

projectA-main
projectA-develop
projectA-feature3
projectA-feature27
projectA-feature7
&etc

These^ will all run serially, no matter how many workers you have.
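You can verify that this is what's happening in your own queue by counting pending tasks per project. A rough sketch over the same Web API, with the same SONAR_URL/SONAR_TOKEN placeholders as above:

# Count pending CE tasks per project; one project dominating the queue
# while workers sit idle is the serialization described above.
curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/activity?status=PENDING" \
  | jq -r '.tasks[].componentKey' | sort | uniq -c | sort -rn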

Does that help?

 
Ann

Yup, that helps. Thanks Ann

Hey Ann!
I work with Chris, and the main issue that led us to look into this is that we're seeing long queue times for scans. It's all for one project with many branches that keep backing up, while the other projects have no problem taking the other workers. I'm wondering if you have any ideas on how we could speed up our scans?

Just want to chime in with this post I made the other day. Basically:

  • We're always trying to improve CE performance; we have some stuff in the pipeline
  • Network/disk latency + DB performance are almost always the key indicators of CE performance (a couple of quick probes below). Out of curiosity, what DB do you use?
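For a rough first read on those, here are a couple of quick probes you could run from the SonarQube host; db.internal.example stands in for your database endpoint:

# Round-trip network latency to the database host.
ping -c 5 db.internal.example

# Extended disk I/O stats, three 5-second samples; watch the await and %util columns.
iostat -x 5 3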

We are using Postgres via AWS Aurora

Thanks. I ask because many Postgres users find a speed-up after performing some regular maintenance.
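In case it's useful, a minimal maintenance pass looks something like this; the connection details are placeholders, and sonarqube is assumed as the database name:

# Refresh planner statistics and clean up dead tuples across the schema.
psql -h "$DB_HOST" -U "$DB_USER" -d sonarqube -c 'VACUUM (VERBOSE, ANALYZE);'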

Yeah, we ran that right after the upgrade and went from a 24-hour queue time to averaging 1-8 hours.

Today this project has a 4-hour wait time on builds, which is way too long. Does SQ itself collect any performance metrics on the CEs to help troubleshoot?

I’m working on aggregating metrics across projects to spot trends in build times since the upgrade, but anecdotally it looks like some of our build times are up more than 50%.

Your /logs/ce.log file will probably clue you in to where time is being spent during CE processing. If you can narrow it down to the specific project that is taking a while, extract those logs and, say, run a grep to get all the time=...ms entries with six or more digits:

grep -E 'time=[0-9]{6,}ms' ce.log

Ultimately you might fiddle with this filter (swap out the 6 for a 5 or a 7) based on what you find.
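If the filter turns up a lot of hits, a slightly fancier pipeline over the same log format will rank the slowest steps for you:

# Prefix each matching line with its millisecond value, then sort descending
# and keep the 20 slowest entries.
grep -E 'time=[0-9]+ms' ce.log \
  | sed -E 's/.*time=([0-9]+)ms.*/\1 &/' \
  | sort -rn | head -20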