We are currently facing a critical issue with our SonarQube Enterprise Edition v10.3.0.82913. The background analysis tasks for projects are randomly taking an excessively long time, often exceeding 72 hours. As a result, we find ourselves having to restart the server to force cancel these tasks. Our Azure DevOps pipelines are impacted, resulting in timeouts and failures.
We have attempted to address this issue by increasing the number of workers to the maximum allowed (10), but unfortunately, we are still encountering the same prolonged processing times. Our team has diligently reviewed both the CE and web logs, but we were unable to identify any exceptions being thrown.
Welcome to the community!
You describe this as random, but I wonder if there’s any pattern to be noted with
- project size
- time of day
The docs include some things to consider when tuning for performance:
Increasing the number of workers will increase the stress on the resources consumed by the CE. Those resources are:
- the DB.
- disk I/O.
- the network.
All of those are external to the CE itself.
If slowness comes from any of the external resources (DB, disk I/O, network), then increasing the number of workers could actually slow the processing of individual reports (think of two people trying to go through a door at the same time).
I would probably back down from having 10 workers configured (how many are active simultaneously?), and look at the other factors.
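To put the worker-count advice in perspective, here is a back-of-envelope throughput calculation. The numbers below (5 workers, a 3-minute average report time) are hypothetical, not taken from this thread, but they show that a modest worker count can already cover a large daily load:

```python
def max_reports_per_day(workers: int, avg_minutes_per_report: float) -> float:
    """Upper bound on CE throughput if every worker stays fully busy."""
    return workers * (24 * 60) / avg_minutes_per_report

# Hypothetical: 5 workers, 3 minutes per report on average.
print(max_reports_per_day(5, 3))  # 2400.0 reports/day
```

The point is that the ceiling is usually set by how long each report takes, not by the worker count: if individual reports slow down because the DB or disk is saturated, adding workers lowers this ceiling rather than raising it.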
Thank you for your response.
We’ve scanned the web and CE logs for failures but found no discernible patterns. Can you specify where in the logs we should focus? We’ve considered switching to TRACE logging, but we’re concerned about the potential performance impact and the log volume. And if errors were occurring, shouldn’t they be captured even at the INFO level?
Our project experiences intermittent successes and failures. We’ve followed the referenced documentation on the CE, with one exception: our disk is not an SSD. With a daily analysis load of around 1,000, we worry that reducing the number of workers would make background tasks take even longer.
The core issue surfaces when analysis tasks get stuck in progress: SonarQube’s quality gate status is affected, but nothing is reported back to Microsoft Azure DevOps Server 2022.
First, did you consider the factors I listed in my previous reply at all?
I’m not asking for patterns in logs, but patterns in occurrences.
Regarding logs, I would not go to TRACE, but IIRC, DEBUG will get you SQL statements in the logs. That could allow you to pinpoint precisely where in the process things slow down. Which brings me to the database: have you checked its performance? Have you checked the network between SonarQube and the DB?
And again, I would back off from having 10 workers configured unless you have a very big server under SonarQube.
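On the network question: one low-tech way to sanity-check the path between SonarQube and the database is to time raw TCP connects to the DB port. This is only a sketch (the host and port below are placeholders, and it measures connection setup, not query latency):

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int, samples: int = 5) -> float:
    """Average TCP connect time to host:port, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        # Open and immediately close a connection; we only time the handshake.
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.perf_counter() - start
    return total / samples * 1000.0

# Placeholder endpoint -- substitute your actual DB host and port
# (e.g. 5432 for PostgreSQL):
# print(f"{tcp_connect_latency_ms('db.example.internal', 5432):.2f} ms")
```

Consistently high or spiky numbers here would point at the network rather than the CE itself; for query-level timing you would still need the database's own monitoring tools.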