Out Of Memory after updating 9.9 -> 10.8

Must-share information (formatted with Markdown):

  • which versions are you using (SonarQube Server / Community Build, Scanner, Plugin, and any relevant extension)
  • SonarQube 10.8 from zip
  • what are you trying to achieve
    running the scanner CLI and/or the Gradle plugin
  • what have you tried so far to achieve this
    Had already increased the Xmx because otherwise the DB migrator/updater would fail:
    sonar.web.javaOpts=-Xmx4G -Xms128m -XX:+HeapDumpOnOutOfMemoryError
    sonar.ce.javaOpts=-Xmx5G -Xms128m -XX:+HeapDumpOnOutOfMemoryError

But this won’t go away:

Processing of request /api/project_branches/list?project=[...] failed
java.lang.OutOfMemoryError: Java heap space
        at java.base/java.util.Arrays.copyOf(Arrays.java:3537)
        at java.base/java.lang.String.<init>(String.java:574)
        at org.postgresql.core.Encoding.decode(Encoding.java:284)
        at org.postgresql.core.Encoding.decode(Encoding.java:295)
        at org.postgresql.jdbc.PgResultSet.getString(PgResultSet.java:2436)
        at org.postgresql.jdbc.PgResultSet.getString(PgResultSet.java:2930)
        at com.zaxxer.hikari.pool.HikariProxyResultSet.getString(HikariProxyResultSet.java)
        at org.apache.ibatis.type.StringTypeHandler.getNullableResult(StringTypeHandler.java:36)
        at org.apache.ibatis.type.StringTypeHandler.getNullableResult(StringTypeHandler.java:26)
        at org.apache.ibatis.type.BaseTypeHandler.getResult(BaseTypeHandler.java:86)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.applyAutomaticMappings(DefaultResultSetHandler.java:586)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.getRowValue(DefaultResultSetHandler.java:416)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleRowValuesForSimpleResultMap(DefaultResultSetHandler.java:366)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleRowValues(DefaultResultSetHandler.java:337)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleResultSet(DefaultResultSetHandler.java:310)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleResultSets(DefaultResultSetHandler.java:202)
        at org.apache.ibatis.executor.statement.PreparedStatementHandler.query(PreparedStatementHandler.java:66)
        at org.apache.ibatis.executor.statement.RoutingStatementHandler.query(RoutingStatementHandler.java:80)
        at org.apache.ibatis.executor.ReuseExecutor.doQuery(ReuseExecutor.java:62)
        at org.apache.ibatis.executor.BaseExecutor.queryFromDatabase(BaseExecutor.java:336)
        at org.apache.ibatis.executor.BaseExecutor.query(BaseExecutor.java:158)
        at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:110)
        at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:90)
        at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:154)
        at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:147)
        at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:142)
        at org.apache.ibatis.binding.MapperMethod.executeForMany(MapperMethod.java:147)
        at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:80)
        at org.apache.ibatis.binding.MapperProxy$PlainMethodInvoker.invoke(MapperProxy.java:141)
        at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:86)
        at jdk.proxy2/jdk.proxy2.$Proxy146.selectByComponentUuids(Unknown Source)
        at org.sonar.db.measure.MeasureDao.lambda$selectByComponentUuidsAndMetricKeys$0(MeasureDao.java:102)

Hello @Ruby_Paasche,

Thank you for the report.
Does the project you are analyzing have a particularly large number of branches? Is this issue happening for a specific project or for all projects?
It would be interesting if you could monitor the JVM and see if you are already close to the limit before running the analysis.
Have you tried increasing the memory above 4G?
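To monitor heap usage over time, one option (just a suggestion, not an official recommendation) is to enable GC logging by appending the JDK's unified logging flag to the javaOpts you already have; the log file paths below are placeholders to adapt to your setup:

sonar.web.javaOpts=-Xmx4G -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Xlog:gc*:file=/var/log/sonarqube/web-gc.log:time,uptime
sonar.ce.javaOpts=-Xmx5G -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Xlog:gc*:file=/var/log/sonarqube/ce-gc.log:time,uptime

The resulting logs show heap occupancy before and after each collection, which would tell us whether the process is already near the limit before an analysis starts.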

Yes, in fact it has a very large number of branches; currently starting a test with 8G for CE.

The point is, it all worked with the default values of 9.9, so there seem to be some changes that increased the memory consumption (not mentioned in the docs).

The failure is happening on a web API endpoint, so it’s the web process memory that needs to be increased.

Yes, right, I saw it after the first test and increased web to 8G instead.

But now the CE memory seems too low:

2024.12.05 11:38:48 ERROR ce[2464067f-af97-4577-b154-e8f6730bbdad][o.s.c.t.CeWorkerImpl] Failed to execute task 2464067f-af97-4577-b154-e8f6730bbdad
java.lang.OutOfMemoryError: Java heap space
        at java.base/java.util.Arrays.copyOf(Arrays.java:3537)
        at java.base/java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:228)
        at java.base/java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:582)
        at java.base/java.lang.StringBuilder.append(StringBuilder.java:179)
        at org.apache.ibatis.reflection.wrapper.BeanWrapper.setBeanProperty(BeanWrapper.java:180)
        at org.apache.ibatis.reflection.wrapper.BeanWrapper.set(BeanWrapper.java:61)
        at org.apache.ibatis.reflection.MetaObject.setValue(MetaObject.java:119)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.applyAutomaticMappings(DefaultResultSetHandler.java:592)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.getRowValue(DefaultResultSetHandler.java:416)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleRowValuesForSimpleResultMap(DefaultResultSetHandler.java:366)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleRowValues(DefaultResultSetHandler.java:337)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleResultSet(DefaultResultSetHandler.java:310)
        at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleResultSets(DefaultResultSetHandler.java:202)
        at org.apache.ibatis.executor.statement.PreparedStatementHandler.query(PreparedStatementHandler.java:66)
        at org.apache.ibatis.executor.statement.RoutingStatementHandler.query(RoutingStatementHandler.java:80)
        at org.apache.ibatis.executor.ReuseExecutor.doQuery(ReuseExecutor.java:62)
        at org.apache.ibatis.executor.BaseExecutor.queryFromDatabase(BaseExecutor.java:336)
        at org.apache.ibatis.executor.BaseExecutor.query(BaseExecutor.java:158)
        at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:110)
        at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:90)
        at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:154)
        at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:147)
        at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:142)
        at org.apache.ibatis.binding.MapperMethod.executeForMany(MapperMethod.java:147)
        at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:80)
        at org.apache.ibatis.binding.MapperProxy$PlainMethodInvoker.invoke(MapperProxy.java:141)
        at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:86)
        at jdk.proxy2/jdk.proxy2.$Proxy88.selectBranchMeasuresForProject(Unknown Source)
        at org.sonar.db.measure.MeasureDao.findNclocOfBiggestBranchForProject(MeasureDao.java:144)
        at org.sonar.ce.task.projectanalysis.step.ProjectNclocComputationStep.execute(ProjectNclocComputationStep.java:41)
        at org.sonar.ce.task.step.ComputationStepExecutor.executeStep(ComputationStepExecutor.java:79)
        at org.sonar.ce.task.step.ComputationStepExecutor.executeSteps(ComputationStepExecutor.java:70)

Indeed this is doing the same as the API endpoint, storing the measures of all the project’s branches in memory, so the memory should be increased for the CE process as well.

In 10.8, a change in the data model of measures was made, which results in better write performance and a much smaller table size. You can see more details on the ticket SONAR-22870. The trade-off is that more memory is used for read access.

Do you mind sharing how many branches this project has?

49 branches (some older ones which will be cleaned up in a few days) and 21 PRs. So it’s not that big, I think.

Got it working with:

sonar.web.javaOpts=-Xmx8G -Xms128m -XX:+HeapDumpOnOutOfMemoryError
sonar.ce.javaOpts=-Xmx8G -Xms5G -XX:+HeapDumpOnOutOfMemoryError
sonar.search.javaOpts=-Xmx2G -Xms2G -XX:MaxDirectMemorySize=256m -XX:+HeapDumpOnOutOfMemoryError

I checked our projects a little more deeply; we have multiple projects with the same amount of branches, but somehow only npm (ts/jsx) projects had this problem.

Glad to hear you got it working.
Indeed, 49 branches is not that much.
Would you be able to run this query on your database? It would help us understand why memory usage is particularly high for this project. You can send the results via private message.

select pb.kee, pb.branch_type, m.* from measures m
inner join project_branches pb on pb.uuid = m.component_uuid
inner join projects p on p.uuid = pb.project_uuid
where p.kee = '<your_project_key>';

Thanks!

It ran for 5:03 minutes and returned an empty result.

Replacing component_uuid with branch_uuid works:

select pb.kee, pb.branch_type, m.* from measures m
inner join project_branches pb on pb.uuid = m.branch_uuid
inner join projects p on p.uuid = pb.project_uuid
where p.kee = '<your_project_key>';

Receiving 18698 results.

We were able to get the query results in private (there was a query client issue due to the size of the results). Thanks again @Ruby_Paasche.
The large size of measures is due to the dependency-check community plugin, which loads a very large report within a measure (metric key report). In this case ~2 GB of data for 54 branches.

This is not how measures are intended to be used; we will discuss internally whether we should limit the size of a measure’s value. In the meantime, the only workaround is to remove the plugin or increase the JVM memory.
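For anyone else hitting this, a query along these lines can help spot oversized measures (a sketch, assuming the 10.8+ schema where measure values are stored in a json_value column on the measures table, and PostgreSQL's length() function):

select pb.kee, length(m.json_value) as value_size
from measures m
inner join project_branches pb on pb.uuid = m.branch_uuid
order by value_size desc
limit 20;

Rows with values in the hundreds of megabytes would point to a plugin storing large payloads as measures.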

Hello @Ruby_Paasche,
The next major SQ release will include an improvement to reduce the number of measures loaded in memory simultaneously. More details are in the ticket.


@eric.giffon

I’m glad to hear this. But to be sure: by “next major SQ release” do you mean 10.9 or a future 11.x?
(Hopefully 10.9 :slight_smile: )

Yes, I mean 10.9, not a patch release (like 10.8.2).

We could not fix this issue in our project (monorepo with ~1.2M LoC, mainly Java).
We increased heap from the default values to:
web: 48GB
ce: 32GB
And removed the DependencyChecker plugin.

But nearly all scans failed with OOM and the Web UI was not responding on the relevant pages like Project Overview or Project Issues. Meaning SonarQube was not usable!
So we rolled back to 10.4 and hope this issue will be fixed in 10.9.

By the way, does anyone know in which exact version this issue was introduced?

The change to measures was introduced in 10.8.0.
Thank you for the report. Please try again once the next version is released and let us know if you still have issues.


We still have this issue with 2025.1!
web: 48GB
ce: 32GB
It is a little better than 10.8 (now the Project Issues page responds after some time), but the rest is unchanged:

  • Project Overview never responds
  • nearly all scans fail with OOM

So it’s still not usable for us!

We upgraded from 10.4 to 10.7, where everything works just fine with unchanged memory settings:
web: 1GB
ce: 2GB

Hi,

This will be in the next point release of 2025.1, expected next week:

SONAR-24522 UI slowdowns when large measures have been persisted

HTH,
Ann


Hello,
The 2025.1.1 release will also include a change in the upgrade (from 10.7 or less) that will prevent the migration of large measures, which are the reason for the OOM and slowdowns.
