Sonar Scanner Version
SonarQube Version 9.9.4.87374
Postgres DB Version 13.1
Import of zip-File
ERROR: invalid byte sequence for encoding "UTF8": 0x00
Which encoding does the database or table into which the file is imported require?
Is a zipped file really being imported or what is the functional process for importing?
Are there any general requirements for PostgreSQL?
The zip file to be inserted into SonarQube’s Postgres DB consists of individual .pb files.
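For context on these questions: as far as I understand it, SonarQube stores the zipped scanner report as binary data in its ce_task_input table, and a bytea column has no character encoding at all, so 0x00 bytes should only be rejected if the data ends up in a text-typed column. A quick sanity check of the column types on our instance would be a standard information_schema query (ce_task_input being the target table is my assumption):

SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'ce_task_input';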
Hey there.
Can you provide some more details about the error? I assume it’s appearing in your ce.log – can you provide the full stacktrace?
Thanks for answering, Colin! I will tell my teammate to reproduce the error and provide the log, because the last test was made some weeks ago and I cannot find the logs.
What I can provide now is the error we saw at the latest test:
2024.10.14 15:22:24 ERROR web[AZDLnNk6h57v4Bn5Bhch][o.s.s.w.WebServiceEngine] Fail to process request http://xxx
at org.sonar.db.ce.CeTaskInputDao.insert(CeTaskInputDao.java:56)
at org.sonar.server.ce.queue.ReportSubmitter.submitReport(ReportSubmitter.java:182)
at org.sonar.server.ce.queue.ReportSubmitter.submit(ReportSubmitter.java:113)
at org.sonar.server.ce.ws.SubmitAction.handle(SubmitAction.java:100)
at org.sonar.server.ws.WebServiceEngine.execute(WebServiceEngine.java:111)
at org.sonar.server.platform.web.WebServiceFilter.doFilter(WebServiceFilter.java:84)
at org.sonar.server.platform.web.MasterServletFilter$GodFilterChain.doFilter(MasterServletFilter.java:153)
at org.sonar.server.platform.web.SonarLintConnectionFilter.doFilter(SonarLintConnectionFilter.java:66)
at org.sonar.server.platform.web.MasterServletFilter$GodFilterChain.doFilter(MasterServletFilter.java:153)
at org.sonar.server.platform.web.MasterServletFilter.doFilter(MasterServletFilter.java:116)
at jdk.internal.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.apache.catalina.security.SecurityUtil.lambda$execute$0(SecurityUtil.java:222)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
at java.base/javax.security.auth.Subject.doAsPrivileged(Subject.java:584)
at org.apache.catalina.security.SecurityUtil.execute(SecurityUtil.java:250)
at org.apache.catalina.security.SecurityUtil.doAsPrivilege(SecurityUtil.java:202)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:176)
at org.apache.catalina.core.ApplicationFilterChain.lambda$doFilter$0(ApplicationFilterChain.java:137)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:569)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:136)
at org.sonar.server.platform.web.UserSessionFilter.doFilter(UserSessionFilter.java:81)
at org.sonar.server.platform.web.UserSessionFilter.doFilter(UserSessionFilter.java:68)
at jdk.internal.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
…
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: org.postgresql.util.PSQLException: ERROR: out of memory
Detail: Failed on request of size 210250128 in memory context "PortalContext".
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190)
at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:152)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at org.sonar.db.ce.CeTaskInputDao.insert(CeTaskInputDao.java:53)
… 146 common frames omitted
The database team tells me the Postgres database is able to insert a file of that size (the report zip file is 210 MB, 1.5 GB unzipped).
When I watch the memory of the Sonar server during the test, there is sufficient memory for the test to pass. The database team ran a test of their own and told me it is not possible to insert a zip file into the Postgres DB. But it seems the standard procedure of the sonar scanner is to zip the file before uploading it to the SonarQube server.
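If it helps to compare, one could also check the sizes of reports that were stored successfully, assuming the reports land in the input_data column of ce_task_input (my assumption about the table and column names):

SELECT task_uuid, pg_column_size(input_data) AS bytes
FROM ce_task_input
ORDER BY bytes DESC
LIMIT 10;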
2024.11.15 10:15:42 ERROR web[AZL7AaXzVUFVPPVmAQXo][o.s.s.w.WebServiceEngine] Fail to process request http://sonarxxxxxx/api/ce/submit?projectKey=xxx
java.lang.IllegalStateException: Fail to insert data of CE task AZMvG1ZrUof_HfjSgYsN
at org.sonar.db.ce.CeTaskInputDao.insert(CeTaskInputDao.java:56)
at org.sonar.server.ce.queue.ReportSubmitter.submitReport(ReportSubmitter.java:182)
at org.sonar.server.ce.queue.ReportSubmitter.submit(ReportSubmitter.java:113)
at org.sonar.server.ce.ws.SubmitAction.handle(SubmitAction.java:100)
...
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: org.postgresql.util.PSQLException: ERROR: out of memory
Detail: Failed on request of size 210250170 in memory context "MessageContext".
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190)
at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:152)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at org.sonar.db.ce.CeTaskInputDao.insert(CeTaskInputDao.java:53)
... 146 common frames omitted
This is the latest test we made. The log is from web.log; the ce.log had no additional entry after the test.
Any idea about my logfiles, Colin?
Thx
Wolfgang
Hey there.
You should ask your database team how much memory is available on the machine running Postgres. That appears to be where the bottleneck is. Most likely, it needs to be increased.
This is the answer they sent me:
I have already pointed out that more memory does not solve the problem. I successfully inserted a 300 MB file. The zipped file is the problem.
Hey there.
I really have a hard time believing you’re running into anything other than an OS memory limit.
A lot of online resources suggest that you may want to reduce the work_mem of your Postgres database.
What are work_mem and shared_buffers set to on this Postgres server?
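You can read both straight off the server from psql, for example:

SHOW work_mem;
SHOW shared_buffers;

or all of the memory-related settings at once:

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('work_mem', 'shared_buffers', 'maintenance_work_mem', 'max_connections');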
Hello Colin!
These are the settings:
shared_buffers = 1GB
work_mem = 2621kB
thx
Wolfgang
That’s a pretty huge difference from the default of 128MB. As documented:
If you have a dedicated database server with 1GB or more of RAM, a reasonable starting value for shared_buffers is 25% of the memory in your system.
How much memory is on the system running Postgres, and how much of it is free memory?
Info from the db admins: The host has 2 GB of RAM and PostgreSQL does not take the memory “all at once”.
This was the report from db team:
You can easily load a file in the UTF-8 character set with a size of up to 1 TB into the database. As soon as it is a zipped file, PostgreSQL no longer allows this (due to the data type). So it’s a SonarQube problem.
PostgreSQL handles data effectively, as evidenced by the hundreds of thousands of SonarQube instances connected to it; this certainly isn’t an isolated case. However, PostgreSQL may process data differently when working with a zip file and inserting it into the database.
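For what it’s worth, a plain bytea column accepts arbitrary binary content, including zip data with embedded 0x00 bytes; it is the text-typed columns that reject NUL bytes. A minimal demonstration you could run yourself (the hex string is just the leading bytes of a zip header, starting with the 'PK' magic):

CREATE TEMP TABLE zip_test (data bytea);
INSERT INTO zip_test VALUES (decode('504b030414000000080000002100', 'hex'));
SELECT pg_column_size(data) FROM zip_test;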
I recommend that they consider my suggestion and, at the very least, double the available RAM on the machine running PostgreSQL, and perhaps consider different values for shared_buffers (1 GB exceeds the 25% value cited in the Postgres docs).
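For the current 2 GB host, the 25% guideline would put shared_buffers at roughly 512 MB rather than 1 GB. If you want to try that, something like the following works (shared_buffers only takes effect after a server restart):

ALTER SYSTEM SET shared_buffers = '512MB';

or the equivalent line in postgresql.conf. That said, this is only a starting point; the out-of-memory error itself still points at how much RAM the host has overall.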
Thanks for the quick answer. The Sonar client that uploads to the SonarQube instance automatically compresses the report, so in my opinion inserting the zip file into the database should also work, because it seems to be a built-in process, not something our developers wrote themselves. Do you agree?
I don’t really understand what you mean here.
I have no other advice at this point than to increase the memory on your Postgres server. If that doesn’t work, let’s talk some more.
I meant that the sonar client the developer uses creates the report on Windows, then automatically zips the file (so the developer does not write this step as a custom process), and then the sonar client uploads the file to the SonarQube database. So in my opinion it should fit together, because both software products are SonarQube products.
Sorry, I meant the sonar scanner software instead of the sonar client.