NullPointerException in org.sonar.db.component.ComponentDto.getKey() on API call

Problem description:
When executing an issue search with a page size above a certain threshold we get a NullPointerException in org.sonar.db.component.ComponentDto.getKey(). Currently the NPE occurs when we set the page size (the 'ps' parameter) to 378 or greater.

When we set the page size to 100 and request all the available pages by adding and incrementing the ‘p’ parameter, the NPE does not occur and we can successfully retrieve all issues.
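As a reference, here is a minimal sketch of that paging approach (the host, token, and output handling are placeholder assumptions; the filter parameters mirror the failing request below):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch only: pages through api/issues/search with ps=100 and an
// incrementing 'p' parameter. Host and token are placeholders.
public class PagedIssueSearch {
    public static void main(String[] args) throws Exception {
        String baseUrl = "https://sonarqube.example.com";   // placeholder host
        String token = "squ_...";                           // placeholder token
        // SonarQube accepts the token as the user part of HTTP basic auth.
        String auth = Base64.getEncoder().encodeToString((token + ":").getBytes());

        HttpClient client = HttpClient.newHttpClient();
        Pattern totalPattern = Pattern.compile("\"total\":(\\d+)");

        int pageSize = 100;
        int total = Integer.MAX_VALUE;
        for (int page = 1; (page - 1) * pageSize < total; page++) {
            URI uri = URI.create(baseUrl + "/api/issues/search"
                    + "?severities=BLOCKER,CRITICAL,MAJOR"
                    + "&statuses=OPEN,REOPENED&types=VULNERABILITY"
                    + "&ps=" + pageSize + "&p=" + page);
            HttpRequest request = HttpRequest.newBuilder(uri)
                    .header("Authorization", "Basic " + auth)
                    .GET().build();
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

            // Crude: read the first "total" from the body; a real client
            // would use a JSON parser.
            Matcher m = totalPattern.matcher(body);
            if (m.find()) {
                total = Integer.parseInt(m.group(1));
            }
            System.out.printf("page %d fetched (%d chars), total=%d%n",
                    page, body.length(), total);
        }
    }
}

With a single large page (ps=500) instead of paging, the same request fails as follows: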

ERROR web[AY7Q2g5GimswzYcvAAG0][o.s.s.w.WebServiceEngine] Fail to process request http://.../api/issues/search?severeties=BLOCKER,CRITICAL,MAJOR&statuses=OPEN,REOPENED&types=VULNERABILITY&ps=500
java.lang.NullPointerException: Cannot invoke "org.sonar.db.component.ComponentDto.getKey()" because "component" is null
        at org.sonar.server.issue.ws.SearchResponseFormat.addMandatoryFieldsToIssueBuilder(SearchResponseFormat.java:165)
        at org.sonar.server.issue.ws.SearchResponseFormat.createIssue(SearchResponseFormat.java:155)
        at org.sonar.server.issue.ws.SearchResponseFormat.lambda$createIssues$0(SearchResponseFormat.java:149)
        at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
        at java.base/java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:720)
        at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575)
        at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
        at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616)
        at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622)
        at java.base/java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627)
        at org.sonar.server.issue.ws.SearchResponseFormat.createIssues(SearchResponseFormat.java:150)
        at org.sonar.server.issue.ws.SearchResponseFormat.formatSearch(SearchResponseFormat.java:105)
        at org.sonar.server.issue.ws.SearchAction.doHandle(SearchAction.java:454)
        at org.sonar.server.issue.ws.SearchAction.handle(SearchAction.java:412)
        at org.sonar.server.ws.WebServiceEngine.execute(WebServiceEngine.java:111)
        at org.sonar.server.platform.web.WebServiceFilter.doFilter(WebServiceFilter.java:84)
        at org.sonar.server.platform.web.MasterServletFilter$GodFilterChain.doFilter(MasterServletFilter.java:153)
        at org.sonar.server.platform.web.SonarLintConnectionFilter.doFilter(SonarLintConnectionFilter.java:66)
        at org.sonar.server.platform.web.MasterServletFilter$GodFilterChain.doFilter(MasterServletFilter.java:153)
        at org.sonar.server.platform.web.MasterServletFilter.doFilter(MasterServletFilter.java:116)
        at jdk.internal.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:568)

Instance information:

  • SonarQube CE 9.9.4.87374
  • we also tested version 10.5.0 and can confirm that the NPE is raised there as well

We checked our database and found some issues without a matching component. After removing these issues and recreating the Elasticsearch indexes the error still occurs.
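For reference, a sketch of the kind of consistency check meant here; the table and column names (issues.kee, issues.component_uuid, components.uuid) are assumptions about the SonarQube schema and should be verified against your own version, and the connection details are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Lists issues whose component row is missing. Assumed schema: issues(kee,
// component_uuid) and components(uuid); verify against your SonarQube version.
public class OrphanedIssueCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://db-host:5432/sonarqube"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "sonar", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT i.kee, i.component_uuid FROM issues i"
                     + " LEFT JOIN components c ON c.uuid = i.component_uuid"
                     + " WHERE c.uuid IS NULL")) {
            while (rs.next()) {
                System.out.println(rs.getString("kee")
                        + " -> missing component " + rs.getString("component_uuid"));
            }
        }
    }
}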

Does anyone have thoughts on how to solve or further investigate this problem?

Hi,

Welcome to the community and thanks for this report!

Can you provide the API call, please, so we can test/reproduce under the same conditions?

Thx,
Ann

Hello @carlovollebregt, I tried to reproduce locally without success.
As you already pointed out, this is related to the dataset: the issues without a matching component that you found are the real cause of this NPE (a sketch of the failing pattern is at the end of this post).
I am not sure how your DB ended up in this inconsistent state, nor how it is possible that the API endpoint returns everything correctly when the page size is lowered.
Did you manage to retrieve the full dataset in that case?
On my side I will come up with a dataset with the same type of “consistency issues” and see if I manage to get back to a clean state.
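To illustrate the failing pattern, here is a small self-contained sketch; the class and method names only mirror the stack trace, this is not the actual SonarQube code:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the lookup that fails: each issue references a component by
// UUID, and the formatter resolves that reference before calling getKey().
// An issue whose component row is gone resolves to null.
public class NpeIllustration {

    record ComponentDto(String uuid, String key) {
        String getKey() { return key; }
    }

    record IssueDto(String key, String componentUuid) {}

    public static void main(String[] args) {
        Map<String, ComponentDto> componentsByUuid = new HashMap<>();
        componentsByUuid.put("c1", new ComponentDto("c1", "my-project:src/Foo.java"));

        // "i2" references component "c2", which no longer exists in the DB.
        List<IssueDto> issues = List.of(
                new IssueDto("i1", "c1"),
                new IssueDto("i2", "c2"));

        for (IssueDto issue : issues) {
            ComponentDto component = componentsByUuid.get(issue.componentUuid());
            // For i2 this throws the same kind of NPE as in the stack trace:
            // Cannot invoke "...getKey()" because "component" is null
            System.out.println(issue.key() + " -> " + component.getKey());
        }
    }
}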


Hi Matteo, thanks a lot for reaching out and for trying to reproduce our issue.

Did you manage to retrieve the full dataset in that case?

This is an interesting question. I was under the impression that we did get the full dataset when using a lower page size, because we could get past the point (378 issues) at which the error was raised. But looking at the responses, we might not be getting the full dataset at all.

The first response we get with the page size set to the default value of 100 starts with this summary:

{"total":1561,"p":1,"ps":100,"paging":{"pageIndex":1,"pageSize":100,"total":1561

That makes me think I would be able to retrieve 16 pages of issues, of which the first 15 hold 100 issues each and the last one holds 61.
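A quick sanity check of that arithmetic:

// 1561 issues at 100 per page: 16 pages, the last one holding 61.
public class PagingMath {
    public static void main(String[] args) {
        int total = 1561, ps = 100;
        int pages = (total + ps - 1) / ps;       // ceil(1561 / 100) = 16
        int lastPage = total - (pages - 1) * ps; // 1561 - 15 * 100 = 61
        System.out.println(pages + " pages, last page holds " + lastPage + " issues");
    }
}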

But counting occurrences of the term ‘“key”’ in the output, I seem to get:

  • 148 matches on page 1 (which is strange, as I am using p=1&ps=100 as parameters, something the summary shown above also reflects)
  • 87 issues on page 2
  • only 5 on page 3
  • 132 on page 4
  • 121 on page 5
  • 82 on page 6
  • 0 on page 7 and further (tried up to page 20)

Which makes a total of 575 matches. This is more than the 378 issues we could request when we raised the page size until we got the error. And using a page size of 100 we don’t get an error at all; we just get normal responses, even on the pages that contain no issues.

Maybe I am wrong in thinking that each issue in the output should have one and only one ‘key’ field, but a quick glance makes me believe it should?

Hi Matteo_Mara, not meaning to rush you, but I am curious whether you succeeded in creating a dataset with the same type of consistency issues we are experiencing, and whether the additional information about retrieving the full dataset makes sense to you.

Hello @carlovollebregt, I failed to create something that has the same behavior as your instance.
But I will try something a bit more drastic.

Do you have an idea which operation could have broken the integrity of your database? Were components deleted manually from there?

I think I got something similar to your case, except for the null-pointer exception.

If I remove some components and then rerun the search, the reported total matches the number of issues that should be there, but with a page size of 1, out of the 4 results I only get the one on the last page (the other 3 pages are empty).

The counts of the key field you reported also take the components shown in the response into account. So it feels like every page is displaying fewer than 100 issues: only the ones that could be retrieved without problems.
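To make that concrete, here is a sketch that separates the two counts instead of searching the raw text for “key” (it uses Jackson; reading the saved response from page1.json is just an assumption for the example):

import java.nio.file.Files;
import java.nio.file.Path;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Both the "issues" array and the "components" array of an api/issues/search
// response contain objects with a "key" field, so a plain text search counts
// both. Sizing the arrays separately shows the split.
public class CountIssueKeys {
    public static void main(String[] args) throws Exception {
        JsonNode root = new ObjectMapper()
                .readTree(Files.readString(Path.of("page1.json")));
        int issueCount = root.path("issues").size();
        int componentCount = root.path("components").size();
        System.out.println("issues on this page:     " + issueCount);
        System.out.println("components on this page: " + componentCount);
        System.out.println("text matches for \"key\": roughly "
                + (issueCount + componentCount));
    }
}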

I still have to understand how to trigger the null pointer with a big page size; with my dataset the call returns silently even though the DB is partially corrupted.

Hi Matteo, thanks for the update and your efforts on trying to reproduce the exception we get.

To respond to your previous question: we do not know which operation caused the exceptions to start. Our SonarQube instance has been running for quite a few years, and we:

  • removed inactive projects over the years
  • added and removed some rules
  • switched some projects from the Sonar Way quality gate to our own quality gates, and some of them, quite some time later, back to the Sonar Way quality gate

Could this last action, switching from one quality gate to the other and back, be a potential reason?