Discrepancies when gathering data via /export_findings and /issues/search API endpoints

Greetings,

There are discrepancies in the fields returned by the two cumulative issue API endpoints: the Enterprise edition's /export_findings (a non-paginated, single-page dump) and the Community(?) edition's /issues/search (a paginated, multi-page listing).

Firstly, for the same sample project, /export_findings returns 20,579 unique issues (counted via the "key" field in its response). Querying the same project and branch against /issues/search, the total number of issues reported on the first page is 17,690. Is the latter API flattening duplicate or very similar issues in a way the former is not?
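For reference, the unique-issue count above came from de-duplicating on the "key" field. A minimal sketch of that tally (the sample data here is made up; the real list would be the parsed /export_findings response body):

```python
def count_unique_keys(findings):
    """Count distinct issue identifiers in a list of finding objects,
    de-duplicating on the "key" field."""
    return len({f["key"] for f in findings})

# Toy stand-in for the parsed /export_findings response body:
sample = [
    {"key": "AZby-oGulxEnMqGhq7lO"},
    {"key": "AZby-oGulxEnMqGhq7lP"},
    {"key": "AZby-oGulxEnMqGhq7lO"},  # duplicate key, counted once
]
print(count_unique_keys(sample))  # 2
```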

Secondly, regarding the fields each endpoint provides: /export_findings includes the comments field, which /issues/search does not. Conversely, /issues/search provides hash, textRange, flows, debt, author, quickFixAvailable, and messageFormattings, none of which /export_findings does. I thought that e.g. author might have been added in a later release of the server's API, but couldn't find evidence of this in the changelog: https://next.sonarqube.com/sonarqube/web_api/api/projects?query=export

Both endpoints return identical or similar data, but under different field names, e.g.

  • key and projectKey
  • scope and branch
  • line and lineNumber
  • rule and ruleReference

Why is this the case? Why aren’t the fields standardised between endpoints?

If /export_findings is an Enterprise-exclusive endpoint, why does it offer an inferior (i.e. smaller) set of data per issue?

Reluctantly, I decided to use /issues/search due to the greater amount of unique data available per issue, at the cost of paginated requests: for one sample project, that meant a total of 36 API requests to list the 17,690 issues with parameter &ps=500. However, this API is limited to the first 10,000 issues, making it useless for this large-project use case. The error message reads "Can return only the first 10000 results. 17500th result asked." from https://sonarqube.test.server/api/issues/search?componentKeys=my-test&ps=500&p=35. Is this intended behaviour? How am I, as an Enterprise user, supposed to obtain this already-available data in a cohesive manner?
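For context, the request counts above fall out of simple arithmetic. A sketch (the helper names are mine, not part of the API):

```python
def pages_needed(total_issues, page_size):
    """Ceiling division: pages required to list every issue."""
    return -(-total_issues // page_size)

def max_reachable_page(page_size, es_limit=10_000):
    """Last page the endpoint will serve, given the 10,000-result cap
    (a request fails once p * ps exceeds the cap)."""
    return es_limit // page_size

print(pages_needed(17_690, 500))   # 36 requests for a full listing
print(max_reachable_page(500))     # but only 20 pages are reachable
```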

I have copied an issue from each API endpoint below; both issues come from a scan of a .zip of https://github.com/gabrielfalcao/lettuce.

/api/issues/search?componentKeys=my-test&branch=main&ps=500&p=2

{
            "key": "AZby-oGulxEnMqGhq7lO",
            "rule": "yaml:DocumentStartCheck",
            "severity": "MAJOR",
            "component": "my-test:1/.markment.yml",
            "project": "my-test",
            "line": 1,
            "hash": "57d1da85cfa2dfe8ac0236f828430f97",
            "textRange": {
                "startLine": 1,
                "endLine": 1,
                "startOffset": 0,
                "endOffset": 8
            },
            "flows": [   ],
            "status": "OPEN",
            "message": "missing document start \"---\" (document-start)",
            "effort": "2min",
            "debt": "2min",
            "author": "",
            "tags": [
                "convention"
            ],
            "creationDate": "2025-05-21T13:23:31+0100",
            "updateDate": "2025-05-21T13:23:31+0100",
            "type": "CODE_SMELL",
            "scope": "MAIN",
            "quickFixAvailable": false,
            "messageFormattings": [  ]
},

vs.

api/projects/export_findings?project=my-test&branch=main

{
           "key": "AZby-oGulxEnMqGhq7lO",
           "projectKey": "my-test",
           "branch": "main",
           "path": "1/.markment.yml",
           "lineNumber": "1",
           "message": "missing document start \"---\" (document-start)",
           "status": "OPEN",
           "createdAt": "2025-05-21T13:23:31+0100",
           "updatedAt": "2025-05-21T13:23:31+0100",
           "ruleReference": "yaml:DocumentStartCheck",
           "comments": [  ],
           "type": "CODE_SMELL",
           "severity": "MAJOR",
           "effort": "2",
           "tags": "convention"
},

Must-share information (formatted with Markdown):

  • SonarQube LTS 9.9.
  • Deployed standalone from .zip.
  • Trying to achieve the most efficient and comprehensive data retrieval.
  • Have tried the two endpoints above.

Hey there.

Thanks for all this feedback and the great questions.

The GET api/issues/search endpoint does not include Security Hotspots (security-sensitive code locations requiring manual review); these can be fetched separately using another API (GET api/hotspots/search). In contrast, GET api/projects/export_findings includes both issues and hotspots in its output.
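If Security Hotspots account for the count gap, the two totals should reconcile once hotspots are added back in. A hypothetical check (the hotspot count here is illustrative, not measured; the real figure would come from GET api/hotspots/search):

```python
def totals_reconcile(export_total, search_issue_total, hotspot_total):
    """True when export_findings' total equals search issues plus hotspots."""
    return export_total == search_issue_total + hotspot_total

# Illustrative numbers: 20,579 - 17,690 = 2,889 would be the hotspot
# count needed for the two endpoints' totals to line up exactly.
print(totals_reconcile(20_579, 17_690, 2_889))  # True
```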

There are also a few bugs that have been fixed since 9.9, including SONAR-20572 (not sure if this affected 9.9 or started in 10.0) and SONAR-22208.

That’s a fair point. I don’t believe standardization between endpoints was a design goal when these APIs were developed, which has led to inconsistencies in field names. Any changes now would have to be backwards compatible.

Another important distinction is in how the endpoints retrieve data: GET api/issues/search uses Elasticsearch, which enforces a 10,000-result limit and requires pagination. By contrast, GET api/projects/export_findings accesses the database directly and can return all results at once.

If you need detailed per-issue data and can keep queries below 10,000 results (for example, by filtering by creation date, issue type, or other parameters), GET api/issues/search can work well. For larger exports that exceed this limit, only GET api/projects/export_findings is practical, even if the data it returns is sparser.
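One way to keep each query under the cap is to slice by creation date with the endpoint's createdAfter/createdBefore parameters. A sketch that generates one-month windows (the window size is a guess and would need tuning so each slice stays under 10,000 results for your project):

```python
from datetime import date, timedelta

def month_windows(start, end):
    """Yield (createdAfter, createdBefore) date pairs covering [start, end)
    one calendar month at a time, so each api/issues/search slice can be
    paginated independently without hitting the 10,000-result limit."""
    cur = start
    while cur < end:
        # Jump to the first day of the next month.
        nxt = (cur.replace(day=1) + timedelta(days=32)).replace(day=1)
        yield cur, min(nxt, end)
        cur = nxt

windows = list(month_windows(date(2025, 1, 1), date(2025, 4, 1)))
print(len(windows))   # 3 one-month slices
print(windows[0])     # (datetime.date(2025, 1, 1), datetime.date(2025, 2, 1))
```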

The inconsistencies and limitations you’ve described are valid concerns, and I’ll be sure to pass this feedback along! I don’t think we’ve done much with this endpoint since first introducing it in 2021.


Hi @dbeezt,

As Colin says, there are some aspects of the API that we want to explore to make it easier to consume. This feedback is going to be a huge help, so thank you!

Hi Colin,

Thanks for the in-depth and thoughtful response. We have since upgraded our SonarQube version and settled on the export_findings endpoint.