SBOM import: Error from dependency analysis service: 400

Setup

We have a large monorepo and generate individual CycloneDX SBOMs using several tools, then merge them all into a single hierarchical SBOM with `cyclonedx-cli merge --hierarchical`, which we pass to `sonar.sca.sbomImportPaths`.
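
For reference, the merge and import steps look roughly like this (file names and the scanner invocation are illustrative, not our exact pipeline):

```sh
# Merge per-tool CycloneDX SBOMs into one hierarchical SBOM
# (hierarchical merge needs a name/version for the top-level component)
cyclonedx-cli merge --hierarchical \
  --name my-app --version 1.2.3 \
  --input-files backend.cdx.json frontend.cdx.json server-image.cdx.json \
  --output-file merged.cdx.json

# Hand the merged SBOM to the scanner for dependency analysis
sonar-scanner -Dsonar.sca.sbomImportPaths=merged.cdx.json
```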

Error

Dependency analysis failed

Error from dependency analysis service: 400. If needed, contact your Sonar support rep and include the following information: acf6f606-dc46-43c5-94e4-36001cb1ac54

Root cause analysis

The 400 Bad Request is most likely a payload-too-large error in disguise.
In the monorepo we generate many SBOMs, and on release candidates we also build images for both linux/arm64 and linux/amd64, which roughly doubles the image SBOM count:

| Platform config | Image SBOMs | Source SBOMs | Total individual SBOMs |
|-----------------|-------------|--------------|------------------------|
| amd64 only      | ~13         | +2           | ~15                    |
| amd64+arm64     | ~26         | +2           | ~28                    |
| Branch | RELEASE_CANDIDATE | Platform    | Merged SBOM deps | SBOM size | SCA |
|--------|-------------------|-------------|------------------|-----------|-----|
| master | false             | amd64 only  | 5,798            | 20 MB     | OK  |
| a      | false             | amd64 only  | 5,714            | 20 MB     | OK  |
| b      | true              | amd64+arm64 | ~11,400?         | ~35 MB?   | 400 |
| c      | true              | amd64+arm64 | 11,437           | 35 MB     | 400 |

So there appears to be a payload limit, either a bug or an intentional cap, somewhere between 20 MB and 35 MB.
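
For anyone reproducing the numbers above, this is roughly how we measured them (assumes jq is available; merged.cdx.json is the merged SBOM):

```sh
# Uncompressed size of the merged SBOM (the "SBOM size" column)
du -h merged.cdx.json

# Number of components in a CycloneDX JSON SBOM (the "Merged SBOM deps" column)
jq '.components | length' merged.cdx.json
```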

Questions

  1. Is there a documented payload size limit?
  2. Although less convenient, would splitting the SBOM into two files (e.g., one per platform) work around this limit?
  3. If this is an intentional limit, are there any plans to increase it or provide a clearer error message (e.g., 413 Payload Too Large)?

Any help appreciated, thanks!

The current size limit is 30 MB (uncompressed) per file, and (EDIT) 10 MB (compressed) per archive. I admit it’s not currently in the documentation. We are always working to improve our error messaging where possible.

I would ask: why is it important to have one project that contains multiple architectures and containers? Why would you remediate at that level rather than at the level of an individual application or artifact that a developer would track?

> Why would you remediate at that level rather than at the level of an individual application or artifact that a developer would track?

I may have explained the context poorly, so let me clarify: it is a single monorepo that contains multiple ecosystems, backend (Java) and frontend (npm) code, along with Dockerfiles, Helm charts, and related assets. They are all part of a single application.

From that single Git repo, a pipeline builds multiple artefacts:

  • A non-containerized delivery .zip (Java and npm ecosystems)
  • A containerized delivery composed of 12 to 13 container images: the server, web client, worker, scheduler, and others (Java + npm + OS ecosystems: debian, rpm, gobinary)

For us, generating a single merged SBOM containing all dependencies is much simpler than managing 28 separate SBOMs.
We still attach SBOMs to their respective images, but having one repository-level SBOM remains convenient for internal use (comparison tooling, Dependency-Track, etc.).

It is also needed for Dependency-Track, which currently does not support multiple SBOMs per project.

SonarCloud also appears to generate a single SBOM per project.
It seems reasonable to think that the dependency service would be able to parse what SonarCloud produces, even if SonarCloud itself has no real need to consume what it generates.

More generally, IMHO, wouldn’t it make sense to support larger SBOMs for large monorepos?

Please let me know if anything is unclear or if you have any questions, I’ll be happy to answer.


One final point of clarification regarding the 30 MB uncompressed per-file limit: does this mean SonarCloud can process, say, 3 separate SBOMs, each under 30 MB?

```
-Dsonar.sca.sbomImportPaths=a.cdx.json,b.cdx.json,c.cdx.json
```

  • a: merged frontend and backend ecosystems (npm + Java)
  • b: merged amd64 image SBOMs
  • c: merged arm64 image SBOMs

Is my understanding correct?
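
If that works, the split could be produced along the same lines as the full merge (file names and layout are illustrative):

```sh
# a: source ecosystems only (npm + Java)
cyclonedx-cli merge --hierarchical --name my-app-src --version 1.2.3 \
  --input-files backend.cdx.json frontend.cdx.json --output-file a.cdx.json

# b / c: one merged SBOM per platform's container images
cyclonedx-cli merge --hierarchical --name my-app-amd64 --version 1.2.3 \
  --input-files images/amd64/*.cdx.json --output-file b.cdx.json
cyclonedx-cli merge --hierarchical --name my-app-arm64 --version 1.2.3 \
  --input-files images/arm64/*.cdx.json --output-file c.cdx.json
```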

Thank you for your time.

> More generally, IMHO, wouldn’t it make sense to support larger SBOMs for large monorepos?

Yes; we can always bump the limit, but there will still be one, for service stability and reliability reasons. Also, for SonarQube Server, where this is done as a request from the Server instance to Sonar’s Cloud, there is a point at which accepting larger files could cause the processing to hit timeouts along the way.

I’ll look at whether the limit can be bumped (and get whatever it is/will be into the documentation).

> One final point of clarification regarding the 30 MB uncompressed per-file limit: does this mean SonarCloud can process, say, 3 separate SBOMs, each under 30 MB?

Yes, you can pass multiple files as noted. The limit there is that the final compressed dependency-file.tar.xz cannot be over 10 MB; that is not a limit that can be changed at this time.
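
If it helps, a rough way to sanity-check against that compressed limit before scanning (assuming the payload compresses comparably to a plain xz tarball of the SBOMs):

```sh
# Rough estimate of the compressed payload size, in bytes
tar -cJf - a.cdx.json b.cdx.json c.cdx.json | wc -c
```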

We’ve updated the individual size limit to 50 MB, and a documentation update is in progress.
