
Proposal: Stream sbom to disk (avoiding large memory footprint and OOMs) #3263

HairyMike opened this issue Sep 23, 2024 · 5 comments


HairyMike commented Sep 23, 2024

What would you like to be added:
Currently, Syft builds the entire SBOM report in memory before writing it to disk. I propose that instead of building it in memory, we stream it directly to disk.

Why is this needed:
To avoid out-of-memory (OOM) failures when scanning large targets.

Additional context:
SBOM generation:
https://github.com/anchore/syft/blob/main/cmd/syft/internal/commands/scan.go#L199
https://github.com/anchore/syft/blob/main/internal/task/package_task_factory.go#L116
https://github.com/anchore/syft/blob/main/syft/create_sbom.go#L66

Report generation:
https://github.com/anchore/syft/blob/main/cmd/syft/internal/commands/scan.go#L208

@HairyMike HairyMike added the enhancement New feature or request label Sep 23, 2024

spiffcs commented Sep 23, 2024

👋 Thanks for the issue @HairyMike. We're trying to understand the stress points that prompted this suggestion.

Are you dealing with SBOMs with large amounts of packages?
Are the contents of those packages so large that you're running OOM?
Are there any specific formats you're trying to create that are causing memory issues on your machine?

Do you have an image we could use to start tinkering with what something like this could look like? There are a lot of complexities we could assume when trying to organize this solution and some of them might not actually solve the problem that was encountered.

wagoodman (Contributor) commented:

I don't see a way to stream the output format-agnostically as we find packages/relationships. But I imagine we could spool results to a sqlite file on disk and have an SBOM object backed by this sqlite DB to drive how we format the final SBOM. This, however, could run into the same OOM issue depending on the nature of the image.


HairyMike commented Sep 23, 2024

Thanks for the replies @spiffcs and @wagoodman.

This is related to scans of large disks that contain a lot of files. For example - scanning disks attached to a CI machine like Jenkins (where scans can take many hours) can lead to OOMs if the machine doing the scanning doesn't have enough memory.

A few figures from a scan we attempted: on a 250 GB CI node we found 1.5M packages and consumed ~14 GB of memory. Scan time was ~15 hours on an m7i-flex.xlarge.

One way around it is to break the scan into smaller chunks and produce multiple SBOMs, but I think streaming directly to disk would avoid the need for manual chunking or a high-memory scanner.

I don't currently have an image that can be used to reproduce this, although may be able to provide one later.

My hope is that there's a way to incrementally create the SBOM during the scan; that would mean the size of the scan target is limited by disk space (cheap) rather than memory (expensive).

wagoodman commented:

That's a lot of packages! Would you be willing to post a pprof profile for us to take a look at? This can be produced with:

export SYFT_DEV_PROFILE=mem

We're aware of a few memory adjustments to make based on anchore/stereoscope#233 , but I'm interested in your specific profile to see if we have other findings here.

wagoodman commented:

The team chatted about this one today and came to a few conclusions:

The changes we think we'd make to the system are:

  • Pass an SBOM writer object to catalogers rather than having them return slices of packages/relationships. This is less brittle from an API standpoint, and it also allows for a writer interface whose implementation we can swap out (say, for one that spools to disk).
  • There are a few different ways to spool out intermediate results, such as sqlite or protocol buffers -- we should explore more options here. But the first point is the larger one: put a facade in front of these so we can safely swap them out without exposing them on the public API.
  • The sbom.SBOM object would also need to be an interface, not a struct with data. This drives a lot of the behavior downstream, so it is a very impactful change. It would be akin to the v1.Image and v1.Layer interfaces in the GGCR (go-containerregistry) library: interfaces that represent data, with different implementations to handle fetching and transforming that data.
  • It would be fun to additionally have a -o sqlite option in syft.
  • One source of memory pressure today (on top of the stereoscope one mentioned earlier) is that relationships hold copies of objects rather than references. We can't change this behavior until v2 either, since some users type-assert information out of relationship objects.
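A minimal Go sketch of the first conclusion above (names like `SBOMWriter`, `memoryWriter`, and `catalog` are assumptions, not the actual Syft API): catalogers write into an interface, so today's in-memory accumulation could later be swapped for a disk-spooling or sqlite-backed implementation without touching cataloger code.

```go
package main

// Hypothetical writer-facade design: catalogers depend only on SBOMWriter,
// never on where the results are stored.

import "fmt"

type Package struct{ Name, Version string }

type Relationship struct{ From, To string }

// SBOMWriter is the facade catalogers would write into.
type SBOMWriter interface {
	AddPackage(Package) error
	AddRelationship(Relationship) error
}

// memoryWriter mirrors today's behavior: accumulate everything in memory.
// A disk-spooling implementation would satisfy the same interface.
type memoryWriter struct {
	pkgs []Package
	rels []Relationship
}

func (w *memoryWriter) AddPackage(p Package) error {
	w.pkgs = append(w.pkgs, p)
	return nil
}

func (w *memoryWriter) AddRelationship(r Relationship) error {
	w.rels = append(w.rels, r)
	return nil
}

// catalog stands in for a cataloger: it never holds more than one result
// at a time, handing each off to the writer as it is found.
func catalog(w SBOMWriter) error {
	for _, p := range []Package{{Name: "bash", Version: "5.2"}, {Name: "openssl", Version: "3.0"}} {
		if err := w.AddPackage(p); err != nil {
			return err
		}
	}
	return w.AddRelationship(Relationship{From: "bash", To: "openssl"})
}

func main() {
	w := &memoryWriter{}
	if err := catalog(w); err != nil {
		panic(err)
	}
	fmt.Println("packages:", len(w.pkgs), "relationships:", len(w.rels))
}
```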

@wagoodman wagoodman added this to the Syft 2.0 milestone Sep 27, 2024