
[adapter] use an unconsolidated snapshot in table bootstrapping #31314

Open · wants to merge 1 commit into main
Conversation

ParkMyCar (Member)

Pared down version of #31300

When the Coordinator starts, we bootstrap tables by first retracting their entire contents, giving us a clean slate, and then re-emitting all of the updates. In some recent releases we've seen large memory spikes on Coordinator startup tied to creating a consolidated snapshot, and I believe this table bootstrapping is the cause.

We don't need a consolidated snapshot to retract the contents, so this PR uses Persist's snapshot_and_stream API instead, which should bound memory more tightly: it keeps only one Part in memory at a time, and Parts are capped at approximately 128 MiB.
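The idea can be sketched in a few lines. This is a toy illustration, not the real storage_collections code: `Part` and `retract_streamed` are hypothetical names, and plain `Vec`s stand in for Persist's encoded parts. The point is that retraction only needs to negate each update's diff, which can be done one part at a time.

```rust
// Toy model of streaming retraction: `Part` and `retract_streamed` are
// hypothetical stand-ins, not the real Persist / storage_collections API.

type Row = String;
type Diff = i64;
type Part = Vec<(Row, Diff)>;

/// Emit a retraction (negated diff) for every update, one part at a time,
/// so at most one part's worth of decoded data is held in memory.
fn retract_streamed(parts: impl Iterator<Item = Part>, mut emit: impl FnMut(Row, Diff)) {
    for part in parts {
        for (row, diff) in part {
            emit(row, -diff);
        }
        // `part` is dropped here, bounding memory to a single part.
    }
}

fn main() {
    let parts = vec![
        vec![("a".to_string(), 1), ("b".to_string(), 1)],
        vec![("a".to_string(), 1)],
    ];
    let mut retractions = Vec::new();
    retract_streamed(parts.into_iter(), |row, diff| retractions.push((row, diff)));
    println!("{:?}", retractions); // [("a", -1), ("b", -1), ("a", -1)]
}
```

No consolidation step is needed because negating a diff is valid per-update; the retractions and re-emitted updates consolidate away downstream.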

Motivation

Fix memory spike in envd startup

Checklist

  • This PR has adequate test coverage / QA involvement has been duly considered. (trigger-ci for additional test/nightly runs)
  • This PR has an associated up-to-date design doc, is a design doc (template), or is sufficiently small to not require a design.
  • If this PR evolves an existing $T ⇔ Proto$T mapping (possibly in a backwards-incompatible way), then it is tagged with a T-proto label.
  • If this PR will require changes to cloud orchestration or tests, there is a companion cloud PR to account for those changes that is tagged with the release-blocker label (example).
  • If this PR includes major user-facing behavior changes, I have pinged the relevant PM to schedule a changelog post.

* add Persist's snapshot_and_stream API to storage_collections
* use it in the Coordinator's bootstrap_tables method to reduce memory usage
@ParkMyCar ParkMyCar requested review from a team as code owners February 6, 2025 20:30
@ParkMyCar ParkMyCar requested a review from aljoscha February 6, 2025 20:30
bkirwi (Contributor) commented Feb 6, 2025

At first blush I'd expect this to make memory use worse, not better!

This may only fetch one part file at a time... but it keeps the entire unconsolidated, decoded snapshot in memory, whereas the previous version kept only the consolidated data. For most shards this seems like a bad tradeoff: if the unconsolidated data is big enough to cause heavy memory use in the consolidator, it seems likely to use about as much memory when stored as a vec of rows. And the vec of rows can't benefit from lgalloc...
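The tradeoff above hinges on the fact that consolidation sums diffs per row, cancelling matched insert/retract pairs and merging duplicates, so the consolidated form can be far smaller than the raw update stream. A toy `consolidate` (standing in for Persist's real consolidation logic, which this sketch does not reproduce) makes the size difference concrete:

```rust
// Toy consolidation: sum diffs per row, drop rows whose diffs cancel.
// This stands in for Persist's real consolidator, purely for illustration.

use std::collections::BTreeMap;

fn consolidate(updates: Vec<(String, i64)>) -> Vec<(String, i64)> {
    let mut acc: BTreeMap<String, i64> = BTreeMap::new();
    for (row, diff) in updates {
        *acc.entry(row).or_insert(0) += diff;
    }
    // Rows whose diffs sum to zero disappear entirely.
    acc.into_iter().filter(|(_, d)| *d != 0).collect()
}

fn main() {
    // Unconsolidated: 4 updates held in memory; consolidated: 1.
    let updates = vec![
        ("k1".to_string(), 1),
        ("k1".to_string(), -1), // cancels the insert above
        ("k2".to_string(), 1),
        ("k2".to_string(), 1), // merges with the previous update
    ];
    let consolidated = consolidate(updates);
    println!("{:?}", consolidated); // [("k2", 2)]
}
```

For a shard with heavy update churn, the unconsolidated stream can be many times larger than the consolidated result, which is the concern being raised here.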

Have we seen this improve memory use in practice?
