blockstore: Defer multi_get_bytes() allocation to caller #4674

Open — wants to merge 1 commit into master
Conversation

@steviez commented Jan 28, 2025

Problem

multi_get_bytes() currently returns owned memory (Vec<u8>), but its callers only require borrowed memory (&[u8]).

Summary of Changes

So, update multi_get_bytes() to return a reference to rocksdb-owned memory; the caller can copy into owned memory if it requires ownership.

Future Work

Similar work can likely be done with the other methods that call .get_bytes(). For example, this one:

agave/ledger/src/blockstore.rs

Lines 2266 to 2268 in e48672c

    pub fn get_data_shred(&self, slot: Slot, index: u64) -> Result<Option<Vec<u8>>> {
        let shred = self.data_shred_cf.get_bytes((slot, index))?;
        let shred = shred.map(ShredData::resize_stored_shred).transpose();

However, the call to ShredData::resize_stored_shred requires owned memory, and as far as I know it is only applicable to legacy data shreds. Additional rework beyond changing the underlying rocksdb type will be necessary, so I plan to tackle those cases in a follow-on PR.

@steviez force-pushed the bstore_multi_get_no_alloc branch from 413b71a to da2fc96 on January 30, 2025