Provide an API for tree hashing #436
Comments
There's an undocumented internal module for this. There's also a fork of this repo at https://github.com/n0-computer/iroh-blake3, which exposes a public version of those internals. It's probably time we designed a stable, documented API for this into the official crate.
crosslink #82
I'm working on a BLAKE3 tree traversal API to support the logic needed to implement a bao-like incremental producer/consumer: essentially, the implementation math that lets you walk an arbitrary path through a logical BLAKE3 node tree, divorced from any actual data except the total content length. I think it would make sense for it to live in the official blake3 repo, and I'd be open to contributing it if there were sufficient interest.
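To illustrate the kind of "implementation math" involved, here's a minimal sketch of the node-boundary calculation from the BLAKE3 spec's tree layout (1024-byte chunks; the left subtree of any parent spans the largest power-of-two number of chunks strictly less than the parent's total, so the left subtree is always complete). The function names are illustrative, not part of the crate's public API:

```rust
const CHUNK_LEN: u64 = 1024;

/// Largest power of two that is <= x (x must be >= 1).
fn largest_power_of_two_leq(x: u64) -> u64 {
    1u64 << (63 - x.leading_zeros())
}

/// Number of bytes covered by the left child of a node spanning
/// `content_len` bytes (`content_len` must exceed one chunk).
fn left_len(content_len: u64) -> u64 {
    debug_assert!(content_len > CHUNK_LEN);
    // Subtract 1 to reserve at least one byte for the right side,
    // then round the remaining full chunks down to a power of two.
    let full_chunks = (content_len - 1) / CHUNK_LEN;
    largest_power_of_two_leq(full_chunks) * CHUNK_LEN
}

fn main() {
    // 4096 bytes = 4 chunks: the left child gets 2 chunks.
    assert_eq!(left_len(4096), 2048);
    // 4097 bytes = 4 chunks + 1 byte: the left child gets all 4 full chunks.
    assert_eq!(left_len(4097), 4096);
    // A split always leaves at least one byte on the right.
    assert_eq!(left_len(1025), 1024);
    println!("ok");
}
```

Recursing with `left_len` is enough to locate any node's byte range in the logical tree given only the total content length, which is exactly the "divorced from any actual data" property described above.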
The blake3 crate provides some methods for incrementally hashing a byte slice (viz. `update_rayon` and `update_mmap_rayon`), but these lock the user into a specific parallelism framework, and they fundamentally cannot work in situations where multiple separate computers want to collaborate to produce a single valid BLAKE3 hash tree. The algorithm itself is designed around tree hashing as a paradigm, but the crate doesn't seem to expose a public API for interacting with that tree representation, emitting unfinalized hashes, or ingesting unfinalized hashes for inclusion in a larger file.

Of course, users who want these capabilities can simply design their own Merkle tree formats atop blake3, but it would be nice not to have a boundary between spec-compliant BLAKE3 hashes (as emitted by this crate) and bespoke Merkle trees (as emitted by people's homemade formats).
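For concreteness, a hypothetical sketch of the kind of surface being asked for. None of these types or methods exist in the blake3 crate today; every name here is invented for illustration:

```rust
/// Hypothetical sketch only -- not part of the blake3 crate's API.

/// A 32-byte unfinalized chaining value for a chunk-aligned subtree.
pub struct ChainingValue(pub [u8; 32]);

pub trait TreeHasher {
    /// Hash the subtree whose leftmost chunk is `chunk_index`, returning its
    /// unfinalized chaining value. Separate machines could each run this on
    /// their own slice of the input.
    fn hash_subtree(&self, chunk_index: u64, subtree_bytes: &[u8]) -> ChainingValue;

    /// Combine two sibling chaining values into their parent's chaining value.
    fn combine(&self, left: &ChainingValue, right: &ChainingValue) -> ChainingValue;

    /// Finalize the root chaining value into the ordinary BLAKE3 hash, so the
    /// result matches what hashing the whole input at once would produce.
    fn finalize_root(&self, root: &ChainingValue) -> [u8; 32];
}
```

The key property is that `combine` and `finalize_root` operate on chaining values alone, so collaborating machines only need to exchange 32 bytes per subtree, and the final output is still a spec-compliant BLAKE3 hash.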