fix(store): track store's contiguous head #239
Conversation
We are getting there
Besides the comments, there are two more cases for head/subscription handling we need to support:
- Getting headers above the contiguous head:
  - Height == 100
  - Append(150)
  - GetByHeight(150) -> no error
- Subscribing for headers above the contiguous head:
  - Height == 100
  - goA: GetByHeight(150) -> blocks
  - Append(150) -> no error
  - goA -> unblocks, no error
We can also do them in a follow up
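The subscription case above (a reader blocked on a height above the contiguous head, woken by `Append`) can be sketched with a toy store. This is a hypothetical illustration of the desired semantics, not go-header's actual `Store` API:

```go
package main

import (
	"fmt"
	"sync"
)

// store is a toy stand-in: GetByHeight blocks until the requested
// height is appended, mirroring the "subscribe above head" case.
type store struct {
	mu      sync.Mutex
	headers map[uint64]string
	waiters map[uint64][]chan string
}

func newStore() *store {
	return &store{
		headers: map[uint64]string{},
		waiters: map[uint64][]chan string{},
	}
}

// GetByHeight returns the header at h, blocking until Append provides it.
func (s *store) GetByHeight(h uint64) string {
	s.mu.Lock()
	if hdr, ok := s.headers[h]; ok {
		s.mu.Unlock()
		return hdr
	}
	ch := make(chan string, 1)
	s.waiters[h] = append(s.waiters[h], ch)
	s.mu.Unlock()
	return <-ch // block until Append wakes us
}

// Append stores a header at any height, even above the contiguous
// head, and wakes every subscriber waiting for that height.
func (s *store) Append(h uint64, hdr string) {
	s.mu.Lock()
	s.headers[h] = hdr
	for _, ch := range s.waiters[h] {
		ch <- hdr
	}
	delete(s.waiters, h)
	s.mu.Unlock()
}

func main() {
	s := newStore()
	done := make(chan string)
	go func() { done <- s.GetByHeight(150) }() // goA blocks: 150 not stored yet
	s.Append(150, "header-150")                // Append above head -> no error
	fmt.Println(<-done)                        // goA unblocks with the header
}
```

Either ordering is safe here: if `Append` runs before the goroutine registers its waiter, `GetByHeight` finds the header directly in the map.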
Coming out as a result of reviewing #239
c6ef4ef to b267831
"height", newHead.Height(), "hash", newHead.Hash(), "err", err) | ||
} | ||
|
||
if err := s.ds.Put(ctx, headKey, b); err != nil { |
Why do we decouple this from the parent context coming from advanceContiguousHead?
From the previous thread I wanted to show why we need to do that. In short: if we entered the `if currHeight > prevHeight` branch, we definitely want to update the headKey (even if the parent ctx is done) to prevent desync.
Ok, so you mean atomicity.
However, I don't get this entirely. We still have a context with a timeout that is going to be canceled after 5 secs. If you removed it here completely and simply passed `context.Background()`, then yes, it would wait as long as needed until the key is updated and achieve what you describe. However, if we set an independent context for 5 secs, it is effectively the same as keeping the parent one with an additional 5 secs on it.
The idea was to create a separate one BECAUSE the parent context can be cancelled BUT we still have something to update.
This is the rare pattern where we don't propagate a context but create a new, independent one.
> BECAUSE parent context can be cancelled BUT we have something to update.

But if the child context cancels on its own after 5 secs, what's the point?
I guess to avoid complete deadlocking... Yeah
Blocking forever is also not the best option, because it can take... an eternity. So, two edge cases.
But if that happens, it means we are in a desynced state, which we will need to recover from.
Actually, I think we have to try advancing on store Start. Imagine that the store terminated without completing the advance, like in the case above; on restart it's going to report the wrong Head until it's Appended again, which is a bug imo.
I agree about the desynced state, because headKey and header flushes are separate things. So, let's propagate the context then and leave it as it is.
The idea regarding Start is definitely for the next PR (if any).
Done.
"height", newHead.Height(), "hash", newHead.Hash(), "err", err) | ||
} | ||
|
||
if err := s.ds.Put(ctx, headKey, b); err != nil { |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Ok, so you mean atomicity.
However, I don't get this entirely. We still have context and timeout that is gonna be canceled after 5 secs. If you remove it here completely and simply pass context.Backgrount()
, then yeah, its gonna wait for as long as needed until the key is updated and achieve what you describe. However, if we set the independent context for 5 secs, it is effectively the same as having it with the parent one with additional 5 secs on it.
Introduce a new unexported Store field which tracks the highest contiguous header observed.

Fixes #201
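The tracking logic the PR description summarizes can be sketched as follows. The type and method names here are illustrative, not the Store's actual code: the contiguous head only advances once every height below a newly appended header has been observed, even when headers arrive out of order:

```go
package main

import "fmt"

// contiguousHead sketches the unexported field the PR introduces:
// head is the highest height h such that every header up to h has
// been seen; pending buffers heights appended above that head.
type contiguousHead struct {
	head    uint64
	pending map[uint64]struct{}
}

func newContiguousHead(init uint64) *contiguousHead {
	return &contiguousHead{head: init, pending: make(map[uint64]struct{})}
}

// Append records a height and advances the contiguous head across
// any previously buffered heights that now form an unbroken run.
func (c *contiguousHead) Append(h uint64) {
	c.pending[h] = struct{}{}
	for {
		if _, ok := c.pending[c.head+1]; !ok {
			return // gap remains; head stays put
		}
		delete(c.pending, c.head+1)
		c.head++
	}
}

func (c *contiguousHead) Height() uint64 { return c.head }

func main() {
	c := newContiguousHead(100)
	c.Append(150)           // gap 101..149 remains
	fmt.Println(c.Height()) // still 100

	for h := uint64(101); h <= 149; h++ {
		c.Append(h) // fill the gap
	}
	fmt.Println(c.Height()) // 150: the buffered 150 is now contiguous
}
```

This matches the review scenario above: after `Append(150)` with `Height == 100`, the reported head must not jump to 150 until the intermediate headers land.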