I have a two-node Plumtree cluster, [email protected] and [email protected]. When I execute plumtree_peer_service:leave([]) on p1, that Erlang node terminates. Afterwards I still see this in the logs of [email protected]:
08:06:57.917 [debug] started plumtree_metadata_manager exchange with '[email protected]' (<0.370.0>)
08:07:07.918 [debug] started plumtree_metadata_manager exchange with '[email protected]' (<0.376.0>)
08:07:17.919 [debug] started plumtree_metadata_manager exchange with '[email protected]' (<0.381.0>)
08:07:27.920 [debug] started plumtree_metadata_manager exchange with '[email protected]' (<0.386.0>)
08:07:37.921 [debug] started plumtree_metadata_manager exchange with '[email protected]' (<0.391.0>)
08:07:47.922 [debug] started plumtree_metadata_manager exchange with '[email protected]' (<0.396.0>)
So it seems some state is not properly cleaned up. I believe the problem is in the update callback of plumtree_broadcast: https://github.com/helium/plumtree/blob/master/src/plumtree_broadcast.erl#L278. It only sets the all_members state field to CurrentMembers when there are new cluster members, not when any have been removed, which I think is an error. As a result, the all_members state field keeps a reference to [email protected] until either p1 comes back or some other node joins.
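For what it's worth, the stale entry can be inspected directly on p2 (assuming the broadcast server is a gen_server registered locally under its module name, which is what I believe plumtree does):

```erlang
%% On '[email protected]', after p1 has left: dump the plumtree_broadcast
%% gen_server state and look at the all_members field, which should no
%% longer contain '[email protected]' but (per the logs above) still does.
sys:get_state(plumtree_broadcast).
```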
If this analysis seems correct, I'll be happy to make a PR for this.
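Roughly, the change I have in mind looks like the sketch below. This is a standalone illustration of the membership-diff logic, not the actual plumtree_broadcast code; names other than all_members are made up, and the real callback presumably also maintains the eager/lazy peer sets, which would need the same treatment.

```erlang
-module(membership_update_sketch).
-export([update_members/2]).

%% Hypothetical sketch of the fix: rather than only reacting to newly
%% added members, rebuild the member set from the current membership
%% view so that nodes which have left (e.g. '[email protected]')
%% are dropped as well.
-record(state, {all_members :: ordsets:ordset(node())}).

update_members(CurrentMembersList, State = #state{all_members = AllMembers}) ->
    CurrentMembers = ordsets:from_list(CurrentMembersList),
    New     = ordsets:subtract(CurrentMembers, AllMembers),
    Removed = ordsets:subtract(AllMembers, CurrentMembers),
    case {New, Removed} of
        {[], []} ->
            %% Membership unchanged; keep the existing state.
            State;
        _ ->
            %% Always reset all_members to the current view, so the
            %% periodic exchange no longer targets departed nodes.
            State#state{all_members = CurrentMembers}
    end.
```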