When a ZooKeeper cluster is not running because of some error, decreasing replicas will still delete its pods automatically.
The pod runs zookeeperTeardown.sh, whose connection to ZooKeeper fails, so the member is never removed from the ensemble.
The pod is therefore deleted without the ZooKeeper configuration in zoo.cfg being updated.
@stop-coding Could you please let us know how to reproduce this issue? After this happens, once ZooKeeper starts running again, is the replica set not updated correctly?
Steps to reproduce (example commands are sketched below):
1. Delete the zk-1 and zk-2 pods so that the ZooKeeper cluster can no longer serve requests.
2. Run "kubectl edit zk" and change replicas to 1.
3. Wait a while; the operator scales the cluster down to 1 replica.
4. Now zk-0 will never run again until zoo.cfg is corrected by hand.
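A minimal command sketch of these steps. It assumes the ZookeeperCluster resource can be addressed as zk, matching "kubectl edit zk" above and the pod names zk-0, zk-1, zk-2; adjust for your cluster:

# Make the ensemble unable to serve requests, as in the report above.
kubectl delete pod zk-1 zk-2

# Scale down to a single replica (same effect as "kubectl edit zk"
# and setting spec.replicas to 1).
kubectl patch zk zk --type merge -p '{"spec":{"replicas":1}}'

# The operator still deletes the extra pods, but zookeeperTeardown.sh
# cannot reach the ensemble, so the removed members stay in zoo.cfg
# and zk-0 never becomes ready again.
kubectl get pods -w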
I think removing a pod needs to be atomic: the teardown has to both connect successfully and complete the reconfig before the pod is actually deleted...
Do you have a better suggestion?
On Oct 13, 2021, stop-coding changed the title to "If zookeeper is not running, then decreasing replicas will make cluster of zookeeper Unrecoverable".
Description
Scaling down while the ZooKeeper cluster is not running deletes pods even though zookeeperTeardown.sh cannot connect, so the removed members are never taken out of the ensemble and zoo.cfg is left stale.
Importance
must-have
Location
ZNODE_PATH="/zookeeper-operator/$CLUSTER_NAME"
# Ask the ensemble for the desired cluster size; when no ZooKeeper
# server is reachable this call fails and the remove below never runs.
CLUSTERSIZE=$(java -Dlog4j.configuration=file:"$LOG4J_CONF" -jar /root/zu.jar sync $ZKURL $ZNODE_PATH)
echo "CLUSTER_SIZE=$CLUSTERSIZE, MyId=$MYID"
if [[ -n "$CLUSTERSIZE" && "$CLUSTERSIZE" -lt "$MYID" ]]; then
  # CLUSTERSIZE < MYID means this member is being scaled away, so it
  # removes itself from the ensemble configuration.
  java -Dlog4j.configuration=file:"$LOG4J_CONF" -jar /root/zu.jar remove $ZKURL $MYID
  echo $?
fi
Suggestions for an improvement
Fix this in zookeepercluster_controller.go:reconcileStatefulSet: when the cluster size decreases, do the reconfig remove there, before the StatefulSet is scaled down and the pod is deleted (see the sketch below).
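A minimal sketch of the intended ordering, written as the equivalent manual shell steps rather than the actual Go change in reconcileStatefulSet; the member id 2, the resource name zk, and the reuse of zu.jar from the teardown script above are illustrative assumptions:

# 1. While the ensemble still has quorum, remove the departing member
#    (here myid 2) from the ensemble configuration.
java -Dlog4j.configuration=file:"$LOG4J_CONF" -jar /root/zu.jar remove $ZKURL 2

# 2. Only if that reconfig succeeds, lower spec.replicas so the
#    StatefulSet scales down and the pod is deleted.
kubectl patch zk zk --type merge -p '{"spec":{"replicas":2}}'

# If step 1 fails (for example the cluster is unreachable), stop here
# instead of deleting the pod, so zoo.cfg never drifts from the pods.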