Hello.
I'm on K8s 1.29 (an RKE2 cluster), running kube-vip for control-plane HA only, and I'm seeing a lot of pod restarts that translate into repeated losses of access to the API, which is quite annoying.
The deployment is rather simple. Values are:
The DaemonSet runs, as expected, on the control-plane nodes only. The pods look like:
kube-vip-g7d2z 1/1 Running 9 (2m26s ago) 66m
kube-vip-h5vqt 1/1 Running 6 (10m ago) 66m
kube-vip-nkhpr 1/1 Running 5 (2m13s ago) 66m
The logs show:
level=fatal msg="lost leadership, restarting kube-vip"
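For context, my understanding is that kube-vip's leader election can be tuned through environment variables on the DaemonSet container. The sketch below shows how relaxed timings might look; the variable names are from my reading of the kube-vip docs and the values are illustrative, not my actual config:

```yaml
# Hypothetical excerpt from the kube-vip DaemonSet container spec.
# Longer lease/renew windows make leadership more tolerant of brief
# API-server slowness, at the cost of slower failover.
env:
  - name: vip_leaseduration   # seconds the leader lease is considered held
    value: "15"
  - name: vip_renewdeadline   # seconds the leader has to renew before giving up
    value: "10"
  - name: vip_retryperiod     # seconds between renewal attempts
    value: "2"
```

If the restarts correlate with API-server latency spikes on the control-plane nodes, widening these windows might be one thing to try, but I'd like to understand the root cause first.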
Any clue?