Sablier is not stateless / HA ready in Kubernetes #484
Comments
Hello @dixneuf19 !
I am aware of these limitations, we should definitely change the chart to be a StatefulSet.
I initially planned to go with Redis but didn't want people to be bothered by setting it up. I do think this project needs to be able to run with a Redis backend (replacing tinykv), and I will look into that in the coming months.

For the Kubernetes operator, I planned on doing it, but mainly for auto-configuration based on known reverse proxies. What did you have in mind exactly? Also, how would you imagine configuring this through CRDs?

For leases, I'm not familiar with that; could you please detail what you have in mind?

Thanks!
Hi,

Regarding the Redis backend, I think it would be a good and easy solution, since there is already a Store interface in the code.

On the operator, it could either be

Anyway, the first solution should be the simplest for the moment!
You can use NATS, a Go system that is much faster than Redis and very easy to integrate via nats.go. Unlike Redis, it can do global super-clusters, so it really scales out. I run it as a global super-cluster across 3 DCs with 3 NATS servers in each DC: zero failure points. Any DC can go down and clients route to the nearest DC automagically, no BGP anycast needed. Redis is really pretty old hat these days, IMHO. Sorry, but just being frank.
Yes, Redis might be a "Maslow's Hammer" for the DevOps community: a familiar and overused tool. The license change, and alternatives such as Valkey, should challenge that a bit. Honestly, as long as the solution is lightweight and easy to operate, I am fine with it.
Describe the bug
Hi, I joined a new team in the process of deploying your great open-source product to automatically stop and start some software stacks running in Kubernetes.
My coworker deployed Sablier as a StatefulSet, with only one replica, a PVC, and the file storage feature activated. This setup effectively maintains its state across restarts, making it somewhat functional.
However, it is quite a sub-optimal setup for Kubernetes. Since it is a StatefulSet with one replica, updating Sablier or just losing the pod means the whole Sablier service becomes unavailable for a while.
The StatefulSet with one replica is necessary because the state is saved in memory in the tinykv store. If we have several Sablier pods with a random load balancer (i.e., what you have with a Deployment and N replicas), the state would be inconsistent between pods, leading to premature stops for some apps.
I find all of this a bit brittle for a good Kubernetes deployment, which would ideally use as many stateless pods as possible with a remote distributed store. Another solution is to have a truly stateful Sablier app with built-in clustering. And a last one would be going all-in on a Kubernetes Operator, using CRDs and leases to achieve HA and leader election.
Anyway, here are my questions:
I think that your tool is a very interesting approach for on-demand environments and a good way to reduce load in cloud environments. I would be happy to help better support this kind of usage on Kubernetes.
Here is some context, but my point is quite agnostic of the version/reverse proxy used.
Context
Sablier version: 1.8.1
Provider: Kubernetes
Reverse proxy: Traefik v2.11
Running as a StatefulSet in Kubernetes
Anyway, thanks for your time on this FOSS software!