Kubernetes #9

Hi,
Can we run this agent inside a Kubernetes cluster?
I have not actually tried doing this, so I am not sure. Feel free to give it a shot!
Thanks for your PR. I could build the container image from the Dockerfile and have created a deployment configuration for a k8s cluster. Vault is configured with a corresponding AppRole.
Question: is it necessary to run the container as a sidecar to each existing k8s Vault pod, or is it possible to tell the snapshotter to detect the leader? Thanks for your help!
-> This message shows that it came from a follower pod. How many Vault pods do you have? Let's focus on the leader pod's logs.
-> It's not required to run it as a sidecar; you can use a separate Kubernetes Deployment with the correct addr value. Btw, please share your configuration.
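For illustration only, a standalone Deployment for the snapshot agent could look roughly like the sketch below; the image name, namespace, and mount paths are placeholders rather than the manifest actually used in this thread:

```yaml
# Hypothetical Deployment for the snapshot agent; all names and paths are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault-snapshot-agent
  namespace: vault
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vault-snapshot-agent
  template:
    metadata:
      labels:
        app: vault-snapshot-agent
    spec:
      containers:
        - name: snapshot-agent
          image: vault-raft-snapshot-agent:latest   # image built from the repo's Dockerfile
          volumeMounts:
            - name: config
              mountPath: /etc/vault.d               # assumed config location
            - name: snapshots
              mountPath: /backups                   # PVC target for local snapshots
      volumes:
        - name: config
          configMap:
            name: snapshot-agent-config
        - name: snapshots
          persistentVolumeClaim:
            claimName: vault-snapshots
```

With this approach the agent talks to Vault over a cluster-internal Service address instead of sharing a pod with each Vault server.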
Thanks for the clarification. I changed the address to the internal service address, but it still doesn't work. What could be wrong here?
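For reference, a minimal agent configuration of the kind being discussed might look like the sketch below; the address, schedule, credentials, and path are placeholders, and the key names should be double-checked against the project's README for the version in use:

```json
{
  "addr": "http://vault-active.vault.svc.cluster.local:8200",
  "retain": 72,
  "frequency": "1h",
  "role_id": "ROLE_ID_PLACEHOLDER",
  "secret_id": "SECRET_ID_PLACEHOLDER",
  "local_storage": {
    "path": "/backups"
  }
}
```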
@devops-42 can you try updating addr?
If the issue stays the same, try to validate connectivity by getting a shell into the backup pod (kubectl exec), running a request against the Vault address, and showing me the output.
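Such a connectivity check could look like the following; the namespace, deployment name, and address are placeholders, and vault-active is the active-node Service created by the official Vault Helm chart, which may not match this setup:

```sh
# Open a shell in the backup pod (names are placeholders).
kubectl -n vault exec -it deploy/vault-snapshot-agent -- sh

# Inside the pod: hit Vault's health endpoint on the cluster-internal address.
wget -qO- http://vault-active.vault.svc.cluster.local:8200/v1/sys/health
# or, if curl is available in the image:
curl -s http://vault-active.vault.svc.cluster.local:8200/v1/sys/health
```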
My bad, when cleaning up the config file I accidentally deleted the addr entry.
@devops-42 Then, finally, make sure that you're using Raft as storage. Is that correct? Can you show your Vault config?
I do use Raft as storage; here's a redacted output of the Vault config.
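For context, a Vault server running Integrated Storage declares a storage "raft" stanza in its configuration; a minimal, illustrative example (paths and addresses are placeholders, not the redacted config from this thread):

```hcl
# Illustrative Vault server configuration using Raft integrated storage.
storage "raft" {
  path    = "/vault/data"
  node_id = "vault-0"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = true          # TLS would normally be enabled
}

api_addr     = "http://vault-0.vault-internal:8200"
cluster_addr = "http://vault-0.vault-internal:8201"
```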
@devops-42 Alright, let's rerun the backup pod. Does it work? You should tail the Vault logs to see if there is any clue.
It seems that the pod can connect to the leader pod of the Vault cluster; the log output of the leader is as follows:
But when checking the local filesystem of the pod (which has a PVC attached), no snapshot file has been created. Any chance to configure more debugging in the backup pod?
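A quick way to inspect the PVC-backed target directory from outside the pod, assuming the agent's local storage path points at the PVC mount (namespace, deployment name, and path are placeholders):

```sh
# List the snapshot target directory inside the backup pod.
kubectl -n vault exec deploy/vault-snapshot-agent -- ls -lh /backups
```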
@devops-42: Can you check your S3? Any new output from the backup pod?
The backup pod error message stays the same. I could successfully connect from the backup pod to the S3 endpoint (we use MinIO) with a manual request, so I assume that my network setup is correct.
@devops-42: I haven't tried MinIO with this project and I'm not sure whether the current lib (https://github.com/aws/aws-sdk-go/tree/main/service/s3/s3manager) can support MinIO. @Lucretius, can you please confirm that? Anyway, I guess you should adjust the MinIO S3 config accordingly.
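For what it's worth, the aws-sdk-go library itself can talk to MinIO when the endpoint is overridden and path-style addressing is enabled; a minimal illustration of that SDK-level mechanism (endpoint and credentials are placeholders, and this is not the agent's actual code):

```go
// Sketch: building an s3manager uploader against a MinIO endpoint.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func newUploader() (*s3manager.Uploader, error) {
	sess, err := session.NewSession(&aws.Config{
		Region:           aws.String("us-east-1"),
		Endpoint:         aws.String("http://minio.minio.svc.cluster.local:9000"), // placeholder MinIO endpoint
		Credentials:      credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
		S3ForcePathStyle: aws.Bool(true), // MinIO typically requires path-style requests
	})
	if err != nil {
		return nil, err
	}
	return s3manager.NewUploader(sess), nil
}

func main() {
	up, err := newUploader()
	if err != nil {
		log.Fatal(err)
	}
	_ = up // an agent would use the uploader to put the snapshot into the bucket
}
```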
First of all, thanks for your patience :) I started a debug pod to play around with the configuration and the binary, and tried to perform a backup using a (redacted) configuration, with the config file placed in the pod's filesystem.
Judging from the corresponding log output of the Vault leader, there seems to be an issue with the communication with the Vault leader.
@devops-42: Agreed, it failed at the snapshot step.
@luanphantiki
@devops-42 Unfortunately, this part is returned by the vault-api SDK, so there are no more details to see. I have also reproduced your configuration on my side and there is no issue.
It looks like simply a connectivity issue, though.
The problem could be related to the size of the vault.db: my vault.db file is currently over 2 GB. I checked whether there's a timeout issue by creating the snapshot manually.
@Lucretius
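One way such a manual timeout check could look, run from any pod or host that has the vault CLI (the address and paths are placeholders): the CLI honors VAULT_CLIENT_TIMEOUT, so raising it while saving a snapshot helps distinguish a client-side timeout from a connectivity problem with a large vault.db:

```sh
# Manual snapshot with a raised client timeout (address and paths are placeholders).
export VAULT_ADDR=http://vault-active.vault.svc.cluster.local:8200
export VAULT_TOKEN=...                # or authenticate via AppRole first
export VAULT_CLIENT_TIMEOUT=300s      # default is 60s; a 2 GB+ snapshot may need longer
vault operator raft snapshot save /backups/manual.snap
```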
@devops-42 Seems to be a valid issue, but we should move this conversation to #20.
@luanphantiki You're absolutely right. Thx for your help!