Chisel client keeps getting connection refused when connecting to exitnode #152
Could you show what deployment/pod the operator produced?
Hello, the manifests are essentially what is described in the readme. The relevant secret is in the chisel namespace. ExitNodeProvisioner.yaml:
ExitNode.yaml:
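For reference, README-style manifests for a DigitalOcean setup look roughly like the sketch below; the apiVersion, field names, annotation key, and values are assumptions to check against the chisel-operator README, not the exact files used here.

```yaml
# Rough sketch only; verify field names against the chisel-operator README.
apiVersion: chisel-operator.io/v1
kind: ExitNodeProvisioner
metadata:
  name: digitalocean
  namespace: chisel
spec:
  DigitalOcean:
    auth: digitalocean-auth   # name of the secret holding the DO API token (assumed)
    region: ams3              # any valid DO region slug
---
apiVersion: chisel-operator.io/v1
kind: ExitNode
metadata:
  name: my-exit-node
  namespace: chisel
  annotations:
    # assumed annotation key linking this node to the provisioner above
    chisel-operator.io/exit-node-provisioner: digitalocean
spec:
  host: ""   # left empty so the provisioner allocates the droplet's IP
  port: 9090
```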
Trying to connect via the DigitalOcean Droplet -> Access -> Droplet Console (a web console): I can't find any audit history in DigitalOcean other than a "created" event. Edit:
Just saw you asked for the spec of the pod produced:

Edit:
Interestingly, DigitalOcean sent me an email with login credentials for the new droplet.
I'd like the pod spec in YAML, i.e. what CLI args and env vars it produced; the Deployment is also fine. Edit: the CLI args are there, but what about the environment variables?
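To answer the environment-variable question, one way to dump just the env section of the generated Deployment is a jsonpath query like the one below (deployment and namespace names are placeholders):

```bash
# Print only the container env vars from the operator-generated deployment
kubectl get deployment <chisel-client-deployment> -n <namespace> \
  -o jsonpath='{.spec.template.spec.containers[*].env}'
```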
As I now realize is apparent:
It seems the cloud-init generator is creating a file: I tried SSHing in via chisel@ip with the password found in the secret, and I got an auth failure.
Actually, this might be related to #141
That is normal behavior. We deploy a bare cloud-init configuration which, in DO's case, only lets you log in with the root credentials provided in their email. We rely on the
Does the pod try to import the credential secret? Also, could you try the #142 branch as the image?
Hmm, I'm going to wipe my cluster and try a fresh installation. I'll get back to you in an hour or so...
Yeah, I can now confirm that a fresh install on a fresh cluster is correctly provisioning an ExitNode on DigitalOcean and connecting to it. I started with "operator-provisioned auto-allocated", then moved to "operator-provisioned manually-allocated". When I was looking at the log spam, I did see something along the lines of "warning: no auth password, this is a security risk". I think that was when I added the parameter.
So, this has cropped up again.
Steps I've taken to try and remedy:

- I tried power cycling the droplet from the DO dashboard, and still had SSH auth failures.
- I SSHd into the droplet, and did a
- Correct me if I'm wrong, but I don't believe Ubuntu uses the
- I tried
- I then

What is strange is that the WebSocket seems to connect fine. I am building the #156 0.5 staging PR. Before wiping, I tried getting chisel-operator to auto-provision everything.
And the operator sticks on "Waiting for exit node to be provisioned". UPDATE: Still cannot connect after a cluster wipe & reinstall.
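When the operator sticks on a message like this, the exit node's status and the operator's own logs are usually the quickest things to check; the resource, deployment, and namespace names below are placeholders.

```bash
# Inspect the ExitNode's status and recent events
kubectl describe exitnode <exit_node_name> -n <namespace>

# Follow the operator's logs while it tries to provision
kubectl logs deployment/<chisel-operator-deployment> -n <operator-namespace> -f
```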
I still couldn't reproduce this in 0.5, so I don't even know what went wrong :/ I never really tried Envoy, and we don't really support this use case that much since you're supposed to just expose a service directly to the cloud without MetalLB or Envoy, using something like NGINX or Traefik instead.
Should I try a different cloud provider?
The issue isn't the cloud provider but how Envoy is redirecting things; I don't know what it's doing to cause this. You should try using another proxy instead of Envoy for now, something like Traefik or NGINX.
The systemd service explicitly loads from the
This is not a DigitalOcean issue but some kind of issue with your networking setup that doesn't let you connect to port 9090 of the VPS node. I cannot reproduce this in any way on a fresh install without Envoy and MetalLB inside a k3d container.
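A quick way to test whether port 9090 on the VPS is reachable from outside at all (the IP below is a placeholder):

```bash
# TCP-level check: does anything answer on 9090?
nc -vz <droplet-ip> 9090

# HTTP-level check: chisel serves its WebSocket endpoint over plain HTTP here
curl -v http://<droplet-ip>:9090/
```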
It's never meant to be in the global environment, see
Chisel does not use the system's PAM authentication. The SSH transport it advertises is on WebSocket port 9090, which carries an SSH transport layer inside. You do not have to create a dedicated user for chisel. Please export your generated deployment/pod and the service in YAML format. I cannot help you if I do not know what it's actually outputting.

```
kubectl get deployment <deployment_name> -n <namespace> -o yaml
kubectl get service <envoy_generated_service> -n <namespace> -o yaml
kubectl get exitnode <exit_node_name> -n <namespace> -o yaml
```

Also, Chisel Operator's ExitNodes are namespaced unless specified otherwise, so you would need to put the DO provisioner inside the
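Since namespacing is the point being raised here, a quick way to see where the ExitNode and provisioner objects actually live (assuming the CRD kinds resolve under these names) is:

```bash
# List exit nodes and provisioners across all namespaces
kubectl get exitnode,exitnodeprovisioner -A
```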
Hello,
Anyway, I have wiped my cluster.
After wiping and redeploying everything with these 3 changes, the tunnel popped right up and connected (also, the log spam has cleared). I am going to try changing the deployment order so the ExitNode and ExitNodeProvisioner get deployed at the same time as the gateway, to see if the issues resurface. Anything else you would like me to test?
Yes. Please give me the definitions. Also, 0.5.0 has been released, so use that version instead.
Hi,
I'm having trouble using chisel-operator in my Kubernetes cluster. I installed it with `kubectl apply -k https://github.com/FyraLabs/chisel-operator`.
My exit node is a VPS of mine which hosts the chisel server and works fine. I even tried `chisel client` from my local machine to verify that the chisel server is good to go, and it is. I applied a simple hello-world pod in the cluster to test the operator functionality. I then applied the LoadBalancer svc, and the operator created a pod for the chisel client, but I keep getting this log from that pod:
I tried running the chisel client command without the operator, inside a random pod in the cluster, and it worked. But with the operator, I keep getting this error.
This is my chisel server config:
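For comparison, a chisel server started for reverse tunnelling typically looks something like the following; the flags and credentials are illustrative, not the actual config used here.

```bash
# Illustrative only; --reverse is required for reverse tunnels,
# and the credentials must match what the ExitNode's auth secret contains.
chisel server --port 9090 --reverse --auth "user:password"
```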
And these are the exitnode and svc manifests:
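As a rough illustration of the shape these manifests usually take (not the actual ones used here; the auth field name and all values are assumptions to verify against the README):

```yaml
apiVersion: chisel-operator.io/v1
kind: ExitNode
metadata:
  name: my-vps
spec:
  host: "203.0.113.10"   # public IP of the VPS running chisel server
  port: 9090             # port the chisel server listens on
  auth: chisel-auth      # secret with the chisel credentials (assumed field name)
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: LoadBalancer     # the operator picks up LoadBalancer services
  selector:
    app: hello-world
  ports:
    - port: 80
      targetPort: 8080
```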
Can someone help me?