In the lab UDF BP, the `gen_kubeconfig.sh` script queries the UDF metadata API for the access method of the K8s API. It then uses `openssl` to connect to that endpoint, fetch its certificates, and embed them in the generated kubeconfig. Unfortunately, the UDF team removed the ability for a component (VM) in a deployment to connect to the access method proxy, so the script can no longer retrieve the needed certificates.
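For reference, the fetch-and-embed step presumably looks something like this (a minimal sketch, not copied from the actual script; `ACCESS_HOST`, the cluster name, and the kubeconfig layout are assumptions):

```bash
# Sketch of the original approach (hypothetical names; the real script may differ).
# ACCESS_HOST stands for the public hostname of the K3s API access method.
ACCESS_HOST=access-method.example.com   # hypothetical placeholder

# Pull the cert chain presented by the access method proxy and base64-encode it:
CA=$(openssl s_client -connect "$ACCESS_HOST:443" -showcerts </dev/null 2>/dev/null \
  | sed -ne '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' \
  | base64 -w 0)

# Embed it in the generated kubeconfig (contexts and user credentials omitted):
cat > kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- name: udf-k3s
  cluster:
    server: https://$ACCESS_HOST
    certificate-authority-data: $CA
EOF
```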
As a result, the script currently fails, and the old, stale kubeconfig is still the only one available for download. This is likely the kubeconfig that was valid only before the blueprint was shut down and nominated.
Possible workaround: generate the kubeconfig anyway by removing the part of the script that attempts to download the certificates, and have users fetch the certificates from the access method proxy themselves and embed them into the kubeconfig they downloaded.
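Concretely, the manual steps could look something like this (a sketch; the access method hostname is a placeholder, and the cluster name `default` is an assumption about how the generated kubeconfig names its cluster):

```bash
# Run on your own machine, not inside the deployment.
# 1. Fetch the cert chain the access method proxy presents (hostname is a placeholder):
openssl s_client -connect <k3s-access-method-host>:443 -showcerts </dev/null 2>/dev/null \
  | sed -ne '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > udf-ca.pem

# 2. Embed it into the downloaded kubeconfig ("default" cluster name is an assumption):
kubectl --kubeconfig ./kubeconfig config set-cluster default \
  --certificate-authority=udf-ca.pem --embed-certs=true
```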
We tried fetching the certificate from the access method's internal IP and port instead:

```bash
# Get the internal IP and port so we can fetch the UDF certificate
# (UDF components can no longer connect directly to access method hosts)
INTERNAL_IP=$(curl -s metadata.udf/deployment | jq -r '.deployment.components[] | select(.name == "k3s") | .accessMethods.https[] | select(.label == "K3s API") | .internalIp')
INTERNAL_PORT=$(curl -s metadata.udf/deployment | jq -r '.deployment.components[] | select(.name == "k3s") | .accessMethods.https[] | select(.label == "K3s API") | .internalPort')

# Get the UDF Access Method's CA and cert chain
CA=$(openssl s_client -connect "$INTERNAL_IP:$INTERNAL_PORT" -showcerts 2>&1 </dev/null | sed -ne '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' | base64 -w 0)
```
This did not work, however: the server at `$INTERNAL_IP:$INTERNAL_PORT` presents the wrong certificate. It returns one issued by what appears to be an ephemeral CA created for the UDF component, whereas what we actually need is the certificate for udf.com.
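The mismatch is easy to confirm by inspecting the certificate the internal endpoint serves (same variables as the snippet above):

```bash
# Show the subject and issuer of the leaf certificate at the internal endpoint;
# here the issuer is the ephemeral per-component CA rather than the udf.com chain.
openssl s_client -connect "$INTERNAL_IP:$INTERNAL_PORT" </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```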