Kubernetes 1.30.5 support #23230
@karatkep I've tried to reproduce on Minikube with Kubernetes 1.31.0, but no luck.
@tolusha
@tolusha, as I can see, the issue is that the token is not being refreshed. It is generated for 1 hour, and after that time, the che-dashboard continues to use it despite its expiration. Is there any way to prompt the che-dashboard to refresh it before using it for kube-api calls?
@karatkep
@tolusha,
@tolusha, @ibuziuk, We found the root cause of the issue. In Kubernetes 1.27.9, the token (located at /var/run/secrets/kubernetes.io/serviceaccount/token) is issued for one year, although it is refreshed every hour (more precisely, every 50 minutes). In Kubernetes 1.30.5, the token is issued for one hour and is likewise refreshed every 50 minutes. However, Che (che-dashboard, che, and most likely che-gateway) caches this token at startup and keeps using it. Consequently, there is no problem in Kubernetes 1.27.9, since the token is valid for a year, but in Kubernetes 1.30.5 the problem begins an hour after startup, because the cached token has expired.
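To illustrate the direction a fix could take, here is a minimal sketch that re-reads the projected token file instead of caching it once at startup. The class name and reload interval are illustrative assumptions, not Che's actual implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.time.Instant;

// Sketch: re-read the projected service account token periodically instead of
// caching it once at startup. The kubelet rotates the file roughly every
// 50 minutes, so any re-read interval comfortably below the one-hour lifetime
// keeps the token valid.
public final class ServiceAccountTokenSupplier {

    private static final Path TOKEN_PATH =
        Path.of("/var/run/secrets/kubernetes.io/serviceaccount/token");
    // Assumption: 5 minutes is well below the observed 1h expiry.
    private static final Duration RELOAD_INTERVAL = Duration.ofMinutes(5);

    private String cachedToken;
    private Instant lastRead = Instant.MIN;

    public synchronized String getToken() throws IOException {
        Instant now = Instant.now();
        if (cachedToken == null
                || Duration.between(lastRead, now).compareTo(RELOAD_INTERVAL) > 0) {
            cachedToken = Files.readString(TOKEN_PATH).trim();
            lastRead = now;
        }
        return cachedToken;
    }
}
```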
@karatkep
@tolusha
@tolusha, @ibuziuk, Just to be on the same page: there is absolutely no pressure from my side. I just want to understand the current status and plans regarding this issue. For my part, I have already applied one of the possible workarounds and written a CronJob that restarts the necessary Che pods (sketched below). If other Eclipse Che users are facing or will face the same issue, I am more than willing to share this workaround.
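The actual CronJob is not shown in the thread; the following is a hypothetical sketch of the same idea using the Fabric8 Kubernetes Client, which che-server already depends on. The namespace and label values are assumptions for illustration:

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

// Hypothetical restart job mirroring the CronJob workaround: delete the Che
// pods so their Deployments recreate them with a freshly issued token.
public class RestartChePods {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            for (String component : new String[] {"che", "che-dashboard", "che-gateway"}) {
                client.pods()
                      .inNamespace("eclipse-che") // assumed namespace
                      .withLabel("app.kubernetes.io/component", component) // assumed label
                      .delete();
            }
        }
    }
}
```

Deleting the pods forces their Deployments to recreate them, so each replacement pod starts with a freshly issued service account token.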
@karatkep Thank you for the follow-up and investigation details - #23230 (comment) I'm still wondering whether the token lifetime is configurable on the k8s end in general? The issue has been planned for the next sprint (Nov 20 - Dec 10); however, so far @tolusha has not been able to reproduce it on vanilla minikube. @karatkep contributions from the community are also most welcome if you would like to change or update the caching mechanism in the project ;-)
@ibuziuk, P.S. Frankly speaking, I do not like the option of using a long-lived token - it contradicts security best practices. It seems to me that whoever changed the token lifetime from 1y to 1h took a step in the right direction: short-lived tokens. And in my opinion, a well-written application should not cache the token indefinitely.
Unfortunately updating the |
@karatkep could you please elaborate on what exactly does not work, apart from the errors in the logs? Can you open the dashboard page and navigate to user preferences?
To summarize:
@karatkep my understanding is that so far @vinokurig has not been able to reproduce the error even with the short-lived token. Steps to reproduce would be highly appreciated. Basically, all k8s interactions in che-server happen through the Fabric8-Kubernetes-Client, and we plan to bump it to version 7.0.0 next sprint.
I understand that the Kubernetes Client in use is 6.10.0. In that case, yes, there's a TokenRefreshInterceptor that reloads the config when there is an auth client error in the HTTP response. The interceptor logic will work and reload the Config as long as the Config was not provided manually.
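A minimal sketch of that distinction, assuming Fabric8 6.x and the standard in-cluster defaults; this is illustrative, not Che's actual wiring:

```java
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TokenRefreshExample {
    public static void main(String[] args) throws IOException {
        // Auto-configured client: Fabric8 builds the Config from the pod
        // environment itself, so TokenRefreshInterceptor can reload the
        // rotated service account token after an auth error.
        try (KubernetesClient autoClient = new KubernetesClientBuilder().build()) {
            System.out.println(autoClient.getMasterUrl());
        }

        // Manually provided Config: the token below is read once, and per the
        // comment above the interceptor will not refresh it - reproducing the
        // stale-token behavior described in this issue.
        String startupToken = Files.readString(
            Path.of("/var/run/secrets/kubernetes.io/serviceaccount/token")).trim();
        Config manual = new ConfigBuilder()
            .withMasterUrl("https://kubernetes.default.svc")
            .withOauthToken(startupToken)
            .build();
        try (KubernetesClient manualClient =
                 new KubernetesClientBuilder().withConfig(manual).build()) {
            System.out.println(manualClient.getMasterUrl());
        }
    }
}
```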
Hello @ibuziuk, @vinokurig. Please allow me to gather more details regarding this case. I will share them later today or tomorrow. |
@vinokurig, the dashboard issue arises when a user attempts to start the devworkspace. Please see the screenshot below: The endpoint
Hello @vinokurig, just wanted to check if you need anything else from my side to unblock your investigation. |
Hello @karatkep, sorry for the late response, I managed to reproduce the unauthorized error on dashboard, investigating ... |
@karatkep could you please confirm that the issue is fixed with 7.97.0 release? |
@ibuziuk, We started receiving the From
From
@karatkep
@tolusha,
I restarted the |
Do you mean that Che stops working after an hour? Can you access the Dashboard and start a workspace?
Did not see anything unusual after the service account token refresh in the |
As I mentioned above, I see the following error in
@karatkep Can you access the Dashboard and start a workspace despite that?
@vinokurig, yes, I can access the Dashboard, see all workspaces, and start a workspace. I am successfully redirected to the 'Starting workspace' page, but once all steps are completed, I am redirected to the IDE where I encounter a 418 error:
Looks like this has become an editor issue. @dkwon17 WDYT?
Still cannot reproduce; the workspace starts even after the service account token refresh in the
@karatkep OpenShift console => Workloads => Pods => Terminal for |
@RomanNikitenko, we use AKS (Kubernetes). Could you please provide the name of the pod?
@vinokurig @karatkep In your case it could be another container name - it depends on your devfile and the container that is used for starting the editor (the first container in your devfile or the container with
@karatkep to find the container logs you should select the user's pod: |
@vinokurig just one note: the editor entrypoint's logs can be found in the corresponding file using a terminal, as I described in #23230 (comment)
Summary
Dear Community,
Could you please help me verify if Eclipse Che 7.93.0 supports Kubernetes 1.30.5? The che-dashboard and che pods stopped working when our Kubernetes cluster was updated to version 1.30.5.
Here is a sample of the error in the che-dashboard:
The same issue affects the che pod. It appears that both lost access to the Kubernetes API after the upgrade to version 1.30.5.
ServiceAccounts, ClusterRoles, and ClusterRoleBindings are in place for both the che-dashboard and che pods.