Workspace Pod goes into a stop/start loop when the devfile includes a ConfigMap Volume Mount #23011
Comments
@cgruver Thanks for reporting this. I've confirmed that this is a DevWorkspace Operator bug that needs to be fixed.
@cgruver I looked into this further and found out what was going on: the configmap volume defined in your pod-overrides was missing the `defaultMode` field. On the cluster, Kubernetes fills in the default (`420`, i.e. `0644`), so the live deployment never matches the expected spec and the workspace is endlessly reconciled. The relevant diff from the DWO logs:

```
{"level":"info","ts":"2024-06-28T13:51:55-04:00","logger":"controllers.DevWorkspace","msg":
"Diff: &v1.Deployment{
    ... // 2 ignored fields
    Spec: v1.DeploymentSpec{
        Replicas: &1,
        Selector: &{MatchLabels: {\"controller.devfile.io/devworkspace_id\": \"workspace88328b03fee14dd3\"}},
        Template: v1.PodTemplateSpec{
            ObjectMeta: {Name: \"workspace88328b03fee14dd3\", Namespace: \"devworkspace-controller\", Labels: {\"controller.devfile.io/creator\": \"\", \"controller.devfile.io/devworkspace_id\": \"workspace88328b03fee14dd3\", \"controller.devfile.io/devworkspace_name\": \"plain-devworkspace\"}},
            Spec: v1.PodSpec{
                Volumes: []v1.Volume(Inverse(cmpopts.SortSlices, []v1.Volume{
                    {Name: \"workspace-metadata\", VolumeSource: {ConfigMap: &{LocalObjectReference: {Name: \"workspace88328b03fee14dd3-metadata\"}, DefaultMode: &420, Optional: &true}}},
                    {
                        Name: \"demo-config-map\",
                        VolumeSource: v1.VolumeSource{
                            ... // 16 identical fields
                            FC: nil,
                            AzureFile: nil,
                            ConfigMap: &v1.ConfigMapVolumeSource{
                                LocalObjectReference: {Name: \"my-configmap\"},
                                Items: {{Key: \"demo-txt\", Path: \"demo.txt\"}},
-                               DefaultMode: nil,
+                               DefaultMode: &420,
                                Optional: nil,
                            },
                            VsphereVolume: nil,
                            Quobyte: nil,
                            ... // 8 identical fields
                        },
                    },
                    {Name: \"claim-devworkspace\", VolumeSource: {PersistentVolumeClaim: &{ClaimName: \"claim-devworkspace\"}}},
                })),
                InitContainers: nil,
                Containers: {{Name: \"web-terminal\", Image: \"quay.io/wto/web-terminal-tooling:next\", Command: {\"tail\", \"-f\", \"/dev/null\"}, Env: {{Name: \"PROJECTS_ROOT\", Value: \"/projects\"}, {Name: \"PROJECT_SOURCE\", Value: \"/projects\"}, {Name: \"DEVWORKSPACE_COMPONENT_NAME\", Value: \"web-terminal\"}, {Name: \"DEVWORKSPACE_NAMESPACE\", Value: \"devworkspace-controller\"}, ...}, ...}},
                ... // 3 ignored and 33 identical fields
            },
        },
        Strategy: {Type: \"Recreate\"},
        MinReadySeconds: 0,
        ... // 2 ignored and 1 identical fields
    },
    ... // 1 ignored field
}
```

Specifying the `defaultMode` field in your configmap volume resolves this endless reconcile loop, e.g.:

```yaml
- attributes:
    container-overrides:
      volumeMounts:
        - mountPath: /projects/config-map
          name: demo-config-map
    pod-overrides:
      spec:
        volumes:
          - configMap:
              defaultMode: 256
              items:
                - key: demo-txt
                  path: demo.txt
              name: my-configmap
            name: demo-config-map
```

BTW, I was able to get the cluster vs. expected spec diff in the DWO logs by setting

@cgruver do you think this is an acceptable resolution for this issue? I don't think there's much we can do on the DWO side to prevent issues like this, as it's up to the user to ensure the pod and container override fields won't conflict with the cluster's defaults.
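For reference, Kubernetes expresses `defaultMode` as a plain integer, so manifests usually carry the decimal form of an octal permission: the `420` the operator applies by default is octal `0644`, and the `256` in the workaround above is octal `0400`. A quick sketch of the conversion (labels are illustrative):

```python
# Kubernetes ConfigMap volume defaultMode values are plain integers;
# octal permission strings must be converted to decimal when written
# as decimal literals in YAML.
modes = {
    "0644 (rw-r--r--)": 0o644,  # value Kubernetes defaults to when defaultMode is unset
    "0400 (r--------)": 0o400,  # value used in the workaround above
}
for label, value in modes.items():
    print(f"{label} -> defaultMode: {value}")
```

Running this prints `420` for `0644` and `256` for `0400`, matching the two values seen in the diff and the fixed devfile.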
THANK YOU!
@cgruver my pleasure 🥳 thanks for closing the issue :)
Describe the bug
When attempting to create a workspace with a ConfigMap embedded in the workspace and mounted as a volume, the workspace pod goes into a stop/start loop.
Note: this does not follow the documented approach for ConfigMaps - https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.14/html/user_guide/using-credentials-and-configurations-in-workspaces#mounting-configmaps
This method instead creates a workspace with a workspace-specific ConfigMap mounted.
Che version
7.86
Steps to reproduce
Create a workspace from the following devfile:
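A minimal sketch of a devfile with this shape (component name, metadata name, and memory limit are illustrative; it assumes `my-configmap` already exists in the namespace, and it omits `defaultMode`, which is what triggers the loop):

```yaml
schemaVersion: 2.2.0
metadata:
  name: configmap-volume-demo   # illustrative name
components:
  - name: web-terminal
    attributes:
      container-overrides:
        volumeMounts:
          - mountPath: /projects/config-map
            name: demo-config-map
      pod-overrides:
        spec:
          volumes:
            - name: demo-config-map
              configMap:
                name: my-configmap   # assumed to already exist
                items:
                  - key: demo-txt
                    path: demo.txt
                # defaultMode omitted: the cluster defaults it to 420,
                # so the live pod spec never matches the desired spec
    container:
      image: quay.io/wto/web-terminal-tooling:next
      memoryLimit: 512Mi
```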
You will observe that the workspace pod goes into a stop-start loop.
The ConfigMap is correctly created.
The volume is configured in the Pod.
It looks as though the `che-code-injector` init-container is failing, but the pod does not go into a crash loop.
Expected behavior
The ConfigMap is mounted at the mount point specified by the container-overrides
Runtime
OpenShift
Screenshots
Installation method
OperatorHub
Environment
macOS
Eclipse Che Logs
No response
Additional context
No response