
Workspace Pod goes into a stop/start loop when the devfile includes a ConfigMap Volume Mount #23011

Closed
cgruver opened this issue Jun 18, 2024 · 4 comments
Labels
- area/devworkspace-operator
- engine/devworkspace — Issues related to Che configured to use the devworkspace controller as workspace engine.
- kind/bug — Outline of a bug; must adhere to the bug report template.
- team/B — This team is responsible for the Web Terminal, the DevWorkspace Operator and the IDEs.

Comments


cgruver commented Jun 18, 2024

Describe the bug

When creating a workspace whose devfile embeds a ConfigMap that is mounted as a volume, the workspace pod goes into a stop/start loop.

Note: this does not follow the documented approach for ConfigMaps - https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.14/html/user_guide/using-credentials-and-configurations-in-workspaces#mounting-configmaps

The method here instead creates a workspace with a workspace-specific ConfigMap mounted.

Che version

7.86

Steps to reproduce

Create a workspace from the following devfile:

schemaVersion: 2.2.0
metadata:
  name: config-map-test
components:
- name: cm-config-volume
  openshift:
    deployByDefault: true
    inlined: |
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: my-configmap
      data:
        demo-txt: |
          key1=value1
          key2=value2
- name: my-container
  attributes:
    pod-overrides:
      spec:
        volumes: 
        - name: demo-config-map
          configMap:
            name: my-configmap
            items:
            - key: demo-txt
              path: demo.txt
    container-overrides:
      volumeMounts:
      - mountPath: /projects/config-map
        name: demo-config-map
  container:
    image: quay.io/cgruver0/che/dev-tools:latest
    memoryLimit: 1024Mi
    mountSources: true
1. You will observe that the workspace pod goes into a stop-start loop.

  2. The ConfigMap is correctly created.

  3. The volume is configured in the Pod.

4. It looks as though the che-code-injector init-container is failing, but the pod does not go into a crash loop.

Expected behavior

The ConfigMap is mounted at the mount point specified by the container-overrides.

Runtime

OpenShift

Screenshots

(Screenshot attached: 2024-06-18, 12:53 PM)

Installation method

OperatorHub

Environment

macOS

Eclipse Che Logs

No response

Additional context

No response

@cgruver cgruver added the kind/bug Outline of a bug - must adhere to the bug report template. label Jun 18, 2024
@che-bot che-bot added the status/need-triage An issue that needs to be prioritized by the curator responsible for the triage. See https://github. label Jun 18, 2024
@AObuchow

@cgruver Thanks for reporting this. I've confirmed that this is a DevWorkspace Operator bug that needs to be fixed.

@AObuchow AObuchow added engine/devworkspace Issues related to Che configured to use the devworkspace controller as workspace engine. area/devworkspace-operator team/B This team is responsible for the Web Terminal, the DevWorkspace Operator and the IDEs. and removed status/need-triage An issue that needs to be prioritized by the curator responsible for the triage. See https://github. labels Jun 18, 2024
@AObuchow AObuchow moved this to 📅 Planned for this Sprint in Eclipse Che Team B Backlog Jun 27, 2024
@AObuchow

@cgruver I looked into this further and found out what was going on: the configmap volume defined in your pod-overrides was missing the defaultMode field, which the cluster then added to the pod spec automatically. Since this field was modified on the cluster, DWO noticed that the expected spec you provided differed from the spec on the cluster, and it would endlessly reconcile in an attempt to make the cluster spec match the expected spec.

{"level":"info","ts":"2024-06-28T13:51:55-04:00","logger":"controllers.DevWorkspace","msg":
"Diff:   &v1.Deployment{
    ... // 2 ignored fields
    Spec: v1.DeploymentSpec{
      Replicas: &1,
      Selector: &{MatchLabels: {\"controller.devfile.io/devworkspace_id\": \"workspace88328b03fee14dd3\"}},
      Template: v1.PodTemplateSpec{
        ObjectMeta: {Name: \"workspace88328b03fee14dd3\", Namespace: \"devworkspace-controller\", Labels: {\"controller.devfile.io/creator\": \"\", \"controller.devfile.io/devworkspace_id\": \"workspace88328b03fee14dd3\", \"controller.devfile.io/devworkspace_name\": \"plain-devworkspace\"}},
        Spec: v1.PodSpec{
          Volumes: []v1.Volume(Inverse(cmpopts.SortSlices, []v1.Volume{
            {Name: \"workspace-metadata\", VolumeSource: {ConfigMap: &{LocalObjectReference: {Name: \"workspace88328b03fee14dd3-metadata\"}, DefaultMode: &420, Optional: &true}}},
            {
              Name: \"demo-config-map\",
              VolumeSource: v1.VolumeSource{
                ... // 16 identical fields
                FC:        nil,
                AzureFile: nil,
                ConfigMap: &v1.ConfigMapVolumeSource{
                  LocalObjectReference: {Name: \"my-configmap\"},
                  Items:                {{Key: \"demo-txt\", Path: \"demo.txt\"}},
-                 DefaultMode:          nil,
+                 DefaultMode:          &420,
                  Optional:             nil,
                },
                VsphereVolume: nil,
                Quobyte:       nil,
                ... // 8 identical fields
              },
            },
            {Name: \"claim-devworkspace\", VolumeSource: {PersistentVolumeClaim: &{ClaimName: \"claim-devworkspace\"}}},
          })),
          InitContainers: nil,
          Containers:     {{Name: \"web-terminal\", Image: \"quay.io/wto/web-terminal-tooling:next\", Command: {\"tail\", \"-f\", \"/dev/null\"}, Env: {{Name: \"PROJECTS_ROOT\", Value: \"/projects\"}, {Name: \"PROJECT_SOURCE\", Value: \"/projects\"}, {Name: \"DEVWORKSPACE_COMPONENT_NAME\", Value: \"web-terminal\"}, {Name: \"DEVWORKSPACE_NAMESPACE\", Value: \"devworkspace-controller\"}, ...}, ...}},
          ... // 3 ignored and 33 identical fields
        },
      },
      Strategy:        {Type: \"Recreate\"},
      MinReadySeconds: 0,
      ... // 2 ignored and 1 identical fields
    },
    ... // 1 ignored field
  }
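To illustrate the mechanism behind that diff, here is a minimal sketch (not DWO's actual reconcile code; the function and variable names are illustrative) of why a server-defaulted field that the controller leaves unset can never converge:

```python
# Illustrative sketch of an endless reconcile loop caused by API-server
# defaulting: the controller's desired spec leaves defaultMode unset,
# the server fills in 420 (0o644), the specs differ, and the controller
# "fixes" the live object on every pass.

def apply_server_defaults(spec: dict) -> dict:
    """Mimic the API server defaulting configMap.defaultMode to 420."""
    live = dict(spec)
    if live.get("defaultMode") is None:
        live["defaultMode"] = 420
    return live

def reconcile_once(desired: dict, live: dict) -> tuple[dict, bool]:
    """Return the new live spec and whether an update was issued."""
    if desired != live:
        # Controller pushes its desired spec; the server re-applies defaults.
        return apply_server_defaults(desired), True
    return live, False

desired = {"name": "my-configmap", "defaultMode": None}
live = apply_server_defaults(desired)

updates = 0
for _ in range(5):
    live, updated = reconcile_once(desired, live)
    updates += updated
# With defaultMode unset, every pass issues an update: the loop never settles.
assert updates == 5

# Setting defaultMode explicitly lets reconciliation converge.
desired["defaultMode"] = 420
live = apply_server_defaults(desired)
live, updated = reconcile_once(desired, live)
assert not updated
```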

Specifying the defaultMode field in your configmap volume resolves this endless reconcile loop, e.g.:

    - attributes:
        container-overrides:
          volumeMounts:
          - mountPath: /projects/config-map
            name: demo-config-map
        pod-overrides:
          spec:
            volumes:
            - configMap:
+                defaultMode: 256
                items:
                - key: demo-txt
                  path: demo.txt
                name: my-configmap
              name: demo-config-map

BTW, I was able to get the cluster vs expected spec diff in the DWO logs by setting config.enableExperimentalFeatures: true in the DevWorkspace Operator config. I then replaced the \t and \n with their respective characters to get the formatting to be readable. This might help to know for future weird cases like this :)
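For reference, a minimal sketch of where that flag lives, assuming a standard DevWorkspaceOperatorConfig custom resource (the metadata values here are illustrative; only the config.enableExperimentalFeatures field is taken from the comment above):

```yaml
apiVersion: controller.devfile.io/v1alpha1
kind: DevWorkspaceOperatorConfig
metadata:
  name: devworkspace-operator-config   # illustrative name
  namespace: openshift-operators       # illustrative namespace
config:
  enableExperimentalFeatures: true
```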

@cgruver do you think this is an acceptable resolution for this issue? I don't think there's much we can do on the DWO side to prevent issues like this, as it's up to the user to ensure the pod and container overrides fields won't conflict with the cluster's defaults.

@AObuchow AObuchow moved this from 📅 Planned for this Sprint to 🚧 In Progress in Eclipse Che Team B Backlog Jun 28, 2024

cgruver commented Jun 28, 2024

THANK YOU!

@cgruver cgruver closed this as completed Jun 28, 2024
@github-project-automation github-project-automation bot moved this from 🚧 In Progress to ✅ Done in Eclipse Che Team B Backlog Jun 28, 2024
@AObuchow

@cgruver my pleasure 🥳 thanks for closing the issue :)
