Followed the instructions but the pod is stuck at the Pending state #164
Comments
Hello @gasci, thanks for using this project. Can you provide the error output you are seeing?
Here is the error that I am receiving:

Events:
  Type     Reason            Age   From               Message
  Warning  FailedScheduling  92s   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
From your logs:

Warning  FailedScheduling  92s  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }.

It looks like you have a single node in your cluster, and that node has a problem with its disk. This issue isn't chart-related; it is directly linked to your infrastructure. I suggest you begin with `kubectl describe node` and iterate from there. This article on Kubernetes node disk pressure (medium.com/.../kubernetes-node-disk-pressure-69acffc5fad1) can help you understand the issue and how to address it.
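For reference, a minimal sketch of how that investigation could look; `<your-node-name>` is a placeholder for whatever `kubectl get nodes` reports:

```shell
# List the nodes in the cluster and confirm there is only one.
kubectl get nodes

# Full node report: check the "Taints" line for node.kubernetes.io/disk-pressure
# and the "Conditions" table for DiskPressure=True to see why the pod won't schedule.
kubectl describe node <your-node-name>

# Or query just the DiskPressure condition directly.
kubectl get node <your-node-name> \
  -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}'
```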
This is my local cluster. Do I need a more sophisticated one with much more resources?
And I haven't provisioned a PV. Is it mandatory?
For the calcom chart you're using, you don't need persistence; for now that feature doesn't come with the chart. You can run this chart on your cluster, but your node is currently unhealthy. You can begin your investigation by looking for the why: describe your node status with `kubectl describe node`, as mentioned previously, and inspect the node's conditions.
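On a local single-node cluster, disk pressure usually means the disk backing the container runtime is almost full. A possible way to confirm and reclaim space, assuming a Docker Desktop or minikube style setup (adjust to whatever actually backs your cluster):

```shell
# See how full the filesystem is on the machine (or VM) hosting the node.
df -h

# Reclaim space by deleting unused images, stopped containers and build cache.
# This assumes the node runs on Docker (e.g. Docker Desktop); with minikube,
# run it inside the VM instead: minikube ssh -- docker system prune -a
docker system prune -a

# Once enough space is free, the kubelet removes the disk-pressure taint
# and the Pending pod can be scheduled.
kubectl describe node <your-node-name> | grep -i -A2 taints
```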
You are right. I fixed it but now I am getting an error because I am running an M1 chip.
Failed to pull image "calcom/cal.com:v4.7.8": no matching manifest for linux/arm64/v8 in the manifest list entries
How can I overcome this?
As far as I know, the Docker image provided by cal.com (and used by this chart) only supports the amd64 architecture. You can see that on their official Docker Hub tags page. There is currently an open issue about it on their repository: calcom/docker#358. Closing the issue for now, as the problems are not related to this project.
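As a side note that goes beyond this thread: on Apple Silicon the usual workarounds are either running the amd64 image under emulation (for example Docker Desktop with Rosetta enabled, at the cost of performance) or building your own arm64 image from the cal.com sources and pointing the chart at it. The sketch below assumes the chart exposes `image.repository` and `image.tag` values; the registry name and chart path are placeholders, so check the chart's values.yaml for the real keys before running anything.

```shell
# Build an arm64 image from the cal.com docker repository and push it to a
# registry you control (registry name and tag here are placeholders).
docker buildx build --platform linux/arm64 \
  -t registry.example.com/calcom:v4.7.8-arm64 --push .

# Point your release at that image. The chart path and value keys below are
# assumptions about this chart; verify them against its values.yaml.
helm upgrade --install calcom ./calcom \
  --set image.repository=registry.example.com/calcom \
  --set image.tag=v4.7.8-arm64
```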
Hi, thank you for maintaining this helm chart.