Replies: 4 comments 2 replies
-
Please paste the output of "lsblk" from all the nodes. There seems to be an issue with discovering the device behind /dev/sda1 on one of the nodes, which is causing this error. If it's happening on only one node, I'd rather check that node's logs.
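For reference, output along these lines from each worker node would help (the column selection is only a suggestion; plain lsblk output is fine too):

# run on every worker node; shows each disk, its partitions, filesystem and mount point
lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,MOUNTPOINT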
1 reply
-
May I ask which OpenEBS storage engine you are looking to use for dynamic provisioning?
However, as far as I know, OpenEBS engines don't have support for "shared block devices".
Quoted from Mahidhar-K's message of Wednesday, February 22, 2023:
Thank you @avishnu for your response.
We wiped the environment, but we can recreate it and share the lsblk output. However, before we do that, let me explain what we are trying to achieve.
We were trying to test the fix in #2536.
That is why we created a single disk that was visible to all 3 nodes; lsblk on each node listed this (same) disk correctly. We plan to run an application pod to which this disk would be attached as a PV via dynamic provisioning. We would then bring down the node, and the expectation is that the pod will move to one of the other two healthy nodes and OpenEBS will make the disk available on that node. Is this understanding and expectation correct?
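To make the planned test concrete, the rough sequence we have in mind is something like this (node and pod names are placeholders, and draining is just one way to simulate the node going down):

# note which node the application pod is currently running on
kubectl get pods -o wide
# simulate the node failure by cordoning and draining it (or by powering it off)
kubectl cordon <node-running-the-pod>
kubectl drain <node-running-the-pod> --ignore-daemonsets --delete-emptydir-data
# watch the pod get rescheduled onto one of the healthy nodes
kubectl get pods -o wide -w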
1 reply
-
Hi,
The cStor engine is a replicated storage engine: you specify a replication factor for each cStor volume. For instance, if the replication factor is 3, the volume's data is replicated across 3 different pools on 3 different nodes. If one worker node holding a replica goes down, the volume remains available because the data is present on the other 2 nodes.
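For reference, a minimal sketch of a cStor CSI StorageClass that asks for 3 replicas could look roughly like the following (the class and pool-cluster names are placeholders, and the exact fields may differ by OpenEBS version, so please cross-check with the docs):

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-csi-repl3               # placeholder name
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cstor-disk-pool   # placeholder: the CSPC you created
  replicaCount: "3"                   # one replica per pool, on 3 different nodes
EOF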
Does this clarify your question?
Thanks.
Quoted from Mahidhar-K's message of Thursday, February 23, 2023:
Thank you @avishnu for the quick response.
We are testing with the cStor storage engine.
If shared block devices are not supported, can you please explain how we can test the worker-node failover scenario (i.e., the volume of the pod on the failed worker node should get detached from the failed node and attached to the new pod on a healthy worker node)?
To achieve the above, are you aware of any configuration for OpenEBS cStor?
Can you please let us know if
0 replies
-
Closing the discussion; please refer to the latest comment.
0 replies
-
Hi OpenEBS Community,
We need some help.
We have created a volume that is attached to all 3 worker nodes of our K8s cluster. Note that it is made available to the worker nodes (i.e., visible via the "lsblk" command) but not mounted on any of them. OpenEBS with cStor is installed.
When we run "kubectl get bd -o wide", we do not see the block devices from all 3 worker nodes every time. For example, the first run shows the block device from only 2 of the 3 worker nodes; the next run shows it from all 3 worker nodes (i.e., 3 devices visible); and the results keep fluctuating like this on subsequent runs! Can you help us understand the reason behind this?
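For reference, this is roughly how we are checking (we assume NDM runs in the openebs namespace; adjust if it is installed elsewhere):

# list the blockdevices NDM has discovered
kubectl -n openebs get bd -o wide
# find the NDM pod running on each worker node
kubectl -n openebs get pods -o wide | grep ndm
# inspect the NDM logs for a specific node (pod name is a placeholder)
kubectl -n openebs logs <ndm-pod-name>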
When I print the logs of the NDM pod for one of those worker nodes, I see messages like the following:
E0221 12:56:31.000425 7 addhandler.go:358] unable to find parent device for /dev/sda1
E0221 12:56:31.000438 7 addhandler.go:70] error handling unmanaged device /dev/sda1. error: error in getting parent device for /dev/sda1 from device hierarchy
E0221 12:56:31.000452 7 eventhandler.go:94] error in getting parent device for /dev/sda1 from device hierarchy
E0221 12:56:31.000522 7 udevprobe.go:149] Scan is in progress
I0221 12:56:31.005343 7 blockdevicestore.go:131] eventcode=ndm.blockdevice.deactivate.success msg=Deactivated blockdevice rname=blockdevice-9863fc2d6ad924e56ef8be3135f25f06
I0221 12:56:31.075455 7 blockdevicestore.go:131] eventcode=ndm.blockdevice.deactivate.success msg=Deactivated blockdevice rname=blockdevice-c59a65570b1403a8ad220b4d1e1a61e1
Similarly, we also see the "Updated Block device" message, like the following: