Describe the bug
I am writing large sdata objects to Zarr, and the kernel fails in an unpredictable manner.
I parse a large mIF image of shape (15, 44470, 73167) (8-bit) into sdata with scale factors (5, 5) to create a multiscale object. Writing that simple sdata object then seems to fail (each attempt takes about 20 minutes, so I have only tried twice).
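Roughly, the workflow looks like the following sketch (the dummy array, the element name `"mif"`, and the chunk sizes are placeholders, not my exact values):

```python
import dask.array as da
from spatialdata import SpatialData
from spatialdata.models import Image2DModel

# Placeholder for the real mIF data: a (c, y, x) = (15, 44470, 73167) uint8 array.
arr = da.zeros((15, 44470, 73167), dtype="uint8", chunks=(1, 4096, 4096))

# Parse into a multiscale image; scale_factors=[5, 5] adds two downscaled levels.
image = Image2DModel.parse(
    arr,
    dims=("c", "y", "x"),
    scale_factors=[5, 5],
    chunks=(1, 4096, 4096),
)

sdata = SpatialData(images={"mif": image})
sdata.write("mif.zarr")  # takes ~20 min; this is the step where the kernel dies
```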
Before I send over this large data, are there any expected limitations from writing sdata objects into Zarr in a Jupyter notebook?
My naive concerns are:
- chunking
  - (what if the chunk size is larger than the downscaled image size?)
  - (can I chunk different scales dynamically? I use the parser to chunk.)
- hardware
  - (I use an M2 MacBook Pro.)
This kind of kernel failure is particularly frustrating because it corrupts the Zarr store: I was writing some new elements (another very large image) when the kernel crashed, and that killed the existing object.
Thanks for reporting, and sorry to hear about this bug; it does indeed sound frustrating.
Before I send over this large data, are there any expected limitations from writing sdata objects into Zarr in a Jupyter notebook?
No, there is no expected limitation in .ipynb vs .py for this task.
chunking
(what if chunk size is larger than downscaled image size?)
(can I chunk different scales dynamically? I use the parser to chunk.)
Yes, you can rechunk the data after calling the parser and before saving, as you see fit; for instance, check the code from here. That said, it could also be that your problem is due to a bug involving compression, please check here: #812 (comment).
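For illustration, a minimal rechunking sketch, continuing from the `sdata` object in the earlier snippet. It assumes the multiscale layout spatialdata uses (child groups `scale0`, `scale1`, … each exposing the array under the `"image"` key); the element name and chunk sizes are placeholders, so adjust them to your data:

```python
# Rechunk each scale of the multiscale image after parsing and before writing.
msi = sdata.images["mif"]

for scale_name in msi.children:  # e.g. "scale0", "scale1", "scale2"
    da_scale = msi[scale_name]["image"]
    # Never request chunks larger than the (possibly tiny) downscaled level itself.
    new_chunks = {
        "c": 1,
        "y": min(4096, da_scale.sizes["y"]),
        "x": min(4096, da_scale.sizes["x"]),
    }
    msi[scale_name]["image"] = da_scale.chunk(new_chunks)

sdata.write("mif_rechunked.zarr")
```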