Hi @spencerkclark. I've found myself using xpartition for scoring workloads. Usually there is a big data reduction involved, so the output does fit in memory. I've been using xpartition like this:
```python
import xarray as xr
import xpartition  # registers the .partition accessor

block_list = []
n = 20
for i in range(n):
    print(i)
    # Indexers for the i-th of n partitions along initial_time
    index = output.z500.partition.indexers(n, i, dims=['initial_time'])
    block = output.isel(index).compute()
    block_list.append(block)
output = xr.concat(block_list, dim='initial_time')
```
Maybe it's worth adding some helper that does this.
Neat, yeah, I'd be open to adding something like this if we can implement a version that handles partitions along n dimensions. I've found myself in this situation before too. Keeping with accessors for the time being, maybe we could spell it `da.partition.compute(steps, dims)`?
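For the single-dimension case, a helper like that could look roughly like the sketch below. The function name and signature are hypothetical (following the `compute(steps, dims)` spelling suggested above, restricted to one dim), and the partition bounds are computed with plain slicing rather than xpartition's `indexers`, so this is just an illustration of the loop-and-concat pattern, not the accessor implementation:

```python
import numpy as np
import xarray as xr


def partition_compute(ds, steps, dim):
    """Compute `ds` in `steps` sequential chunks along `dim`,
    then concatenate the in-memory results.

    Hypothetical helper mirroring the loop in the issue; bounds
    come from plain slicing so the sketch runs without dask.
    """
    size = ds.sizes[dim]
    bounds = np.linspace(0, size, steps + 1).astype(int)
    blocks = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        if lo == hi:
            continue  # more steps than elements along dim
        blocks.append(ds.isel({dim: slice(lo, hi)}).compute())
    return xr.concat(blocks, dim=dim)
```

The n-dimensional version would presumably iterate over the Cartesian product of partitions per dim and concatenate (or `xr.combine_by_coords`) the results, which is where it gets less trivial.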