Currently, Engage does not support multiple shards of the same block per node, because only one resource of a given type can be installed per node. This one-resource-per-type property is what enables our on-demand model for dependency management. However, multiple shards per node can be important when you need multiple processes to maximize CPU usage or bandwidth. It is also helpful in testing.
Recommended solution: create multiple resource keys per block if the block can be sharded. The trick is to do this automatically. If we have a tighter integration between the drivers and blocks, this will be easier. For example, we could use Python meta-programming (e.g. decorators) to automatically create resource definitions for some fixed number of shards. We might also look into creating resource subclasses on the fly, based on the topology file, though that would likely be harder.
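A minimal sketch of the decorator idea. The `resource_registry` dict, `resource_key` attribute, and class names here are hypothetical stand-ins, not Engage's actual driver API:

```python
# Hypothetical registry mapping resource keys to resource classes.
resource_registry = {}

def sharded(num_shards):
    """Class decorator: register one resource definition per shard of a block."""
    def wrap(cls):
        for shard in range(num_shards):
            # Each shard gets its own resource key, so the installer can
            # treat it as a distinct resource type on the same node.
            key = f"{cls.__name__}-shard-{shard}"
            shard_cls = type(f"{cls.__name__}Shard{shard}", (cls,),
                             {"shard_id": shard, "resource_key": key})
            resource_registry[key] = shard_cls
        return cls
    return wrap

@sharded(num_shards=3)
class WordCountBlock:
    pass

print(sorted(resource_registry))
# ['WordCountBlock-shard-0', 'WordCountBlock-shard-1', 'WordCountBlock-shard-2']
```

Because the shard count is fixed at decoration time, this fits the "fixed number of shards" case; the topology-driven variant would instead call `type(...)` while parsing the topology file.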
When creating additional resources for a shard, we may also need to set configuration parameters per shard. For example, we might need to assign a different TCP port to each shard so they don't all try to bind the same one on a given machine. Alternatively, this could be some kind of dynamic parameter determined at runtime (that would require Engage support).
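The static per-shard port assignment could look something like this. The config dict shape and `base_port` parameter are assumptions for illustration, not Engage's actual configuration format:

```python
def shard_configs(block_name, num_shards, base_port=7000):
    """Generate one config dict per shard, each with a unique TCP port."""
    return [
        {"resource_key": f"{block_name}-shard-{i}",
         "shard_id": i,
         "port": base_port + i}   # offset from base_port avoids collisions
        for i in range(num_shards)
    ]

configs = shard_configs("WordCountBlock", 3)
print([c["port"] for c in configs])  # [7000, 7001, 7002]
```

The dynamic-parameter alternative would replace the `base_port + i` computation with a runtime lookup (e.g. asking the OS for a free port), which is where Engage support would be needed.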