-
Hi, thanks in advance!
-
@ericspod, could you please share some details? Thanks.
-
What is going on is what is described in the paper. Each layer of the UNet structure is represented by a Sequential containing a layer of the downsample path, the skip connection, and a layer of the upsample path. As configured here, the skip connection takes the input from the downsample path, passes it to the submodule (either the next layer down or the bottom bottleneck layer of the whole structure), and then concatenates the submodule's output with the original input before returning the result. This output is fed to the upsample block, which consists of the transposed convolution followed by whatever else the network is configured for. I believe this conforms to the published description and the typical UNet definition.
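The recursive layer structure described above can be sketched roughly as follows. This is an illustrative toy, not the actual MONAI source: the class and function names, layer choices, and channel counts are all made up to show how a Sequential of (down layer, skip connection wrapping the submodule, up layer) composes.

```python
import torch
import torch.nn as nn

class SkipConnection(nn.Module):
    """Pass input through a submodule, then concat the result with the input."""
    def __init__(self, submodule):
        super().__init__()
        self.submodule = submodule

    def forward(self, x):
        # concatenate the original input with the submodule's output
        # along the channel dimension
        return torch.cat([x, self.submodule(x)], dim=1)

def make_level(down, submodule, up):
    # one UNet level: downsample, descend via the skip connection, upsample
    return nn.Sequential(down, SkipConnection(submodule), up)

# toy 2D example with hypothetical channel counts
bottom = nn.Conv2d(8, 8, 3, padding=1)           # bottleneck layer
down = nn.Conv2d(4, 8, 3, stride=2, padding=1)   # downsample-path layer
up = nn.ConvTranspose2d(16, 4, 2, stride=2)      # 16 in-channels: 8 skip + 8 from below
net = make_level(down, bottom, up)

x = torch.randn(1, 4, 32, 32)
print(net(x).shape)  # torch.Size([1, 4, 32, 32])
```

A deeper network would be built by passing another `make_level(...)` as the `submodule` instead of the bottleneck, so each level wraps the one below it.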
-
@ericspod Thank you for your reply! However, I still don't quite understand, mostly because of this image from the paper. I'm sorry to bother you again, but I hope you can help me out with this. Thanks in advance!
-
Sorry, I stated incorrectly that the paper describes the current UNet. The implementation has changed from what is shown in this figure: in the upsample layer, the input from the previous layer and the skip connection are concatenated together before being passed to the transposed convolution. The figure shows the skip-connection data being concatenated with the output of the transposed convolution, which is not how the current MONAI implementation works. Sorry for the confusion!
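The difference between the two orderings can be made concrete with a small sketch. All tensor shapes and channel counts here are hypothetical, chosen only to show how concatenating before versus after the transposed convolution changes the in-channel count of that convolution:

```python
import torch
import torch.nn as nn

skip = torch.randn(1, 8, 16, 16)   # skip features from the downsample path
prev = torch.randn(1, 8, 16, 16)   # output of the layer below

# Current MONAI-style ordering: concat first, then transposed convolution.
# The transposed conv therefore sees skip + lower channels (8 + 8 = 16).
up_after_cat = nn.ConvTranspose2d(16, 4, 2, stride=2)
monai_style = up_after_cat(torch.cat([skip, prev], dim=1))

# Figure-style ordering: transposed convolution first, then concat with the
# skip features, which must already be at the upsampled resolution.
up_before_cat = nn.ConvTranspose2d(8, 4, 2, stride=2)
skip_up = torch.randn(1, 4, 32, 32)  # skip features at the higher resolution
figure_style = torch.cat([skip_up, up_before_cat(prev)], dim=1)

print(monai_style.shape)   # torch.Size([1, 4, 32, 32])
print(figure_style.shape)  # torch.Size([1, 8, 32, 32])
```

In the figure-style ordering the concatenated channels must be consumed by a subsequent layer, whereas the MONAI-style ordering folds the merge into the transposed convolution itself.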
-
@ericspod Thanks! The network is completely clear to me now. Is there any particular reason why you deviated from the paper? Or was it just because it improved performance?
-
The current version seemed the easier one to implement, as well as being compatible with TorchScript. There is very little performance difference between subtle variations on the same architecture; how one trains, and for how long, has more impact.