In modules/models.py the backbones are loaded without their pretrained classification head (include_top=False), and a custom OutputLayer is then added on top. Dropping the pretrained classifier also cuts off the GlobalAveragePooling layer, but OutputLayer does not add it back.
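For context, the head that is built today looks roughly like the sketch below (simplified, not the exact repo code; the 112x112 input size and 512-d embedding are just example values): the backbone's 4D feature map goes straight into Flatten and Dense, with no pooling in between.

from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import BatchNormalization, Dropout, Flatten, Dense

# Backbone without its classifier head: the output is a 4D feature map,
# e.g. (None, 4, 4, 1280) for MobileNetV2 at 112x112 input.
backbone = MobileNetV2(input_shape=(112, 112, 3), include_top=False)
x = backbone.output
x = BatchNormalization()(x)
x = Dropout(rate=0.5)(x)
x = Flatten()(x)      # flattens 4 * 4 * 1280 = 20480 features, no pooling first
x = Dense(512)(x)     # this single layer ends up with ~10.5M parameters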
I propose something like this:
from tensorflow.keras import Model
from tensorflow.keras.layers import (Input, BatchNormalization, Dropout,
                                     Flatten, Dense, GlobalAveragePooling2D)

def OutputLayer(embd_shape, w_decay=5e-4, name='OutputLayer'):
    def output_layer(x_in):
        x = inputs = Input(x_in.shape[1:])
        x = BatchNormalization()(x)      # maybe this layer is redundant
        x = GlobalAveragePooling2D()(x)  # restores the pooling dropped by include_top=False
        x = Dropout(rate=0.5)(x)
        x = Flatten()(x)                 # effectively a no-op after pooling, kept for parity with the original head
        # _regularizer is the existing helper in modules/models.py
        x = Dense(embd_shape, kernel_regularizer=_regularizer(w_decay))(x)
        x = BatchNormalization()(x)
        model = Model(inputs, x, name=name)
        return model(x_in)
    return output_layer
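A minimal sketch of how this head would be attached to a backbone (assuming a 112x112 input and a 512-d embedding, which are example values; _regularizer is stood in by keras' l2 here to keep the snippet self-contained):

from tensorflow.keras import Model
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Input
from tensorflow.keras.regularizers import l2

_regularizer = l2  # stand-in for the repo's _regularizer helper

inputs = Input((112, 112, 3))
x = MobileNetV2(input_shape=(112, 112, 3), include_top=False)(inputs)
embds = OutputLayer(embd_shape=512)(x)  # now pools before the Dense layer
model = Model(inputs, embds)
model.summary()  # Dense holds 1280 * 512 weights instead of 20480 * 512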
One effect of losing GlobalAveragePooling is that the MobileNetV2 backbone grows from 12 MB to 50 MB, although accuracy increases too; that said, MobileNetV2 should be trained with different hyperparameters, which would improve validation accuracy.
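The size difference comes almost entirely from the Dense layer. A rough back-of-the-envelope check, assuming a 112x112 input (so a 4x4x1280 MobileNetV2 feature map), a 512-d embedding, and float32 weights:

flat_features = 4 * 4 * 1280                    # 20480 features without pooling
print(flat_features * 512 * 4 / 1e6)            # ~42 MB of Dense weights when flattening directly
print(1280 * 512 * 4 / 1e6)                     # ~2.6 MB when GlobalAveragePooling is applied first

Added to the ~12 MB of backbone weights, the flattened head accounts for most of the jump to ~50 MB.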