
[BUG] lost GlobalAveragePooling #37

Open

Androsimus opened this issue Jul 30, 2021 · 0 comments

Androsimus commented Jul 30, 2021

In modules/models.py the backbones are loaded without the pretrained classification head (include_top=False), and then a custom OutputLayer is added on top. Dropping the pretrained classifier also cuts off the GlobalAveragePooling layer, but OutputLayer does not add it back.
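
To illustrate, here is a minimal sketch (the 112x112 input size is an assumption for illustration, not taken from the repo) showing that with include_top=False the backbone output is still a 4D feature map, so the pooling has to be re-added by the head:

    import tensorflow as tf

    # include_top=False removes the classifier head, including its
    # GlobalAveragePooling2D layer, so the backbone output stays spatial.
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=(112, 112, 3), include_top=False, weights=None)
    print(backbone.output_shape)  # (None, 4, 4, 1280) -- still a 4D feature map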

I propose something like this:

    from tensorflow.keras.layers import (BatchNormalization, Dense, Dropout,
                                         Flatten, GlobalAveragePooling2D, Input)
    from tensorflow.keras.models import Model

    def OutputLayer(embd_shape, w_decay=5e-4, name='OutputLayer'):

        def output_layer(x_in):
            x = inputs = Input(x_in.shape[1:])
            x = BatchNormalization()(x)  # maybe this layer is redundant
            x = GlobalAveragePooling2D()(x)
            x = Dropout(rate=0.5)(x)
            x = Flatten()(x)  # effectively a no-op after pooling (output is already 2D)
            # _regularizer is defined in modules/models.py
            x = Dense(embd_shape, kernel_regularizer=_regularizer(w_decay))(x)
            x = BatchNormalization()(x)
            model = Model(inputs, x, name=name)
            return model(x_in)

        return output_layer

One effect of losing GlobalAveragePooling is that the MobileNetV2 backbone grows in size from 12 MB to 50 MB, though accuracy increases as well. That said, training MobileNetV2 properly requires different hyperparameters, which would raise validation accuracy.
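
The size difference comes almost entirely from the Dense embedding layer: without pooling, Flatten feeds it the full feature map. A rough back-of-the-envelope check (the 4x4x1280 feature map and 512-d embedding are assumed values for illustration, not taken from the repo):

    # Hypothetical shapes: 4x4x1280 backbone output, 512-d embedding.
    h, w, c, embd = 4, 4, 1280, 512

    flatten_params = h * w * c * embd  # Dense input is the flattened feature map
    gap_params = c * embd              # Dense input after GlobalAveragePooling2D
    print(flatten_params)  # 10485760 weights (~40 MB as float32)
    print(gap_params)      # 655360 weights (~2.5 MB as float32)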

Androsimus changed the title from "lost GlobalAveragePooling" to "[BUG] lost GlobalAveragePooling" on Aug 4, 2021