I have trained a CNN to predict 101 classes. Now I want to use an LSTM to learn attention weights over frames, so I took the output of the model's last GlobalAveragePooling layer and extracted per-frame features. However, the LSTM that learns the attention weights is trained with a smaller number of classes. How does this affect the results? For example, suppose the CNN alone can predict 101 classes: when I apply attention via the LSTM on top of the CNN features, I still want 101-class predictions for each frame, rather than being limited to the smaller class set the LSTM was trained on.
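A minimal NumPy sketch of the setup I mean (all shapes and parameter names here are hypothetical placeholders, not the actual model): the 101-way classification head is applied to every frame's GAP feature, so the number of output classes is independent of the attention module — the attention weights only control how frames are pooled for the clip-level prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

T, D, C = 16, 512, 101  # frames, CNN GAP feature dim, classes (placeholder sizes)

# Per-frame features from the CNN's global-average-pooling layer (random stand-in data).
features = rng.standard_normal((T, D))

# Hypothetical learned parameters: an attention scoring vector (standing in for
# what the LSTM would produce) and a shared 101-way classifier head.
w_att = rng.standard_normal(D)
W_cls = rng.standard_normal((D, C))
b_cls = np.zeros(C)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Attention weights over the T frames; they sum to 1.
alpha = softmax(features @ w_att)           # shape (T,)

# Per-frame 101-class predictions: the head is applied to each frame,
# so every frame still gets 101 logits regardless of the attention module.
frame_logits = features @ W_cls + b_cls     # shape (T, 101)
frame_probs = softmax(frame_logits, axis=-1)

# Clip-level prediction: attention-weighted pooling of the frame features,
# followed by the same 101-way head.
pooled = alpha @ features                   # shape (D,)
clip_logits = pooled @ W_cls + b_cls        # shape (101,)

print(frame_probs.shape, clip_logits.shape)
```

In other words, if the classifier head is shared and applied per frame (e.g. via a TimeDistributed dense layer in Keras), the LSTM/attention branch does not have to shrink the class space at all.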