However, the previous good results were based on LR=0.001.
Because of the poor performance of Transformers with LR=1e-3 (even though it is labeled as 1e-4) (#81) on Transformer baseline PredCls and Transformer+Semantic PredCls, I changed the LR to 0.16 (#115), but that run diverged. I then changed it to 0.016.
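For context, here is a minimal sketch of what these LR changes amount to in plain PyTorch; the model, optimizer type, and momentum below are placeholders, since the actual settings live in this repo's training config:

```python
import torch

# Placeholder model standing in for the Transformer predictor;
# the real architecture and solver settings come from the repo's config.
model = torch.nn.Linear(512, 51)

# LR history from this issue: 1e-3 (mislabeled as 1e-4, #81),
# then 0.16 (#115, diverged), then 0.016.
BASE_LR = 0.016
optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR, momentum=0.9)

# Changing the LR on an existing optimizer (e.g. when resuming a run):
for param_group in optimizer.param_groups:
    param_group["lr"] = BASE_LR
```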
The 0.016 LR seems to do better than 1e-3 on Transformer baseline PredCls and Transformer+Semantic PredCls, but not as well on +Visual PredCls and +Semantic+Visual PredCls. In fact, is a smaller LR better for anything +Visual on PredCls?
For SGCls, the bigger LR (0.016) is better for Transformer baseline SGCls, and better (though not great) for Transformer+Semantic SGCls. +Visual is better, but +Semantic+Visual is worse. Perhaps for SGCls a smaller LR is better for anything +Semantic?
For SGGen, the bigger LR (0.016) is better for the baseline but terrible for all other augmentations.
Instead of 0.016, try smaller LRs for +Visual and +Semantic+Visual, such as 0.005 (5e-3) or 0.008.
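A hedged sketch of that sweep; the training entry point, flags, and variant names below are hypothetical stand-ins for whatever this repo's actual launcher expects:

```python
import itertools
import subprocess

# Smaller candidate LRs proposed above for the visual variants,
# with 0.016 kept as the current reference point.
CANDIDATE_LRS = [0.005, 0.008, 0.016]
VARIANTS = ["visual", "semantic_visual"]  # hypothetical variant names

for variant, lr in itertools.product(VARIANTS, CANDIDATE_LRS):
    # Hypothetical command line; substitute the repo's real training
    # script and LR/config flags here.
    subprocess.run(
        ["python", "tools/train.py", "--variant", variant, "--lr", str(lr)],
        check=True,
    )
```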
For Motif and VCTree, the LR wasn't the cause of the bad BPL results.
The good GSC BPL results were based on LR=0.04.