Hi!
I have a partly-frozen model: a frozen backbone plus a few small MLP heads that I want to train and checkpoint.
The checkpointing logic is implemented in a LightningModule hook, on_save_checkpoint, which simply removes the state_dict keys belonging to the frozen backbone. Correspondingly, I have a custom on_load_checkpoint. A rough sketch of what that looks like is below.
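As a minimal sketch of that pattern (not the exact code from this issue): the module names `backbone` and `head`, the layer sizes, and the `"backbone."` key prefix are all placeholders, and the strict-load workaround in `on_load_checkpoint` is one common way to make Lightning's default state_dict restore succeed with a pruned checkpoint.

```python
import pytorch_lightning as pl
import torch
import torch.nn as nn


class PartiallyFrozenModel(pl.LightningModule):
    """Frozen backbone; only the small MLP head is trained and checkpointed."""

    def __init__(self):
        super().__init__()
        # Placeholder submodules for the sketch.
        self.backbone = nn.Sequential(nn.Linear(512, 512), nn.ReLU())
        self.head = nn.Linear(512, 10)
        self.backbone.requires_grad_(False)

    def forward(self, x):
        return self.head(self.backbone(x))

    def configure_optimizers(self):
        # Only the head parameters are optimized.
        return torch.optim.Adam(self.head.parameters(), lr=1e-3)

    def on_save_checkpoint(self, checkpoint):
        # Drop the frozen backbone weights so the checkpoint stays small.
        checkpoint["state_dict"] = {
            k: v for k, v in checkpoint["state_dict"].items()
            if not k.startswith("backbone.")
        }

    def on_load_checkpoint(self, checkpoint):
        # Re-insert the backbone weights from the freshly initialized model
        # so Lightning's strict state_dict load does not fail on missing keys.
        for k, v in self.state_dict().items():
            if k.startswith("backbone."):
                checkpoint["state_dict"].setdefault(k, v)
```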
When training with KFoldTrainer I noticed that it first dumps the full model weights, several GB of data. I would much rather restore the backbone weights from their defaults than copy them around for every fold.
How can I account for custom checkpoint logic when using KFoldTrainer?
Thanks for any tips, and by the way, thank you for this great library!