Accounting for custom checkpointing behaviour #67

Open · ocramz opened this issue Sep 13, 2024 · 0 comments

ocramz commented Sep 13, 2024

Hi!

I have a partly-frozen model, roughly as in the sketch below:

  • the "backbone", a large network, is frozen
  • a few small MLP heads are trainable, and these are what I want to train and checkpoint
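For context, the setup looks roughly like this (a minimal sketch; `PartiallyFrozenModel`, the head sizes and the optimizer are placeholders, not the actual model):

```python
import torch
import pytorch_lightning as pl


class PartiallyFrozenModel(pl.LightningModule):
    def __init__(self, backbone: torch.nn.Module, n_features: int, n_classes: int):
        super().__init__()
        # Large pretrained backbone: parameters are frozen and never optimized.
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Small trainable MLP head; this is the only part worth checkpointing.
        self.head = torch.nn.Sequential(
            torch.nn.Linear(n_features, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, n_classes),
        )

    def forward(self, x):
        with torch.no_grad():
            feats = self.backbone(x)
        return self.head(feats)

    def configure_optimizers(self):
        # Only the head parameters are handed to the optimizer.
        return torch.optim.Adam(self.head.parameters(), lr=1e-3)
```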

The checkpointing logic is implemented in the LightningModule's on_save_checkpoint hook, which simply removes the state_dict keys belonging to the frozen backbone; correspondingly, I have a custom on_load_checkpoint.
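In code, the hooks look roughly like this (a sketch continuing the class above, assuming the frozen weights live under a `backbone.` prefix in the `state_dict`; the prefix and the re-insertion strategy in `on_load_checkpoint` are illustrative, not the exact code):

```python
    # Continuing PartiallyFrozenModel from the sketch above:

    def on_save_checkpoint(self, checkpoint: dict) -> None:
        # Drop every frozen backbone entry so checkpoints only carry the heads.
        checkpoint["state_dict"] = {
            k: v
            for k, v in checkpoint["state_dict"].items()
            if not k.startswith("backbone.")
        }

    def on_load_checkpoint(self, checkpoint: dict) -> None:
        # Re-insert backbone weights from the freshly initialized (default)
        # backbone, so the subsequent load_state_dict sees a complete state dict.
        current = self.state_dict()
        for k, v in current.items():
            if k.startswith("backbone.") and k not in checkpoint["state_dict"]:
                checkpoint["state_dict"][k] = v
```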

When training with KFoldTrainer I noticed that it first dumps the full set of model weights, a few GB of data. I would much rather restore the backbone weights from their defaults than copy them all over the place.

Is there a way to account for custom checkpointing logic when using KFoldTrainer?

Thanks for any tips, and thank you for this great library!
