Add the ability to automatically select best batch size #725
Comments
Stale issue message
@sarthakpati you can assign this to me

Thanks! 👍🏽
Stale issue message
Yeah, I used this option myself about a year ago. It can be useful, but from what I have seen it often goes too far with the batch size; even after tuning, I still ran into some OOMs. The tuner is configurable (at least via Lightning), so we should just keep in mind to stop a little earlier than the tuner would (i.e., if the tuner chooses some value, decrease it by around 5% and use that during training).

Sounds perfect, thank you!
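The heuristic described above (let a tuner find the largest fitting batch size, then back off by ~5%) could be sketched roughly as follows. This is an illustration, not GaNDLF or Lightning code: `try_batch` and `find_max_batch_size` are hypothetical names, and the doubling-then-binary-search strategy mirrors what Lightning's `Tuner.scale_batch_size` does in its "binsearch" mode.

```python
def find_max_batch_size(try_batch, start=1, max_trials=25, safety_margin=0.05):
    """Return the largest batch size `try_batch` accepts, reduced by `safety_margin`.

    `try_batch(n)` should run one forward/backward pass with batch size `n`
    and raise RuntimeError (e.g. a CUDA OOM) if it does not fit in memory.
    """
    # Phase 1: double the batch size until it fails, to bracket the limit.
    size = start
    last_ok = 0
    for _ in range(max_trials):
        try:
            try_batch(size)
            last_ok = size
            size *= 2
        except RuntimeError:
            break

    # Phase 2: binary search between the last success and the first failure.
    lo, hi = last_ok, size
    while hi - lo > 1:
        mid = (lo + hi) // 2
        try:
            try_batch(mid)
            lo = mid
        except RuntimeError:
            hi = mid

    # Back off ~5%, since the tuner's maximum is often too optimistic and
    # memory use can grow later in training (activations, optimizer state).
    return max(1, int(lo * (1 - safety_margin)))
```

With a simulated device that fits batches up to 100, the search converges on 100 and the safety margin brings the returned value down to 95, which is the "decrease it by like 5%" behavior suggested in the comment above.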
Stale issue message |
Is your feature request related to a problem? Please describe.
Currently, GaNDLF requires the user to select the optimal batch_size for the hardware they are training on, which is not very efficient in the case of hyper-parameter tuning.

Describe the solution you'd like
Adding the ability to choose the best batch_size automatically, based on every other hyper-parameter, would make the process much smoother.

Describe alternatives you've considered
N.A.

Additional context