
Add the ability to automatically select best batch size #725

Open
sarthakpati opened this issue Nov 4, 2023 · 11 comments
Assignees
Labels
enhancement New feature or request

Comments

@sarthakpati
Collaborator

sarthakpati commented Nov 4, 2023

Is your feature request related to a problem? Please describe.
Currently, GaNDLF requires the user to select the optimal batch_size for the hardware they are training on, which is inefficient during hyper-parameter tuning.

Describe the solution you'd like
Adding the ability to choose the best batch_size automatically based on every other hyper-parameter would make the process much smoother.

Describe alternatives you've considered
N.A.

Additional context

@sarthakpati sarthakpati added the enhancement New feature or request label Nov 7, 2023
Contributor

github-actions bot commented Jan 7, 2024

Stale issue message

Contributor

github-actions bot commented Mar 8, 2024

Stale issue message

@benmalef
Contributor

@sarthakpati you can assign this to me

@sarthakpati
Collaborator Author

> @sarthakpati you can assign this to me

Thanks! 👍🏽


@sarthakpati
Collaborator Author

Using Lightning, this option [ref] should work. From the API docs [ref], it should return an int type.
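For reference, a minimal sketch of how that tuner could be invoked, assuming Lightning >= 2.0 and its `Tuner.scale_batch_size` API (the function name and the model/datamodule arguments here are illustrative, not existing GaNDLF code):

```python
def find_batch_size(model, datamodule):
    """Ask Lightning's tuner for the largest batch size that fits in memory.

    Sketch only: assumes Lightning >= 2.0 and that the model or datamodule
    exposes a tunable `batch_size` attribute, as the tuner requires.
    """
    # Imports kept local so the sketch can be defined without Lightning installed.
    from lightning.pytorch import Trainer
    from lightning.pytorch.tuner import Tuner

    trainer = Trainer(max_epochs=1)
    tuner = Tuner(trainer)
    # "power" mode doubles the batch size until it no longer fits,
    # then returns the last size that did; the result is an int (or None).
    return tuner.scale_batch_size(model, datamodule=datamodule, mode="power")
```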

@sarthakpati sarthakpati assigned szmazurek and unassigned benmalef Nov 6, 2024
@szmazurek
Collaborator

Yeah, I used this option myself about a year ago. It can be useful, but from what I have seen it often goes too far with the batch size; I remember hitting some OOMs even after tuning. The tuner is configurable (at least via Lightning), so we should just keep in mind to stop a little earlier than the tuner would, i.e. if the tuner chooses some value, decrease it by around 5% and use that during training.
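The safety margin suggested above could be a small pure helper; a minimal sketch (the function name and the 5% default are illustrative, not an existing GaNDLF or Lightning API):

```python
def apply_safety_margin(tuned_batch_size: int, margin: float = 0.05) -> int:
    """Shrink the tuner's suggested batch size to leave memory headroom.

    A tuner stops at the largest batch size that just barely fits, so a
    real training run (with optimizer state, different augmentations, etc.)
    can still OOM. Backing off by a small fraction reduces that risk.
    """
    if tuned_batch_size < 1:
        raise ValueError("batch size must be a positive integer")
    # Never drop below 1, even for very small tuner results.
    return max(1, int(tuned_batch_size * (1.0 - margin)))
```

With the default 5% margin, a tuner result of 128 would be reduced to 121.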

@sarthakpati
Collaborator Author

Sounds perfect, thank you!

Contributor

github-actions bot commented Jan 6, 2025

Stale issue message

Development

No branches or pull requests

4 participants