
Address Shawn's CR on previous PR #270

Merged 1 commit into main on Oct 29, 2024

Conversation

jewelltaylor
Collaborator

PR Type

Documentation

Short Description

Short follow-up to #268, just adding the documentation that @scarere suggested.

@jewelltaylor requested a review from scarere October 29, 2024 15:32

@scarere left a comment


LGTM

@jewelltaylor merged commit a3d0dfb into main Oct 29, 2024
6 checks passed
@jewelltaylor deleted the update-lr-scheduler-documentation branch October 29, 2024 15:47

@emersodb left a comment


Looks good. Just one typo.

@@ -430,8 +430,10 @@ def __init__(
     initial_lr (float): The initial learning rate of the optimizer.
     max_steps (int): The maximum total number of steps across all FL rounds.
     exponent (float): Controls how quickly LR descreases over time. Higher values
-        lead to more rapdid descent.
+        lead to more rapdid descent. Defaults to 0.9.
Super minor but I think rapdid is supposed to be rapid?
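For context on the docstring being edited: a scheduler whose `exponent` makes the learning rate fall faster as it increases, with a common default of 0.9, matches the standard polynomial ("poly") decay schedule. The sketch below is an assumption about the scheduler's formula, not the repository's actual implementation; the function name `poly_decay_lr` is hypothetical.

```python
def poly_decay_lr(initial_lr: float, step: int, max_steps: int, exponent: float = 0.9) -> float:
    """Hypothetical sketch of a polynomial LR decay schedule.

    lr(step) = initial_lr * (1 - step / max_steps) ** exponent

    Higher ``exponent`` values make the learning rate descend more rapidly,
    consistent with the docstring in the diff above. ``max_steps`` is the
    maximum total number of steps across all FL rounds.
    """
    fraction_remaining = max(0.0, 1.0 - step / max_steps)
    return initial_lr * fraction_remaining ** exponent


# Example: midway through training, a larger exponent yields a smaller LR.
# poly_decay_lr(0.1, 50, 100, exponent=2.0) < poly_decay_lr(0.1, 50, 100, exponent=0.5)
```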

3 participants