
101 no regularization grad #104

Open · wants to merge 12 commits into main

Conversation
@wsdewitt (Contributor) commented Feb 23, 2021

Description

Make model.FullyConnected.regularization_loss() differentiable.

Closes #101
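
For context, a minimal sketch of what "differentiable" means here, assuming an L1-style penalty; the helper signature and `l1_coef` coefficient are illustrative, not the actual `FullyConnected` implementation:

```python
import torch


def regularization_loss(model, l1_coef=1e-2):
    """Illustrative L1 penalty built from torch tensor ops only.

    Composing the penalty from tensor operations (rather than .item(),
    .detach(), or numpy values) keeps it attached to the autograd graph,
    so it contributes gradients when the total loss is backpropagated.
    """
    return l1_coef * sum(param.abs().sum() for param in model.parameters())
```

The key point is that the returned tensor carries `requires_grad=True` whenever the parameters do, so it participates in `loss.backward()`.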

Tests

tests/test_model.py

Checklist:

  • The code uses informative and accurate variable and function names
  • The functionality is factored out into functions and methods with logical interfaces
  • Comments are up to date, document intent, and there are no commented-out code blocks
  • Commenting and/or documentation is sufficient for others to be able to understand intent and implementation
  • TODOs have been eliminated from the code
  • The corresponding issue number (e.g. #278) has been searched for in the code to find relevant notes
  • Documentation has been redeployed

@wsdewitt wsdewitt requested review from matsen and zorian15 February 23, 2021 17:44
@wsdewitt (Contributor, Author) commented Feb 23, 2021

My current challenge is that loss.backward() returns None in my new test module, so I'm having trouble validating the gradient.
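
Note for context: in PyTorch, `Tensor.backward()` always returns `None` by design; the gradients are accumulated into each leaf tensor's `.grad` attribute rather than returned. A minimal sketch with a stand-in penalty (not the actual test code):

```python
import torch

weight = torch.randn(3, requires_grad=True)
loss = weight.abs().sum()  # stand-in L1-style regularization penalty

result = loss.backward()  # backward() itself returns None by design
assert result is None
print(weight.grad)  # the gradient lands here: torch.sign(weight)

# torch.autograd.grad returns the gradient tensors directly instead:
(grad,) = torch.autograd.grad(weight.abs().sum(), weight)
```

`torch.autograd.gradcheck` is also handy for validating a gradient numerically against finite differences (it expects double-precision inputs).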

@wsdewitt (Contributor, Author) commented

Ok, I think this is ready for review.

@zorian15 (Collaborator) left a comment

Sick, looks good -- I don't have any complaints.

Development

Successfully merging this pull request may close issue #101: No grad for regularization loss.