Bugfix/lr finder #1676
Conversation
Looks ok :)
What I forgot to ask in the original PR for the LR finder: isn't the dumping and restoring of Trainer args a bit fragile? If we introduce new Trainer args that interfere with the LRFinder, we'll always have to remember to add them here for dumping and restoring; otherwise it will introduce a bug. Or do I misunderstand the purpose?
The idea behind dumping and restoring trainer args is to take advantage of the existing `fit` method for running the LR range test.
This pull request is now in conflict... :(
@SkafteNicki it makes sense to use fit, that's very good :) we just have to keep it in mind when we add new trainer args :) 👍
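The dump-and-restore pattern discussed above can be sketched roughly as follows. This is an illustrative stand-in, not Lightning's actual internals; the attribute names (`max_steps`, `callbacks`, `logger`) and the `lr_find` method body are hypothetical.

```python
# Minimal sketch of the dump-and-restore pattern around reusing fit()
# for the LR range test (illustrative only; not the real Lightning code).

class Trainer:
    def __init__(self):
        self.max_steps = None
        self.callbacks = []
        self.logger = "tensorboard"

    def fit(self, model):
        pass  # stand-in for the (shortened) training loop

    def lr_find(self, model, num_training=100):
        # 1. Dump every trainer attribute the LR finder will override.
        dumped = {
            "max_steps": self.max_steps,
            "callbacks": self.callbacks,
            "logger": self.logger,
        }
        # 2. Reconfigure the trainer for the short LR range test,
        #    then reuse the normal fit() loop.
        self.max_steps = num_training
        self.callbacks = []   # e.g. set aside EarlyStopping for the search
        self.logger = None
        self.fit(model)
        # 3. Restore the original configuration. Any *new* trainer arg
        #    that affects fit() must also be added to `dumped`, which is
        #    exactly the fragility raised in this review.
        for name, value in dumped.items():
            setattr(self, name, value)

trainer = Trainer()
trainer.lr_find(model=None)
assert trainer.max_steps is None and trainer.logger == "tensorboard"
```

The upside of this design is that `lr_find` gets the full `fit` machinery for free; the cost is that the `dumped` dict must be kept in sync with the Trainer's argument list by hand.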
Codecov Report
@@          Coverage Diff           @@
##           master   #1676   +/-  ##
=====================================
  Coverage      88%     88%
=====================================
  Files          69      69
  Lines        4131    4135      +4
=====================================
+ Hits         3655    3659      +4
  Misses        476     476
What does this PR do?
Fixes #1648, where the learning rate finder could not be used by users who prefer to pass
val_dataloaders
directly to the different methods.
Fixes #1650, where the learning rate finder could not be used when an
EarlyStopping
callback is passed to the trainer.

PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
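For illustration, the two fixes in the description above (passing val_dataloaders directly to the LR-finder call, and running it with an EarlyStopping callback attached) can be sketched with stand-in classes. These are hypothetical mocks of the behavior, not the real PyTorch Lightning API.

```python
# Hypothetical sketch of the usage pattern the two fixes enable
# (stand-in classes; not the real PyTorch Lightning API).

class EarlyStopping:
    """Stand-in callback that would stop training when a metric stalls."""
    def __init__(self, monitor="val_loss"):
        self.monitor = monitor

class Trainer:
    def __init__(self, callbacks=()):
        self.callbacks = list(callbacks)

    def lr_find(self, model, train_dataloader=None, val_dataloaders=None):
        # Before the fixes, an attached EarlyStopping callback (#1650) or
        # val_dataloaders passed here instead of on the model (#1648)
        # broke the LR search. After the fixes, callbacks are set aside
        # for the short range test and the supplied loaders are used.
        saved_callbacks, self.callbacks = self.callbacks, []
        try:
            # Log-spaced trial learning rates; a real range test would
            # run short training steps at each and track the loss.
            lrs = [1e-5 * 10 ** (i / 10) for i in range(30)]
            return {"suggested_lr": lrs[len(lrs) // 2]}
        finally:
            self.callbacks = saved_callbacks  # EarlyStopping restored

trainer = Trainer(callbacks=[EarlyStopping()])
result = trainer.lr_find(model=None, val_dataloaders=["fake_loader"])
assert trainer.callbacks and "suggested_lr" in result
```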
Did you have fun?
Make sure you had fun coding 🙃