[Enhance] Support passing arguments with update_params #796
Conversation
Thanks for your contribution! We should also update `OptimWrapper.update_params`.
@@ -161,7 +161,7 @@ def __init__(self,
         # the loss factor will always be the same as `_accumulative_counts`.
         self._remainder_counts = -1

-    def update_params(self, loss: torch.Tensor) -> None:
+    def update_params(self, loss: torch.Tensor, **kwargs) -> None:
Suggested change:
-    def update_params(self, loss: torch.Tensor, **kwargs) -> None:
+    def update_params(self, loss: torch.Tensor, step_kwargs=None, zero_kwargs=None) -> None:
Considering that update_params calls step and zero_grad in order, it could be better to provide step_kwargs and zero_kwargs arguments (a rough sketch of what this could look like is below). BTW, we should also update the description of the arguments in the docstring.
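For illustration, a minimal sketch of how update_params could forward the two kwarg dicts. This is not the exact mmengine implementation; the subclass name is hypothetical and the body assumes OptimWrapper's existing scale_loss, backward, should_update, step, and zero_grad helpers.

```python
import torch

from mmengine.optim import OptimWrapper


class KwargsOptimWrapper(OptimWrapper):
    """Hypothetical subclass, only to show how the kwargs could flow."""

    def update_params(self,
                      loss: torch.Tensor,
                      step_kwargs: dict = None,
                      zero_kwargs: dict = None) -> None:
        step_kwargs = {} if step_kwargs is None else step_kwargs
        zero_kwargs = {} if zero_kwargs is None else zero_kwargs
        loss = self.scale_loss(loss)  # apply loss factor for grad accumulation
        self.backward(loss)
        # step/zero_grad only when the accumulation window is complete
        if self.should_update():
            self.step(**step_kwargs)        # forwarded to optimizer.step()
            self.zero_grad(**zero_kwargs)   # forwarded to optimizer.zero_grad()
```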
            step_kwargs (dict): arguments for optimizer.step
            zero_kwargs (dict): arguments for optimizer.zero_grad
Suggested change:
-            step_kwargs (dict): arguments for optimizer.step
-            zero_kwargs (dict): arguments for optimizer.zero_grad
+            step_kwargs (dict, optional): Arguments for optimizer.step.
+                Defaults to None.
+                New in version v0.4.0.
+            zero_kwargs (dict, optional): Arguments for optimizer.zero_grad.
+                Defaults to None.
+                New in version v0.4.0.
I have updated the code as you suggested. Thank you!
Motivation
optimizer.step() supports kwargs (https://github.com/open-mmlab/mmengine/blob/main/mmengine/optim/optimizer/optimizer_wrapper.py#L211), but update_params does not (https://github.com/open-mmlab/mmengine/blob/main/mmengine/optim/optimizer/optimizer_wrapper.py#L176). This would be a problem if I want to write my own model wrapper.
Modification
Add kwargs to update_params
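For example, a caller could then tune the underlying optimizer calls through update_params. This snippet is illustrative only; `model`, `batch`, and `optim_wrapper` are assumed to exist in a custom model wrapper or training loop, and the dicts use standard torch.optim arguments.

```python
# Hypothetical training-step snippet showing where the new kwargs go.
loss = model(batch)  # assume this returns a scalar loss tensor

optim_wrapper.update_params(
    loss,
    step_kwargs=dict(closure=None),       # forwarded to optimizer.step()
    zero_kwargs=dict(set_to_none=True),   # forwarded to optimizer.zero_grad()
)
```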
BC-breaking (Optional)
Does the modification introduce changes that break the backward-compatibility of the downstream repos?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.
Checklist