[BugFix, Feature] Vmap randomness in losses #1740
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/1740

Note: Links to docs will display an error until the docs builds have been completed. ✅ You can merge normally! (7 unrelated failures.) As of commit 6ac65e1 with merge base 11a82c3: FLAKY — the following jobs failed but were likely due to flakiness present on trunk.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Maybe we could make this optional? I'm assuming that if this isn't the default for vmap, it is because it comes with a drawback; we should investigate.
I found this on functorch:
We could make it an additional flag for the objective, with `"error"` as the default.
After thinking about it, here's how I would implement that: make a loss module attribute that handles it.

```python
class LossModule(nn.Module):
    ...
    _vmap_randomness = None  # default is None

    @property
    def vmap_randomness(self):
        if self._vmap_randomness is None:
            # look for nn.Dropout modules (what else counts as random?)
            for m in self.modules():
                if isinstance(m, (nn.Dropout,)):
                    self._vmap_randomness = "different"
                    break
            else:
                self._vmap_randomness = "error"
        return self._vmap_randomness

    def set_vmap_randomness(self, value):
        self._vmap_randomness = value
```

Then adapt the losses to do `_vmap_func(..., randomness=self.vmap_randomness)`. In `_vmap_func`:

```python
def _vmap_func(...):
    try:
        ...
    except RuntimeError as err:
        # better to use re.match here, but anyway
        if "vmap: called random operation while in randomness error mode" in str(err):
            raise RuntimeError(
                "some message that tells users to use loss_module.set_vmap_randomness"
            ) from err
```

This has the following advantages:
We will also need tests for this :) Wdyt?
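The error-translation idea can be sanity-checked without torch. Here is a minimal sketch, where a hypothetical `_inner_vmapped_call` stands in for the real vmapped module call and the hint message is only illustrative:

```python
def _inner_vmapped_call():
    # stand-in for the real vmapped call hitting a random op in "error" mode
    raise RuntimeError("vmap: called random operation while in randomness error mode")


def _vmap_func_sketch():
    try:
        return _inner_vmapped_call()
    except RuntimeError as err:
        if "vmap: called random operation while in randomness error mode" in str(err):
            # re-raise with an actionable hint, keeping the original as __cause__
            raise RuntimeError(
                "This loss seems to contain a random module; call "
                "loss_module.set_vmap_randomness('different') before running it under vmap."
            ) from err
        raise


hint = cause = ""
try:
    _vmap_func_sketch()
except RuntimeError as err:
    hint = str(err)
    cause = str(err.__cause__)

assert "set_vmap_randomness" in hint
assert "randomness error mode" in cause
```

Using `raise ... from err` keeps the original vmap error visible in the traceback, so the actionable message does not hide the root cause.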
Looks good to me, I will adapt one objective class and then we can recheck if it's what we expect. I wonder if at some point we can put all those general functions together in a higher-level objective class, let's say OffpolicyObjective, from which the specific objectives inherit. Such a higher-level class would handle the vmap and also the make_value_estimator etc. Right now the objective classes get bigger and bigger, which is probably hard to read and understand for people who just want to see the TD3/SAC loss calculation.
Turns out that `self.modules()` does not include modules like Dropout. I updated the function accordingly, using the actor and qvalue modules directly.
Let me know what you think :)
In general we can think about it, but only if there's a clear use case for it, not just vmap compatibility. Plus I'm working on consistent dropout for RL, which also works in the online setting...
Ah right, because we don't want the modules to appear in the ...

```python
do_break = False
for val in self.__dict__.values():
    if isinstance(val, nn.Module):
        for module in val.modules():
            # not only nn.Dropout is random, could be something else
            if isinstance(module, RANDOM_MODULE_LIST):
                self._vmap_randomness = "different"
                do_break = True
                break
    if do_break:
        # double break
        break
else:
    self._vmap_randomness = "error"
```

(One guy claims that we should break nested loops with an exception, but I'd rather be dead in a ditch than do that.)
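For what it's worth, the double break can also be avoided with a flattened generator plus `any()`, which short-circuits at the first random module. A torch-free sketch, with stub classes standing in for `nn.Module` and a random module (all names hypothetical):

```python
class Module:
    """Stand-in for nn.Module with a recursive modules() iterator."""

    def modules(self):
        yield self
        for val in self.__dict__.values():
            if isinstance(val, Module):
                yield from val.modules()


class Dropout(Module):
    pass


RANDOM_MODULE_LIST = (Dropout,)  # tuple, so it is easy to extend later


def detect_randomness(loss):
    # Flatten the nested iteration into one generator; any() stops at the
    # first random module, just like the double break does.
    has_random = any(
        isinstance(module, RANDOM_MODULE_LIST)
        for val in loss.__dict__.values()
        if isinstance(val, Module)
        for module in val.modules()
    )
    return "different" if has_random else "error"


loss = Module()
loss.critic = Module()
assert detect_randomness(loss) == "error"

loss.actor = Dropout()
assert detect_randomness(loss) == "different"
```

This keeps the detection logic in one expression, at the cost of being a little denser to read than the explicit loops.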
You are right, we shouldn't expect them to be the same. I'll update it! |
Thanks for this, I think we can drastically reduce the amount of code in this PR by reusing the same methods across classes.
Some tests would be helpful!
Any idea regarding how we could test it effectively? Adding dropout as an option to the mock actors/critics in the tests for the objectives is an option, but this would mean adding it for all of the objectives independently again. Not sure if that's ideal...
@BY571

```python
from torchrl.objectives import LossModule
import torch
from torchrl.objectives.utils import _vmap_func
from tensordict.nn import TensorDictModule as Mod
from torch import nn
from tensordict import TensorDict


def test_loss_vmap_random():
    class MyLoss(LossModule):
        def __init__(self):
            super().__init__()
            mod = Mod(nn.Dropout(0.1), in_keys=["obs"], out_keys=["action"])
            self.convert_to_functional(mod, "mod", expand_dim=4)
            self.vmap_mod = _vmap_func(self.mod, (None, 0))

        def forward(self, td):
            out = self.vmap_mod(td, self.mod_params)
            return {"loss": out["action"].mean()}

    loss_mod = MyLoss()
    td = TensorDict({"obs": torch.randn(3, 4)}, [3])
    loss_mod(td)


test_loss_vmap_random()
```

Please clean up the code before putting it in ;)
```
@@ -233,6 +234,38 @@ def set_advantage_keys_through_loss_test(
    )


@pytest.mark.parametrize("device", get_default_devices())
```
Let me know what you think! :)
Almost there, just a couple of minor edits!
Thanks so much
```python
if vmap_randomness in ("different", "same") and dropout > 0.0:
    loss_module.set_vmap_randomness(vmap_randomness)

loss_module(td)["loss"]
```
Maybe let's test that things actually fail if we don't call `set_vmap_randomness` before?
If we don't do `loss_module.set_vmap_randomness(vmap_randomness)` and have a module that uses randomness, `vmap_randomness` defaults to "different". So it's only needed if the user wants a specific `vmap_randomness`. I think there is no case in which we should expect an error, unless the user sets `vmap_randomness` manually to "error" and uses dropout, for example.
I can add a test for that, but not sure if that's what you meant.
> only if the user sets vmap_randomness manually to "error" and uses dropout for example

Yes, that is what I meant. Here we only test that the code runs, but we're not really checking that it would have been broken had we done things differently.
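The failure-path test could look roughly like the following; this is a torch-free sketch in which a hypothetical `StubVmapLoss` mimics vmap's behavior in "error" randomness mode, so only the test shape is meant, not real torchrl API:

```python
class StubVmapLoss:
    """Hypothetical stand-in: raises like vmap does in 'error' randomness mode."""

    def __init__(self, has_dropout=True):
        self.has_dropout = has_dropout
        # auto-detection default: "different" when a random module is present
        self._vmap_randomness = "different" if has_dropout else "error"

    def set_vmap_randomness(self, value):
        self._vmap_randomness = value

    def __call__(self):
        if self.has_dropout and self._vmap_randomness == "error":
            raise RuntimeError(
                "vmap: called random operation while in randomness error mode"
            )
        return {"loss": 0.0}


# happy path: the default is auto-detected as "different", so the call runs
assert StubVmapLoss()() == {"loss": 0.0}

# failure path: forcing "error" while dropout is present must raise
loss = StubVmapLoss()
loss.set_vmap_randomness("error")
caught = False
try:
    loss()
except RuntimeError as err:
    caught = "randomness error mode" in str(err)
assert caught
```

The real test would do the same with a loss module containing dropout: assert it runs with the default, then assert it raises after `set_vmap_randomness("error")`.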
added it!
torchrl/objectives/utils.py (Outdated)

```python
nn.Dropout2d,
nn.Dropout3d,
```
Do these guys have a common parent class?
Indeed, all the Dropouts have a common parent, `_DropoutNd`. I'll update it!
```
@@ -29,6 +31,8 @@
    "run `loss_module.make_value_estimator(ValueEstimators.<value_fun>, gamma=val)`."
)


RANDOM_MODULE_LIST = (dropout._DropoutNd,)
```
Should we keep it a tuple in case we are going to extend it in the future?
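A quick torch-free sketch of why a tuple is convenient here: `isinstance` accepts a tuple of classes directly, and extending the list later is a one-line change. The stub classes below stand in for `_DropoutNd` and a hypothetical future random module:

```python
class _DropoutNd:  # stand-in for nn.modules.dropout._DropoutNd
    pass


class Dropout(_DropoutNd):
    pass


class NoisyLinear:  # hypothetical future random module
    pass


RANDOM_MODULE_LIST = (_DropoutNd,)

# isinstance takes the tuple as-is, covering every _DropoutNd subclass
assert isinstance(Dropout(), RANDOM_MODULE_LIST)
assert not isinstance(NoisyLinear(), RANDOM_MODULE_LIST)

# extending the list is just tuple concatenation
RANDOM_MODULE_LIST = RANDOM_MODULE_LIST + (NoisyLinear,)
assert isinstance(NoisyLinear(), RANDOM_MODULE_LIST)
```

So keeping it a tuple costs nothing now and keeps the detection code unchanged if more random module types are added later.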
LGTM
Description

RuntimeError when running off-policy examples with dropout. Propose setting the randomness flag to `randomness="different"` and not `"same"`, as this would be the natural case when running over the networks sequentially. Also, we want to have different randomness values in, for example, the ensemble Q-networks.

Motivation and Context

Why is this change required? What problem does it solve? If it fixes an open issue, please link to the issue here. You can use the syntax `close #15213` if this solves the issue #15213.

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

Checklist

Go over all the following points, and put an `x` in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!