
[BugFix, Feature] Vmap randomness in losses #1740

Merged

merged 21 commits into pytorch:main on Jan 9, 2024

Conversation

BY571
Contributor

@BY571 commented Dec 7, 2023

Description

A RuntimeError is raised when running the off-policy examples with dropout:

RuntimeError: vmap: called random operation while in randomness error mode. Please either use the 'same' or 'different' randomness flags on vmap or perform the randomness operation out of vmap

I propose setting the randomness flag to randomness="different" rather than "same", since "different" matches the natural behavior of running the networks sequentially. We also want different randomness values across, for example, the ensemble Q-networks.
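A minimal repro of this failure mode outside TorchRL (assuming a recent PyTorch where vmap lives in torch.func; the network here is illustrative, not the actual TorchRL module):

```python
import torch
from torch.func import vmap  # functorch.vmap in older PyTorch releases

# Illustrative network with dropout, standing in for an ensemble member.
net = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout(0.1))
x = torch.randn(8, 4)

# Default randomness="error": the dropout call inside vmap raises.
try:
    vmap(net)(x)
    raised = False
except RuntimeError as err:
    raised = "randomness" in str(err)

# randomness="different": each batched sample draws its own dropout mask.
out = vmap(net, randomness="different")(x)
print(raised, out.shape)  # True torch.Size([8, 4])
```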

Motivation and Context

Why is this change required? What problem does it solve?
If it fixes an open issue, please link to the issue here.
You can use the syntax close #15213 if this solves the issue #15213

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of examples)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.


pytorch-bot bot commented Dec 7, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/1740

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (7 Unrelated Failures)

As of commit 6ac65e1 with merge base 11a82c3:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@vmoens
Contributor

vmoens commented Dec 7, 2023

Maybe we could make this optional? I'm assuming that if this isn't the default for vmap, it's because it comes with a drawback; we should investigate.

@BY571
Contributor Author

BY571 commented Dec 7, 2023

Maybe we could make this optional? I'm assuming that if this isn't the default for vmap, it's because it comes with a drawback; we should investigate.

I found this on functorch:

The flag can only be passed to vmap and can take on 3 values, “error,” “different,” or “same,” defaulting to error. Under “error” mode, any call to a random function will produce an error asking the user to use one of the other two flags based on their use case.

We could make it an additional flag on the objective, with "error" as the default, e.g. vmap_randomness="error". But I'm not sure; it only matters when using the objective with networks that use dropout.
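To make the three flag values concrete, a small sketch with dropout (semantics as documented for functorch/torch.func; not TorchRL code):

```python
import torch
from torch.func import vmap

drop = torch.nn.Dropout(p=0.5)
x = torch.ones(4, 3)

# "error" (the default): any random op inside vmap raises a RuntimeError.
try:
    vmap(drop)(x)
    errored = False
except RuntimeError:
    errored = True

# "different": each element along the vmapped dim draws its own mask.
out_different = vmap(drop, randomness="different")(x)

# "same": one mask is drawn and shared along the vmapped dim,
# so every row of the output is identical.
out_same = vmap(drop, randomness="same")(x)
print(errored, torch.equal(out_same[0], out_same[1]))  # True True
```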

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Dec 7, 2023
@vmoens vmoens changed the title Vmap randomness [BugFix, Featuree] Vmap randomness in losses Dec 7, 2023
@vmoens vmoens added bug Something isn't working enhancement New feature or request labels Dec 7, 2023
@vmoens
Contributor

vmoens commented Dec 7, 2023

After thinking about it, here's how I would implement that:

Make a loss module attribute that handles that

class LossModule(nn.Module):
    ...
    _vmap_randomness = None  # default is None

    @property
    def vmap_randomness(self):
        if self._vmap_randomness is None:
            # look for nn.Dropout modules (what else counts as random?)
            for m in self.modules():
                if isinstance(m, nn.Dropout):
                    self._vmap_randomness = "different"
                    break
            else:
                self._vmap_randomness = "error"
        return self._vmap_randomness

    def set_vmap_randomness(self, value):
        self._vmap_randomness = value

    # then adapt losses to call _vmap_func(..., randomness=self.vmap_randomness)

In _vmap_func we do:

def _vmap_func(...):
    try:
        ...
    except RuntimeError as err:
        # better to use re.match here, but anyway
        if "vmap: called random operation while in randomness error mode" in str(err):
            raise RuntimeError(
                "some message that tells users to use loss_module.set_vmap_randomness"
            ) from err
        raise  # re-raise unrelated RuntimeErrors untouched

This has the following advantages:

  • if dropout is in the modules, we don't need to care about a thing
  • if the error is encountered, users will have a way out explicitly stated in the error message
  • In all cases where we don't need to change the default behaviour it isn't changed

We will also need tests for this :)

Wdyt?

@BY571
Contributor Author

BY571 commented Dec 8, 2023

Looks good to me, I will adapt one objective class and then we can recheck whether it's what we expect.

I wonder if at some point we could gather all these general functions into a higher-level objective class, say OffpolicyObjective, from which the specific objectives inherit. Such a class would handle the vmap setup, make_value_estimator, etc. Right now the objective classes keep growing, which probably makes them hard to read and understand for people who just want to see the TD3/SAC loss calculation.

@BY571
Contributor Author

BY571 commented Dec 8, 2023

Turns out that self.modules() does not include modules like Dropout. I updated the function accordingly, using the actor and qvalue modules directly:

    @property
    def vmap_randomness(self):
        if self._vmap_randomness is None:
            # look for nn.Dropout modules
            dropouts = (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)
            for a, q in zip(
                self.actor_network.modules(), self.qvalue_network.modules()
            ):
                if isinstance(a, dropouts) or isinstance(q, dropouts):
                    self._vmap_randomness = "different"
                    break
            else:
                self._vmap_randomness = "error"

        return self._vmap_randomness

let me know what you think :)

@vmoens
Contributor

vmoens commented Dec 8, 2023

OffpolicyObjective

In general we can think about it, but only if there's a clear use case for it beyond vmap compatibility. Plus I'm working on consistent dropout for RL, which also works in the online setting...

@vmoens
Contributor

vmoens commented Dec 8, 2023

    @property
    def vmap_randomness(self):
        if self._vmap_randomness is None:
            # look for nn.Dropout modules
            dropouts = (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)
            for a, q in zip(
                self.actor_network.modules(), self.qvalue_network.modules()
            ):
                if isinstance(a, dropouts) or isinstance(q, dropouts):
                    self._vmap_randomness = "different"
                    break
            else:
                self._vmap_randomness = "error"

        return self._vmap_randomness

Ah right, because we don't want the modules to appear in the modules list!
The problem with this is that it breaks as soon as you name your actor_network something like actor.
Also, why do we assume that the actor and qvalue networks have the same number of modules? I don't think we should use zip; we should iterate over one and then the other. Something like

do_break = False
for val in self.__dict__.values():
    if isinstance(val, nn.Module):
        for module in val.modules():
             if isinstance(module, RANDOM_MODULE_LIST): # not only nn.Dropout is random, could be something else
                 self._vmap_randomness = "different"
                 do_break = True
                 break
    if do_break:
        # double break
        break
else:
    self._vmap_randomness = "error"

(one guy claims that we should break nested loops with exception but I'd rather be dead in a ditch than doing that)
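For concreteness, here's a self-contained sketch of that detection loop with the for/else and double break (the class name, attributes, and RANDOM_MODULE_LIST contents are illustrative, not the actual TorchRL code):

```python
import torch.nn as nn

# Illustrative list; TorchRL ends up using the common dropout parent class.
RANDOM_MODULE_LIST = (nn.Dropout, nn.Dropout2d, nn.Dropout3d)

class ToyLoss:
    def __init__(self):
        self.actor_network = nn.Sequential(nn.Linear(4, 4), nn.Dropout(0.1))
        self.qvalue_network = nn.Linear(4, 1)
        self._vmap_randomness = None

    @property
    def vmap_randomness(self):
        if self._vmap_randomness is None:
            do_break = False
            for val in self.__dict__.values():
                if isinstance(val, nn.Module):
                    for module in val.modules():
                        if isinstance(module, RANDOM_MODULE_LIST):
                            self._vmap_randomness = "different"
                            do_break = True
                            break
                if do_break:
                    # double break
                    break
            else:
                # outer loop completed without break: no random module found
                self._vmap_randomness = "error"
        return self._vmap_randomness

print(ToyLoss().vmap_randomness)  # "different": the actor contains dropout
```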

@BY571
Contributor Author

BY571 commented Dec 11, 2023

Also why do we consider that actor and qvalue networks have the same number of modules? I don't think we should use zip, we should iterate over one and then the other.

You are right, we shouldn't expect them to be the same. I'll update it!

Contributor

@vmoens left a comment

Thanks for this, I think we can drastically reduce the amount of code in this PR by reusing the same methods across classes.
Some tests would be helpful!

@BY571
Contributor Author

BY571 commented Dec 14, 2023

Some tests would be helpful!

Any idea how we could test this effectively? Adding dropout as an option to the mock actors/critics in the objective tests is an option, but that would mean adding it for every objective independently. Not sure that's ideal...

@vmoens
Contributor

vmoens commented Dec 15, 2023

@BY571
You can adapt this:

from torchrl.objectives import LossModule
import torch
from torchrl.objectives.utils import _vmap_func
from tensordict.nn import TensorDictModule as Mod
from torch import nn
from tensordict import TensorDict

def test_loss_vmap_random():
    class MyLoss(LossModule):
        def __init__(self):
            super().__init__()
            mod = Mod(nn.Dropout(0.1), in_keys=["obs"], out_keys=["action"])
            self.convert_to_functional(mod, "mod", expand_dim=4)
            self.vmap_mod = _vmap_func(self.mod, (None, 0))

        def forward(self, td):
            out = self.vmap_mod(td, self.mod_params)
            return {"loss": out["action"].mean()}
    loss_mod = MyLoss()
    td = TensorDict({"obs": torch.randn(3, 4)}, [3])
    loss_mod(td)
test_loss_vmap_random()

Please clean up the code before putting it in ;)

@@ -233,6 +234,38 @@ def set_advantage_keys_through_loss_test(
)


@pytest.mark.parametrize("device", get_default_devices())
Contributor Author

Let me know what you think! :)

Contributor

@vmoens left a comment

Almost there, just a couple of minor edits!
Thanks so much

if vmap_randomness in ("different", "same") and dropout > 0.0:
    loss_module.set_vmap_randomness(vmap_randomness)

loss_module(td)["loss"]
Contributor

maybe let's test that things actually fail if we don't call the set_vmap_randomness before?

Contributor Author

If we don't call loss_module.set_vmap_randomness(vmap_randomness) and a module uses randomness, vmap_randomness defaults to "different". So it's only needed when the user wants a specific vmap_randomness. I think there is no case in which we should expect an error, unless the user manually sets vmap_randomness to "error" and uses dropout, for example.

I can add a test for that, but I'm not sure that's what you meant.

Contributor

only if the user sets vmap_randomness manually to "error" and uses dropout for example

Yes that is what I meant. Here we only test that the code runs, but we're not really checking that it would have been broken had we done things differently.

Contributor Author

added it!

Comment on lines 35 to 36
nn.Dropout2d,
nn.Dropout3d,
Contributor

Do these guys have a parent, common class?

Contributor Author

Indeed, all the Dropouts have a common parent, _DropoutNd. I'll update it!

@@ -29,6 +31,8 @@
"run `loss_module.make_value_estimator(ValueEstimators.<value_fun>, gamma=val)`."
)

RANDOM_MODULE_LIST = (dropout._DropoutNd,)
Contributor Author

@BY571 commented Jan 4, 2024

Should we keep it a tuple in case we are going to extend it in the future?
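For what it's worth, a quick sanity check that the single private parent class covers all the dropout variants (this leans on _DropoutNd, a private PyTorch implementation detail, so it could move in future releases):

```python
import torch.nn as nn
from torch.nn.modules.dropout import _DropoutNd

RANDOM_MODULE_LIST = (_DropoutNd,)  # kept as a tuple so it can grow later

variants = [nn.Dropout(), nn.Dropout2d(), nn.Dropout3d(), nn.AlphaDropout()]
print(all(isinstance(m, RANDOM_MODULE_LIST) for m in variants))  # True
```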

@vmoens vmoens changed the title [BugFix, Featuree] Vmap randomness in losses [BugFix, Feature] Vmap randomness in losses Jan 9, 2024
Contributor

@vmoens left a comment

LGTM

@vmoens vmoens merged commit eb603ab into pytorch:main Jan 9, 2024
57 of 64 checks passed