[Feature]: Added support for TensorDictSequence module subsampling #332

Merged
vmoens merged 12 commits into pytorch:main from keys_overide_tdsequence on Aug 10, 2022

Conversation

@nicolas-dufour nicolas-dufour commented Jul 28, 2022

Description

This PR adds the ability to retrieve, from a TensorDictSequence, a consistent sub-network restricted to a given set of inputs and outputs. Only the blocks involved in the computation between those inputs and outputs are called.
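
A minimal usage sketch of the feature (class and method names, and import paths, are assumptions based on this discussion; the merged API may differ). A two-stage network maps "obs" to "hidden" to "action", and we select the sub-network that maps "hidden" to "action":

import torch
from torch import nn
from torchrl.modules import TensorDictModule, TensorDictSequence  # paths assumed

encoder = TensorDictModule(nn.Linear(8, 4), in_keys=["obs"], out_keys=["hidden"])
policy = TensorDictModule(nn.Linear(4, 2), in_keys=["hidden"], out_keys=["action"])
seq = TensorDictSequence(encoder, policy)

# Only the blocks needed to compute "action" from "hidden" are kept,
# so `encoder` is never called through `sub_seq`.
sub_seq = seq.select_subsequence(in_keys=["hidden"], out_keys=["action"])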

Motivation and Context

The tracing capabilities of TensorDictSequence make it possible to look at restricted sub-networks with little effort. This allows for faster code execution (compared to running full modules when only part of the output is needed) and greater readability.

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

@facebook-github-bot added the "CLA Signed" label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Jul 28, 2022
@nicolas-dufour nicolas-dufour requested a review from vmoens August 1, 2022 09:51
@vmoens vmoens left a comment

To make this scriptable, I would rather have a separate method that returns a sub-TensorDictSequence:

seq = TensorDictSequence(...)
sub_seq = seq.select(in_keys_filter=..., out_keys_filter=...)

Would that be doable?

@nicolas-dufour nicolas-dufour commented Aug 5, 2022

@vmoens won't it be a problem to create new sub_seqs too frequently? Typically, for Dreamer, this would require accessing the subsequence after every training step, so if we create a subsequence each time, won't performance suffer?
The advantage of the current method is that it offers different computation paths while keeping the weight structure fixed.

@vmoens vmoens commented Aug 5, 2022

Can't you reuse it? What would change between two iterations?

@nicolas-dufour

The weights of the main sequence will have been updated, so we need to update the weights of the sub-sequence.

@vmoens vmoens commented Aug 5, 2022

Not if they're the same weights; updates are done in place.
If that's an issue, you can always work in functional mode 😊
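
(A minimal pure-PyTorch sketch of the point being made here, with hypothetical modules: a sub-network built from the same module objects holds the same parameter tensors as the parent, so in-place optimizer steps are visible through both.)

import torch
from torch import nn

linear = nn.Linear(4, 4)
parent = nn.Sequential(linear)   # stands in for the full TensorDictSequence
sub = nn.Sequential(linear)      # stands in for the selected sub-sequence

opt = torch.optim.SGD(parent.parameters(), lr=0.1)
parent(torch.randn(2, 4)).sum().backward()
opt.step()  # in-place update of the shared tensors

assert sub[0].weight is linear.weight  # same tensor object: the sub-network is never stale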

@nicolas-dufour

I made the proposed changes and it works as expected. Thanks for the suggestion!

@vmoens vmoens changed the title [Feature]: Added support for TDSequence module subsampling [Feature]: Added support for TensorDictSequence module subsampling Aug 5, 2022
@vmoens vmoens left a comment

Are we sure we want to select the in_keys?

torchrl/modules/tensordict_module/sequence.py (outdated)
out_keys = deepcopy(self.out_keys)
id_to_keep = set([i for i in range(len(self.module))])
for i, module in enumerate(self.module):
if all(key in in_keys for key in module.in_keys):
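
(For context, here is a hypothetical completion of the selection logic quoted above; a sketch, not the PR's actual code. The idea: a module is kept once all of its in_keys are available, and its out_keys then become available to the modules after it.)

def select_module_ids(modules, in_keys):
    # Walk the sequence in order, tracking which keys are computable so far.
    available = set(in_keys)
    id_to_keep = []
    for i, module in enumerate(modules):
        if all(key in available for key in module.in_keys):
            id_to_keep.append(i)
            available.update(module.out_keys)
    return id_to_keep
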
Contributor

What's the usage of selecting in_keys? I can understand why we want to restrict the outputs, but I don't really see when we want to restrict inputs.

Contributor

Also, what happens if you say you want some out_keys but they conflict with the in_keys? Is the sequence going to be empty?

Contributor Author

We want to be able to select the in_keys so that we can directly feed an intermediate block. For example, imagine you have a hidden representation that you want to inject from a precomputed tensordict; this allows you to do so. (See the sketch below.)
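
(A hypothetical illustration of this injection pattern, reusing `seq` from the sketch in the description; the method name and import path are assumptions.)

from torchrl.data import TensorDict  # import path assumed for this version of torchrl

# "hidden" is precomputed elsewhere; the encoder never runs.
td = TensorDict({"hidden": torch.randn(2, 4)}, batch_size=[2])
sub_seq = seq.select_subsequence(in_keys=["hidden"], out_keys=["action"])
sub_seq(td)  # writes "action" into td using only the downstream blocks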

Contributor Author

Yes, it would be empty if your out_keys come before the in_keys.

Contributor

Got it.
We're doing something similar in #352: you can input an incomplete tensordict and only the relevant ops will be executed. I wonder if we need both ways of doing the same thing. The advantage of your implementation is that it is self-consistent, though.

torchrl/modules/tensordict_module/sequence.py
@vmoens vmoens added the "enhancement" label (New feature or request) on Aug 8, 2022
@vmoens vmoens commented Aug 10, 2022

I see that the tests are failing; have you tried merging main into this branch?
Then I think we're ready to go, once the tests pass.

@vmoens vmoens merged commit 69e7948 into pytorch:main Aug 10, 2022
@nicolas-dufour nicolas-dufour deleted the keys_overide_tdsequence branch September 16, 2022 16:37