
Support batching variable size tensors using nested tensors #219

Open
mosheraboh opened this issue Nov 22, 2022 · 3 comments
Labels
enhancement New feature or request

Comments

@mosheraboh
Collaborator

Is your feature request related to a problem? Please describe.
Support batching variable-size tensors using nested tensors (https://pytorch.org/tutorials/prototype/nestedtensor.html) to avoid padding and improve running time.

Describe the solution you'd like
Add such an option in CollateDefault as an alternative to CollateDefault.pad_all_tensors_to_same_size.

Describe alternatives you've considered
N/A

Additional context
N/A
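A minimal sketch of what such a collate option might look like, assuming torch >= 1.13 (torch.nested is a prototype API and its surface may change). The helper name `collate_nested` is hypothetical and not an existing CollateDefault method:

```python
import torch

def collate_nested(tensors):
    # Batch variable-size tensors without padding them to a common shape.
    # torch.nested is a prototype API (torch >= 1.13) and may change.
    return torch.nested.nested_tensor(tensors)

# Two samples with different first dimensions (3 vs 7).
samples = [torch.randn(3, 5), torch.randn(7, 5)]
batch = collate_nested(samples)
print(batch.is_nested)  # True

# For comparison, the padded alternative materializes a dense 2 x 7 x 5 tensor.
padded = torch.nested.to_padded_tensor(batch, padding=0.0)
print(padded.shape)  # torch.Size([2, 7, 5])
```

The padded version allocates memory for the largest sample in the batch times the batch size, which is the overhead a nested-tensor collate would avoid.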

@mosheraboh mosheraboh added the enhancement New feature or request label Nov 22, 2022
@SagiPolaczek
Collaborator

Note that there is currently a user warning (torch 1.13.0):

UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future.
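If the warning clutters logs, it can be filtered with the standard `warnings` module. A sketch, assuming torch 1.13 (the exact message text may change in later releases, in which case the filter simply matches nothing):

```python
import warnings
import torch

# Silence the prototype-stage warning emitted on nested-tensor construction.
# The message pattern matches the torch 1.13.0 warning text.
with warnings.catch_warnings():
    warnings.filterwarnings(
        "ignore",
        message=".*nested tensors is in prototype stage.*",
    )
    nt = torch.nested.nested_tensor([torch.ones(2), torch.ones(5)])

print(nt.is_nested)  # True
```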

@YoelShoshan
Collaborator

If/when it works well, it could be a very nice GRAM (GPU RAM) saver.

@SagiPolaczek
Collaborator
