Adds DLPack support #57110
We should add a test that compares the DLPack semantics with our `.numpy()` and `from_numpy()` semantics. If/when NumPy implements the protocol, we could properly validate that the behavior is the same.
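A test along these lines could compare memory sharing between the two import paths; a minimal sketch, assuming NumPy >= 1.22 (which implements `__dlpack__`) and a `torch.from_dlpack` entry point:

```python
import numpy as np
import torch

# Sketch: importing via DLPack should share memory, just like from_numpy().
arr = np.arange(6, dtype=np.float32)

t_numpy = torch.from_numpy(arr)
t_dlpack = torch.from_dlpack(arr)  # NumPy >= 1.22 implements __dlpack__

# Writes through either tensor should be visible in the source array.
t_numpy[0] = 100.0
assert arr[0] == 100.0

t_dlpack[1] = 200.0
assert arr[1] == 200.0
```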
Sorry, I don't understand what should be compared here.
Now I think I understand: a test that checks that tensors from DLPack can't be resized, and that tensors with gradients can't be exported, in the same sense that NumPy does. Is this correct?
Yes: tensors with gradients or with the conjugate bit set, I think, can't be exported.
For importing, we should verify that the underlying memory is shared by writing to it on both CPU and CUDA.
NumPy arrays can also be non-writable, which we check for on import. I'm not sure what (if any) special properties DLPack capsules have that PyTorch can't emulate.
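A CPU-only sketch of such a test (the CUDA half would mirror it on a `cuda` tensor); the `RuntimeError` on exporting a tensor that requires grad is the behavior this PR adds, and the exact error message is not asserted:

```python
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

# Export: tensors that require grad should be rejected by __dlpack__.
x = torch.randn(3, requires_grad=True)
try:
    x.__dlpack__()
    exported = True
except RuntimeError:
    exported = False
assert not exported

# Import: the underlying memory must be shared with the exporter,
# so a write through the imported tensor is visible in the original.
y = torch.arange(4.0)
z = from_dlpack(to_dlpack(y))
z[0] = 42.0
assert y[0].item() == 42.0
```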
Maybe add a cross-reference (xref dmlc/dlpack#75): adding `kDLBool` is being discussed there.
The first six lines of each test are the same; maybe it's worth leaving the comment only in the first test and shortening the rest to:
much better! thanks :)
Do you need a synchronize call here to ensure `x` has been populated on a CUDA device before it's accessed on a different stream?
Here we want to test how the DLPack interface performs this synchronization itself, so we avoid doing it in the test.
I added a comment to clarify this.
This still has TODOs. I think it would be nice if this returned `Tuple[enum.IntEnum, int]` as in the spec: https://data-apis.org/array-api/latest/API_specification/array_object.html#dlpack-device-self
This is a bit out of scope, but if we were to support these other devices, how would stream support work?
Should the stream argument be ignored in environments where a stream does not make sense?
I think what @rgommers meant is to change the return type of this function. For `__dlpack_device__`, whether a device has the concept of a stream/queue doesn't matter. For `__dlpack__`, `stream` can be `Any`: https://data-apis.org/array-api/latest/API_specification/array_object.html#dlpack-self-stream-none
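For reference, a pure-Python sketch of what that return type would look like; the `DLDeviceType` values are copied from `dlpack.h`, and the helper itself is hypothetical, not PyTorch's API:

```python
import enum
from typing import Tuple


class DLDeviceType(enum.IntEnum):
    # Subset of device types from dlpack.h.
    kDLCPU = 1
    kDLCUDA = 2


def dlpack_device(device_type: str, index: int = 0) -> Tuple[enum.IntEnum, int]:
    """Hypothetical __dlpack_device__ result for a given device string."""
    if device_type == "cpu":
        return (DLDeviceType.kDLCPU, 0)
    if device_type == "cuda":
        return (DLDeviceType.kDLCUDA, index)
    raise NotImplementedError(device_type)


# IntEnum members compare equal to plain ints, so the tuple matches the spec.
assert dlpack_device("cpu") == (1, 0)
assert dlpack_device("cuda", 1) == (2, 1)
```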
"Converts a DLPack capsule or object implementing the DLPack protocol into a tensor."
The phrase "DLPack capsule" should be a link to the DLPack documentation.
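A short usage sketch of the two accepted inputs (a capsule vs. an object implementing the protocol); note that a capsule can only be consumed once:

```python
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

# 1) From a DLPack capsule produced by to_dlpack().
t = torch.arange(4.0)
capsule = to_dlpack(t)
t2 = from_dlpack(capsule)  # shares memory with t; the capsule is now consumed

t2[0] = -1.0
assert t[0].item() == -1.0

# 2) From any object implementing __dlpack__ / __dlpack_device__,
#    e.g. another torch tensor (or a NumPy >= 1.22 array).
t3 = from_dlpack(torch.ones(3))
assert t3.shape == (3,)
```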
This comment still needs to be resolved.
There's no such thing yet. DLPack itself only has comments in dlpack.h. `__dlpack__` is part of the array API standard, see here.
I am fine with adding the link to the array API standard; would it be possible to do it in another PR?
Thanks!