Improve docs around Math/DefaultBackend & add PythonDispatcher class. #50854
Conversation
[ghstack-poisoned]
ghstack-source-id: f5bdd31e75de4f2050d807e06f1d2c05d8350b6d Pull Request resolved: #50854
Example output of python_dispatcher.py
Cool! "Toy" is a little bit of a misnomer, because you are actually using the real live production dispatcher (similarly, there's no risk of the implementation getting out of date).
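To make that concrete, here is a minimal sketch of how the PythonDispatcher might be exercised. The import path and the method names (`register`, `dispatchTable`) are assumptions based on this discussion, not confirmed from the PR diff; check `torch/_python_dispatcher.py` for the actual interface.

```python
# Hedged sketch: drive the (real, production) dispatcher through the new
# PythonDispatcher helper to preview how registrations resolve.
# NOTE: the import path and method names below are assumptions, not the
# confirmed API from this PR.
from torch._python_dispatcher import PythonDispatcher

dispatcher = PythonDispatcher()

# Pretend our toy op registers a CPU kernel plus a catch-all Math kernel.
dispatcher.register(["CPU", "Math"])

# Because this runs against the live dispatcher rather than a reimplementation,
# the computed table shows how runtime keys like AutogradCPU or XLA would
# resolve for this registration pattern.
print(dispatcher.dispatchTable())
```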
…cher class." [ghstack-poisoned]
…cher class." Differential Revision: [D26008542](https://our.internmc.facebook.com/intern/diff/D26008542) [ghstack-poisoned]
ghstack-source-id: 419ec6bb888c756cd8b831c211c18d276fc4e3b4 Pull Request resolved: #50854
…cher class." Differential Revision: [D26008542](https://our.internmc.facebook.com/intern/diff/D26008542) [ghstack-poisoned]
ghstack-source-id: a9d0ff14b0e884833f7a7681ae2391769bd53f40 Pull Request resolved: #50854
Nice work - it's especially cool to have the python thing running off the actual dispatcher!
Got a decent pile of suggested edits in here but nothing huge. Definitely ping if any of them don't make sense or seem like I've missed the point 😁
aten/src/ATen/native/README.md (Outdated)

> so that it's only used in inference.
> Later for autograd, depending on whether your autograd kernel works for all backends or not,
> you can put them in alias `Autograd` or specific keys like `AutogradCPU`.
>
> If you add `dispatch` section to any API that didn't have it before, you **have to** move
> the old implementation to `Math` field so that it's still available for other backends to use.
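For concreteness, a hypothetical `native_functions.yaml` entry following the quoted guidance; the op and kernel names are invented for illustration and do not come from the PR.

```yaml
# Hypothetical op: backend-specific kernels for inference, plus an autograd
# kernel. If the autograd kernel works for every backend it goes under the
# alias key Autograd; if it were CPU-only it would go under AutogradCPU.
- func: my_op(Tensor self) -> Tensor
  dispatch:
    CPU: my_op_cpu
    CUDA: my_op_cuda
    Autograd: my_op_autograd
```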
I think we need to clarify what we're telling them to do here. "move the old implementation to the `Math` field" means specifying a kernel name that wasn't there before, right? The name we generate is mostly identical to the op name, but e.g. we need to tell them what to do about overload names (I think we just always put an underscore between the base name and the overload name, but I'm not sure if there are exceptions, and also I'm not 100% sure we just pass through the casing) and whatever other name mangling we do.
Aside: I notice there are some ops in `native_functions.yaml` that only specify `Math` kernels right now, with the same name as the op - random examples: `broadcast_to`, `row_stack`. Are those entries necessary?
No, they're not necessary. Having no dispatch section is equivalent to having only a `Math` kernel. I've updated the PR to describe it more precisely here, let me know if this works! :D
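A hypothetical pair of entries illustrating that equivalence (op and kernel names invented for illustration; only one of the two forms would actually appear in the file):

```yaml
# Form 1: no dispatch section. The generated kernel (named after the op)
# is implicitly registered to the Math alias key ...
- func: my_op(Tensor self) -> Tensor

# Form 2: ... which behaves the same as spelling the Math entry out explicitly.
- func: my_op(Tensor self) -> Tensor
  dispatch:
    Math: my_op
```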
…cher class." Differential Revision: [D26008542](https://our.internmc.facebook.com/intern/diff/D26008542) [ghstack-poisoned]
ghstack-source-id: 2d17f9048b9f7cd139d71cc5264b9b887283cad3 Pull Request resolved: #50854
Awesome. Couple of follow-up edits, the main one being because of a confusing suggestion of mine in the earlier review, sorry about that 😛
aten/src/ATen/native/README.md (Outdated)

> If you add backend specific kernel to any operator that only had `Math` kernel before (no dispatch section
> or only `Math` entry in dispatch section), you **have to** move the old implementation to a `<op>_math`
> kernel and put it in `Math` field so that it's still available for other backends to use.
I don't think we actually require a name change though, right? Sorry about my confusing earlier comment btw - I was thinking about telling them how to reconstruct the name of the old implementation, but of course they can just look it up. How about this for the whole paragraph:

> Important: because a `Math` kernel is implicitly registered for ops with no `dispatch:` section, when you add a backend-specific kernel (and hence a `dispatch:` section) to one of these, you must also add a `Math:` entry that names the old kernel implementation (it's named after the op, with `_<overload name>` added if applicable), so that it's still available for other backends to use.
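Sketching the suggested rule with a hypothetical op (the op and kernel names are invented, not taken from the PR):

```yaml
# Before: no dispatch section, so the generated kernel (my_op) is implicitly
# registered to the Math alias key.
- func: my_op(Tensor self) -> Tensor

# After adding a backend-specific kernel, a dispatch section now exists, so
# the old implementation must be named explicitly under Math to stay
# available for other backends.
- func: my_op(Tensor self) -> Tensor
  dispatch:
    CPU: my_op_cpu
    Math: my_op
```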
(I was going to put something about adding a `_math` suffix for clarity, but I just checked native_functions and we don't actually do that! Most `Math:` kernels are just the op name, with a couple that put `math_` in the front. So maybe we shouldn't suggest anything in the docs, and just let people take their cues from what they find in native_functions. If you think it's important for clarity we could say something here, but then we should codemod to make native_functions match, I'd say.)
SGTM! Thanks for the suggestion!
…cher class." Differential Revision: [D26008542](https://our.internmc.facebook.com/intern/diff/D26008542) [ghstack-poisoned]
ghstack-source-id: e48ff49178a534c871986cd66d4ac7c31a033e81 Pull Request resolved: #50854
😍
Codecov Report
|          | gh/ailzhang/43/base | #50854 | +/-    |
|----------|---------------------|--------|--------|
| Coverage | 80.92%              | 80.92% | -0.01% |
| Files    | 1926                | 1927   | +1     |
| Lines    | 210030              | 210085 | +55    |
| Hits     | 169967              | 170010 | +43    |
| Misses   | 40063               | 40075  | +12    |
Stack from ghstack:
Differential Revision: D26008542