
Improve docs around Math/DefaultBackend & add PythonDispatcher class. #50854

Closed
wants to merge 6 commits

Conversation


@ailzhang ailzhang commented Jan 20, 2021

Stack from ghstack:

Differential Revision: D26008542

ailzhang pushed a commit that referenced this pull request Jan 20, 2021
ghstack-source-id: f5bdd31e75de4f2050d807e06f1d2c05d8350b6d
Pull Request resolved: #50854
@ailzhang (Contributor, Author) commented:

Example output of python_dispatcher.py

# toy.register(["CPU", "Math", "AutogradCPU"])
 λ ~/dev-pytorch/tools toydispatcher python python_dispatcher.py

Computed Dispatch Table
key             kernel
---------------------------
CPU             fn_CPU [kernel]
XLA             fn_Math [math kernel]
QuantizedCPU    fn_Math [math kernel]
AutogradOther   fn_Math [math kernel]
AutogradCPU     fn_AutogradCPU [kernel]
AutogradXLA     fn_Math [math kernel]

# toy.register(["CPU", "QuantizedCPU", "Math", "AutogradCPU"])
 λ ~/dev-pytorch/tools toydispatcher python python_dispatcher.py

Computed Dispatch Table
key             kernel
---------------------------
CPU             fn_CPU [kernel]
XLA             fn_Math [math kernel]
QuantizedCPU    fn_QuantizedCPU [kernel]
AutogradOther   ambiguous_autogradother [ambiguous autogradother]
AutogradCPU     fn_AutogradCPU [kernel]
AutogradXLA     fn_Math [math kernel]
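The resolution rules implied by the two tables above can be sketched as a toy model. This is NOT PyTorch's real dispatcher; the key names, kernel-name formats, and the AutogradOther ambiguity rule here are inferred from the example output:

```python
# Toy model of the computed dispatch table shown above.
OTHER_BACKENDS = {"QuantizedCPU"}  # backends whose autograd key folds into AutogradOther

def compute_table(registered):
    registered = set(registered)
    table = {}
    # Backend keys: a backend-specific kernel wins, else fall back to Math.
    for backend in ("CPU", "XLA", "QuantizedCPU"):
        table[backend] = (f"fn_{backend} [kernel]" if backend in registered
                          else "fn_Math [math kernel]")
    # AutogradOther becomes ambiguous when a Math kernel coexists with a
    # backend kernel for any backend that maps to AutogradOther.
    if "Math" in registered and registered & OTHER_BACKENDS:
        table["AutogradOther"] = "ambiguous_autogradother [ambiguous autogradother]"
    else:
        table["AutogradOther"] = "fn_Math [math kernel]"
    # Per-backend autograd keys: an explicit kernel wins, else Math.
    for key in ("AutogradCPU", "AutogradXLA"):
        table[key] = (f"fn_{key} [kernel]" if key in registered
                      else "fn_Math [math kernel]")
    return table

t1 = compute_table(["CPU", "Math", "AutogradCPU"])
t2 = compute_table(["CPU", "QuantizedCPU", "Math", "AutogradCPU"])
```

With these rules, `t1` and `t2` reproduce the two tables printed above, including the ambiguous AutogradOther entry in the second case.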


@ezyang ezyang left a comment


Cool! "Toy" is a bit of a misnomer, because you are actually using the real live production dispatcher (similarly, there's no risk of the implementation getting out of date).

@ailzhang ailzhang requested a review from smessmer January 21, 2021 22:20
ailzhang pushed a commit that referenced this pull request Jan 21, 2021
ghstack-source-id: 419ec6bb888c756cd8b831c211c18d276fc4e3b4
Pull Request resolved: #50854
ailzhang pushed a commit that referenced this pull request Jan 22, 2021
ghstack-source-id: a9d0ff14b0e884833f7a7681ae2391769bd53f40
Pull Request resolved: #50854

@bhosmer bhosmer left a comment


Nice work - it's especially cool to have the python thing running off the actual dispatcher!

Got a decent pile of suggested edits in here but nothing huge. Definitely ping if any of them don't make sense or seem like I've missed the point 😁

so that it's only used in inference.
Later, for autograd, depending on whether your autograd kernel works for all backends,
you can put it in the alias `Autograd` or in specific keys like `AutogradCPU`.

If you add a `dispatch` section to any API that didn't have one before, you **have to** move
the old implementation to the `Math` field so that it's still available for other backends to use.

@bhosmer bhosmer Jan 25, 2021


I think we need to clarify what we're telling them to do here. "Move the old implementation to the Math field" means specifying a kernel name that wasn't there before, right? The name we generate is mostly identical to the op name, but e.g. we need to tell them what to do about overload names (I think we always put an underscore between the base name and the overload name, but I'm not sure if there are exceptions, and I'm also not 100% sure we just pass through the casing) and whatever other name mangling we do.

Aside: I notice there are some ops in native_functions.yaml that only specify Math kernels right now, with the same name as the op - random examples: broadcast_to, row_stack. Are those entries necessary?


@ailzhang ailzhang Jan 25, 2021


No, they're not necessary. Having no dispatch section is equivalent to having only a Math kernel. I've updated the PR to describe it more precisely here; let me know if this works! :D
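The equivalence described here might be sketched as hypothetical native_functions.yaml entries (the op name `my_op` and the kernel names are made up for illustration):

```yaml
# No dispatch section: a Math kernel named after the op is implicitly
# registered, so all backends (and autograd) can use it.
- func: my_op(Tensor self) -> Tensor

# Equivalent explicit form:
- func: my_op(Tensor self) -> Tensor
  dispatch:
    Math: my_op

# After adding a CPU-specific kernel, the old implementation must stay
# reachable for other backends via an explicit Math entry:
- func: my_op(Tensor self) -> Tensor
  dispatch:
    CPU: my_op_cpu
    Math: my_op
```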

ailzhang pushed a commit that referenced this pull request Jan 25, 2021
ghstack-source-id: 2d17f9048b9f7cd139d71cc5264b9b887283cad3
Pull Request resolved: #50854

@bhosmer bhosmer left a comment


Awesome. Couple followup edits, main one being because of a confusing suggestion of mine in the earlier review, sorry about that 😛


If you add a backend-specific kernel to any operator that only had a `Math` kernel before (no dispatch section,
or only a `Math` entry in the dispatch section), you **have to** move the old implementation to an `<op>_math`
kernel and put it in the `Math` field so that it's still available for other backends to use.


I don't think we actually require a name change though, right? Sorry about my confusing earlier comment btw - I was thinking about telling them how to reconstruct the name of the old implementation, but of course they can just look it up. How about this for the whole paragraph:


Important: because a Math kernel is implicitly registered for ops with no dispatch: section, when you add a backend-specific kernel (and hence a dispatch: section) to one of these, you must also
add a Math: entry that names the old kernel implementation (it's named after the op, with _<overload name> added if applicable), so that it's still available for other backends to use.


(I was going to put something about adding a `_math` suffix for clarity, but I just checked native_functions and we don't actually do that! Most `Math:` kernels are just the op name, with a couple that put `math_` in front.) So maybe we shouldn't suggest anything in the docs, and just let people take their cues from what they find in native_functions. (If you think it's important for clarity we could say something here, but then we should codemod to make native_functions match, I'd say.)

@ailzhang (Contributor, Author) commented:

SGTM! Thanks for the suggestion!
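The naming rule from the discussion above (the implicit kernel is named after the op, with `_<overload name>` appended when the op has an overload) might look like this hypothetical entry; the op `foo` and its kernel names are illustrative, not taken from native_functions.yaml:

```yaml
# Hypothetical overload `Scalar` of a made-up op `foo`.
- func: foo.Scalar(Tensor self, Scalar other) -> Tensor
  dispatch:
    CPU: foo_Scalar_cpu   # new backend-specific kernel
    Math: foo_Scalar      # old implicit kernel: base name + "_" + overload name
```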

ailzhang pushed a commit that referenced this pull request Jan 25, 2021
ghstack-source-id: e48ff49178a534c871986cd66d4ac7c31a033e81
Pull Request resolved: #50854

@bhosmer bhosmer left a comment


😍


codecov bot commented Jan 26, 2021

Codecov Report

Merging #50854 (fe9ddd4) into gh/ailzhang/43/base (28869d5) will decrease coverage by 0.00%.
The diff coverage is 96.36%.

@@                   Coverage Diff                   @@
##           gh/ailzhang/43/base   #50854      +/-   ##
=======================================================
- Coverage                80.92%   80.92%   -0.01%     
=======================================================
  Files                     1926     1927       +1     
  Lines                   210030   210085      +55     
=======================================================
+ Hits                    169967   170010      +43     
- Misses                   40063    40075      +12     

@facebook-github-bot (Contributor) commented:

@ailzhang merged this pull request in a51b9a8.

@facebook-github-bot facebook-github-bot deleted the gh/ailzhang/43/head branch January 29, 2021 15:21