Added an option to use the pure python implementation. #1137
Conversation
Bazel is really not easy for python devs.
I can do:
and it works. Is there a flag to make bazel respect basic python rules? This is really frustrating to spend so much time on this PR just to add a single python file.
@gabrieldemarmiesse yeah, that happens with bazel due to the way it resolves path imports. Haven't found a fix for it yet.
@googlebot I consent.
CLAs look good, thanks!
```diff
@@ -16,6 +16,7 @@ py_library(
         "tanhshrink.py",
     ],
     data = [
+        "//tensorflow_addons:options.py",
```
Agree that this is painful and unintuitive. Sorry for any wasted time. We could raise this with the bazel team, but I believe this is fundamental to how this resolving works.
Yeah, it's linked to how bazel works, and that makes sense for monorepos like tensorflow. I'm not convinced the added value of building the wheels and running the tests with bazel outweighs the costs in addons. It makes total sense to use it to build the SO, though. I know we had a similar conversation a while ago; maybe I'll open an issue so that we can discuss it more. Anyway, thanks for the fix! It helps a lot!
Yeah, I think this can be discussed further. Typical contributions to Addons will/should fall into a subpackage that already has the BUILD set up. This PR is a change that affects the entirety of the project, so it's a bit more complicated. I agree we should encourage contributors to make these types of changes, but we can look at the pros and cons again to see if it's worth splitting the C++ tests/builds from the Python tests/builds.
This should be doable since it throws a python error instead of a crash.
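The fallback mechanism being discussed could be sketched roughly as follows. This is a hypothetical illustration, not the actual Addons code: `_custom_op`, `_py_op`, and `dispatch` are invented names, and the key point is that a failed custom-op load surfaces as a Python exception rather than a hard crash, so it can be caught:

```python
import os
import warnings


def _py_op(x):
    # Stand-in for the pure Python implementation of an op.
    return x * 2


def _custom_op(x):
    # Stand-in for a compiled custom op whose shared object failed to
    # load; in TF this surfaces as a Python exception, not a crash.
    raise NotImplementedError("custom op .so could not be loaded")


def dispatch(x):
    # Hypothetical dispatch: honor the TF_ADDONS_PY_OPS environment
    # variable, otherwise try the compiled op and fall back to the pure
    # Python implementation with a warning if it raises.
    if os.environ.get("TF_ADDONS_PY_OPS", "0") == "1":
        return _py_op(x)
    try:
        return _custom_op(x)
    except NotImplementedError:
        warnings.warn(
            "The compiled custom op could not be used; falling back to "
            "the pure Python implementation. This may be slower."
        )
        return _py_op(x)
```

With this shape, setting `TF_ADDONS_PY_OPS=1` skips the compiled path entirely, while the default path only warns when the compiled op actually fails.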
Good idea, but I'm not sure this is always feasible, so it may be dependent on the subpackage. For example, image kernels may be hard to reproduce without depending on OpenCV or something.
Agree, but I think macOS users are out of luck unless they're using a ROCm-supported TF. AFAIK there hasn't been an NVIDIA GPU in a macOS system for a long time. I could be wrong on the current state of this:
I'll implement the fallback in this PR. We can change the CONTRIBUTING in another one. There seems to be a lot of activity around ROCm in the tensorflow/tensorflow repo. Do you guys know what the status is? It'd be cool if TF were to support AMD officially :)
Here is the warning message:
Looks very close to LGTM. Thanks for the effort on this. A few comments: I think we need to clarify that this is not entirely a GPU issue. The slowdown will occur for not loading CPU ops as well.
Almost LGTM. Minor edits.
One last question I have is about the implications of serializing and deserializing these ops. For example, if someone trained a model with custom ops and then someone else loaded it using py_ops, what guarantees do we have that they behave the same? I suppose as long as we have test cases ensuring they're the same?
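The guarantee asked about here is usually established with a parity test that runs both code paths on the same inputs and asserts numerical agreement. A minimal sketch, with NumPy stand-ins for both implementations (the function names are illustrative, not the Addons API):

```python
import numpy as np


def softshrink_custom(x, lower=-0.5, upper=0.5):
    # Stand-in for the compiled custom-op implementation.
    return np.where(x < lower, x - lower, np.where(x > upper, x - upper, 0.0))


def softshrink_py(x, lower=-0.5, upper=0.5):
    # Stand-in for the pure Python implementation used by TF_ADDONS_PY_OPS.
    return np.where(x < lower, x - lower, np.where(x > upper, x - upper, 0.0))


def test_custom_and_py_ops_match():
    # If the two code paths agree numerically on the forward pass, a model
    # serialized with one can be deserialized and run with the other.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000).astype(np.float32)
    np.testing.assert_allclose(softshrink_custom(x), softshrink_py(x), rtol=1e-6)
```

Running such a test for every op that has both implementations is what would back the "as long as we have test cases" condition in the comment above.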
Co-Authored-By: Sean Morgan <seanmorgan91@gmail.com>
I'm unsure about serialization. I think we need another opinion on this one.
@facaiy @seanpmorgan @tensorflow/sig-addons-maintainers You are owners of some files modified in this pull request.
Sorry about the spam. My bot is training.
That's the only way I can see it working. Also, in
LGTM Thanks!
* Added an option to use the pure python implementation.
See #1114
What is left to do:
- fallback to the pure python implementation and print a warning explaining TF_ADDONS_PY_OPS and the addons versions compatible with the current tf.
- May also flip the TF_ADDONS_PY_OPS switch automatically?
I think macOS and Windows users will be happy to use the activation functions on GPU :)
I also believe that the description of TF addons discouraged Windows and macOS users from using addons when they want to use the GPU too. That's not good, because the majority of the components of addons work on Windows and Mac with a GPU.