Add support for sparse_reduce_max and sparse_reduce_max_sparse #10056
Conversation
Can one of the admins verify this patch?
@yongtang I was imagining the code for
Force-pushed from 3db0b00 to d509e10.
Thanks @concretevitamin for the review. The PR has been updated so that
This looks great! Left a few comments.
@@ -195,7 +209,7 @@ class SparseReduceSumOp : public OpKernel {
// g.group() provides the coordinates of a particular reduced value.
sp.Reorder<T>(reduction.reorder_dims);
for (const auto &g : sp.group(reduction.group_by_dims)) {
group_sum.device(ctx->eigen_cpu_device()) = g.template values<T>().sum();
Op::template Run<T>(ctx, group_sum, g.template values<T>());
Rename group_sum? Something like reduced_val, or something better.
Thanks @concretevitamin. The PR has been updated.
@@ -265,7 +286,7 @@ class SparseReduceSumSparseOp : public OpKernel {
auto group_sum = tmp_group_sum.scalar<T>();
int64 i = 0;
for (const auto &g : sp.group(reduction.group_by_dims)) {
group_sum.device(ctx->eigen_cpu_device()) = g.template values<T>().sum();
Op::template Run<T>(ctx, group_sum, g.template values<T>());
ditto
Done.
@@ -0,0 +1,125 @@
/* Copyright 2016 The TensorFlow Authors. All Rights Reserved.
I'd prefer if you could leave out this file. The reason we originally had a C++ test file for SparseReduceSum was to ease debugging. Since most of the kernels now share the same logic, it is no longer needed; the functional correctness of the new op can be covered by the more lightweight Python unit tests.
Thanks. Done.
@@ -510,10 +510,86 @@ def testNoEmptyRows(self):
self.assertAllEqual(empty_row_indicator_out, np.zeros(2).astype(np.bool))
class SparseReduceMaxTest(test_util.TensorFlowTestCase): |
Can you refactor the existing SparseReduceSumTest to plumb through an extra bool argument, such as do_sum == True / False, so that we can share most of the existing test logic?
Thanks. Done.
Force-pushed from 9fa4189 to d23fd87.
@concretevitamin Thanks for the review. The PR has been updated. Please take a look.
Great! Two final nits. Please address but I'm approving first ;)
static void Run(OpKernelContext *ctx, typename TTypes<T>::Scalar &s,
                const typename TTypes<T>::UnalignedVec &v) {
  s.device(ctx->eigen_cpu_device()) = v.sum();
}
static const char * Name() {
Let's do

StringPiece Name() {
  return "sum";
}

which is more succinct (and potentially safer).
static void Run(OpKernelContext *ctx, typename TTypes<T>::Scalar &s,
                const typename TTypes<T>::UnalignedVec &v) {
  s.device(ctx->eigen_cpu_device()) = v.maximum();
}
static const char * Name() {
ditto
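The pattern under review, where each reduction op exposes a Run and a Name so that the sum and max kernels share one loop, can be sketched in Python for illustration (the class and function names below are hypothetical, not the kernel's actual C++ API):

```python
import numpy as np

class SumOp:
    name = "sum"
    @staticmethod
    def run(values):
        return np.sum(values)

class MaxOp:
    name = "max"
    @staticmethod
    def run(values):
        return np.max(values)

def reduce_groups(groups, op):
    # Mirrors the kernel loop: one reduced value per coordinate group,
    # with the reduction chosen by the op policy passed in.
    return [op.run(g) for g in groups]
```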
Force-pushed from d23fd87 to cdd89e2.
Thanks @concretevitamin. The PR has been updated.
Jenkins, test this please. @andrewharp feel free to merge.
This change will require API review.
Fine for API review.
Can you rebase and fix the conflicts? I cannot fix things in this PR.
Force-pushed from cdd89e2 to 9f4c38a.
Thanks @martinwicke. The PR has been rebased. Please take a look.
Jenkins, test this please.
Can you please run the API goldens updater as described in the error message of the failing Linux-CPU test?
This commit tries to address the issue raised in #10002 to add support for sparse_reduce_max and sparse_reduce_max_sparse. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This commit moves sparse_reduce_sum_op.cc to sparse_reduce_op.cc so that both `sparse_reduce_sum` and `sparse_reduce_max` share the same code base. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Fixes #10056 Co-authored-by: pparuzel <paruzelp@google.com> Co-authored-by: Adam-Banas <adambanas@google.com> PiperOrigin-RevId: 615387700
Add test cases for handling nested tuples (given both as inputs and outputs) to CPU custom call typed FFI API tests. Implements #10056. PiperOrigin-RevId: 627008898
Add test cases for handling nested tuples (given both as inputs and outputs) to CPU custom call typed FFI API tests. Implements #10056. PiperOrigin-RevId: 627368299
…d FFI Part of #10056 Co-authored-by: pparuzel paruzelp@google.com Co-authored-by: Adam-Banas adambanas@google.com PiperOrigin-RevId: 629372087
Fixes #10056 Co-authored-by: pparuzel <paruzelp@google.com> Co-authored-by: Adam-Banas <adambanas@google.com> PiperOrigin-RevId: 630339013
This commit tries to address the issue raised in #10002 to add
support for sparse_reduce_max and sparse_reduce_max_sparse.
Signed-off-by: Yong Tang yong.tang.github@outlook.com
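To illustrate the semantics this PR adds, here is a rough NumPy approximation of a max reduction over a sparse tensor's stored values along one axis. This is a simplified sketch, not the TensorFlow implementation; in particular, groups with no stored values are left at -inf here rather than following the real op's conventions.

```python
import numpy as np

def sparse_reduce_max_sketch(indices, values, dense_shape, axis):
    # Group the stored values by the coordinates that remain after
    # dropping `axis`, then take the max within each group.
    out_shape = tuple(s for i, s in enumerate(dense_shape) if i != axis)
    out = np.full(out_shape, -np.inf)
    for idx, v in zip(indices, values):
        key = tuple(c for i, c in enumerate(idx) if i != axis)
        out[key] = max(out[key], v)
    return out
```

For example, reducing a 2x2 sparse tensor with entries (0,0)=1, (0,1)=5, (1,1)=3 along axis 1 yields the per-row maxima [5, 3].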