[ONNX] enable some RNN tests in scripting mode #57082
Conversation
Previously the RNN tests were all disabled in scripting mode, even though many of them actually pass today.
💊 CI failures summary and remediations

As of commit 455d1ee (more details on the Dr. CI page):

🕵️ 3 new failures recognized by patterns. The following CI failures do not appear to be due to upstream breakages:

* pytorch_linux_xenial_py3_6_gcc5_4_jit_legacy_test (1/3), step: "Run tests"
```python
@skipIfUnsupportedMinOpsetVersion(9)
def f(self):
    self.is_script_test_enabled = is_script_test_enabled
```
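For context, the assignment above reads like the inner function of a decorator factory that toggles scripting per test. Below is a hedged reconstruction of what such a factory might look like; only the inner assignment comes from the diff above, while the factory name, signature, and wrapping logic are illustrative assumptions, not the actual pytorch test code:

```python
import functools

def script_test(is_script_test_enabled=True):
    """Hypothetical decorator factory (name and signature assumed):
    record on the test instance whether the scripting variant of this
    ONNX export test should also run."""
    def decorator(func):
        @functools.wraps(func)
        def f(self, *args, **kwargs):
            # The one line visible in the diff: stash the flag where the
            # export test helpers can consult it.
            self.is_script_test_enabled = is_script_test_enabled
            return func(self, *args, **kwargs)
        return f
    return decorator
```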
Is this PR still needed after #57564?

Not needed after #57564.
Summary: Pull Request resolved: #58691

Note the first commit in this PR has its own pull request here since it seemed self-contained: #57082

* [ONNX] simplify batch_first logic in RNN tests
* [ONNX] support GRU with packed input in scripting mode

This required two changes:

* Add as_tensor to symbolic_opset9.py
* Change torch::jit::pushPackingPastRnn to recognize and properly replace another use of the batch_sizes output of prim::PackPadded. Previously the code assumed that the first use was as input to the RNN operator. However, in some cases it is also used to compute max_batch_size, for example in this code: https://github.com/pytorch/pytorch/blob/febff45/torch/nn/modules/rnn.py#L815-L815

With these changes the GRU tests now pass in scripting mode for opset version >= 11.

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714805

Pulled By: SplitInfinity

fbshipit-source-id: f19647a04533d9ec76399a8793b3f712ea0337d2

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
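To make the fixed path concrete, here is a minimal sketch of the scenario these changes enable: a scripted GRU consuming a packed sequence, exported at opset 11. The module, names, and shapes are illustrative assumptions rather than code from the PR, and the exact torch.onnx.export keywords required for ScriptModules vary across PyTorch versions:

```python
import io
import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class PackedGRU(nn.Module):
    """Illustrative module; not taken from the PR."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(input_size=4, hidden_size=8, batch_first=True)

    def forward(self, x, lengths):
        # In the exported graph, packing appears as prim::PackPadded.
        # Its batch_sizes output feeds the GRU and also (inside
        # nn.GRU.forward) the max_batch_size computation. That second use
        # is the one pushPackingPastRnn previously failed to handle.
        packed = pack_padded_sequence(x, lengths, batch_first=True)
        out, h = self.gru(packed)
        out, _ = pad_packed_sequence(out, batch_first=True)
        return out, h

model = torch.jit.script(PackedGRU())
x = torch.randn(3, 5, 4)           # (batch, seq, feature)
lengths = torch.tensor([5, 3, 2])  # sorted in decreasing order
buf = io.BytesIO()
torch.onnx.export(model, (x, lengths), buf, opset_version=11)
```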