[ONNX] handle aten::_set_item on Dict in convertInplaceOpsAndTrackAlias (#58317) #58696
Conversation
It seems the JIT produces an output for aten::_set_item on lists but not on dicts. Previously the code would crash because it assumed it was operating on a list.

The different behavior can be seen with the following test:

```python
import typing
import torch

class DictModule(torch.nn.Module):
    def forward(self, x_in: torch.Tensor) -> typing.Dict[str, torch.Tensor]:
        x_out = {}
        x_out["test_key_out"] = x_in
        return x_out

x_in = torch.tensor(1)
dms = torch.jit.script(DictModule())
torch.onnx.export(dms, (x_in,), "/dev/null", example_outputs=(dms(x_in),))
```

Before this change:

`RuntimeError: outputs_.size() == 1INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/ir.h":452, please report a bug to PyTorch.`

After this change:

`RuntimeError: Exporting the operator prim_DictConstruct to ONNX opset version 9 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.`

This is a more useful error message.

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
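For readers reproducing this locally, here is a hedged, self-contained variant of the repro above (same `DictModule`) that also prints the scripted graph and catches the export error; only the `print(dms.graph)` call and the `try/except` wrapper are additions beyond what the PR description shows.

```python
# A minimal sketch based on the repro in this PR: inspect the TorchScript IR
# that the convertInplaceOpsAndTrackAlias pass operates on, and observe the
# error the export raises after this change.
import typing
import torch

class DictModule(torch.nn.Module):
    def forward(self, x_in: torch.Tensor) -> typing.Dict[str, torch.Tensor]:
        x_out = {}
        x_out["test_key_out"] = x_in
        return x_out

x_in = torch.tensor(1)
dms = torch.jit.script(DictModule())

# The printed graph contains a prim::DictConstruct node for the empty dict
# literal and an aten::_set_item node for the key assignment; per the
# description above, the dict overload of aten::_set_item produces no output
# value, unlike the list overload the pass previously assumed.
print(dms.graph)

try:
    torch.onnx.export(dms, (x_in,), "/dev/null", example_outputs=(dms(x_in),))
except RuntimeError as err:
    # After this change, this prints the "Exporting the operator
    # prim_DictConstruct to ONNX opset version 9 is not supported" message
    # quoted above, instead of the internal assert from ir.h.
    print(err)
```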
💊 CI failures summary and remediations: as of commit b142dcf, 6 failures not recognized by patterns (more details on the Dr. CI page). This comment was automatically generated by Dr. CI.
@SplitInfinity has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@SplitInfinity merged this pull request in 1aabb8f.
[ONNX] handle aten::_set_item on Dict in convertInplaceOpsAndTrackAlias (pytorch#58317) (pytorch#58696)

Summary: Pull Request resolved: pytorch#58696. It seems the JIT produces an output for aten::_set_item on lists but not on dicts. Previously the code would crash because it assumed it was operating on a list. The repro and the before/after error messages are the same as in the PR description above.

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714804

Pulled By: SplitInfinity

fbshipit-source-id: 1e5dc5fb44d1e3f971a22a79b5cf009d7590bf84

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
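As a practical note, not part of this PR: because `prim::DictConstruct` has no ONNX lowering at opset 9, a common workaround for the new error is to have the exported `forward` return tensors directly and reattach any dictionary keys outside the model. The sketch below illustrates that under stated assumptions; `NoDictModule` and its `Linear` layer are hypothetical stand-ins, not code from this repository.

```python
# Hypothetical workaround sketch (not from this PR): return the tensor directly
# instead of wrapping it in a dict, so the exported graph contains no
# prim::DictConstruct node.
import torch

class NoDictModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 3)

    def forward(self, x_in: torch.Tensor) -> torch.Tensor:
        # The value that would otherwise be stored under a key such as
        # "test_key_out" is returned as a plain tensor output.
        return self.linear(x_in)

x_in = torch.randn(1, 3)
m = torch.jit.script(NoDictModule())
# example_outputs is required for script modules in the PyTorch version this
# PR targets (it also appears in the repro above).
torch.onnx.export(m, (x_in,), "/dev/null", example_outputs=(m(x_in),))
```

The mapping from ONNX output names back to dictionary keys can then be kept alongside the exported model by the caller.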