Support symbolic for conv_tbc (#58359) #58692
Conversation
This is a fix for exporting fairseq models, see:

```python
model = torch.hub.load(github, 'conv.wmt14.en-fr', tokenizer='moses', bpe='subword_nmt')
model = torch.hub.load(github, 'conv.wmt17.en-de', tokenizer='moses', bpe='subword_nmt')
```

With this fix, and after commenting out the single `GradMultiply` line in the model script, these two models can be exported successfully with performance targets met. The original PR #57708 has a merge issue; use this one instead.

Co-authored-by: David <jiafa@microsoft.com>
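For context, `torch.conv_tbc` is the op this PR adds an ONNX symbolic for. It convolves over tensors in (time, batch, channels) layout, which fairseq's convolutional models use. A minimal sketch (not from the PR itself; shapes are illustrative) showing the op and its equivalence to `conv1d`, which is what an ONNX symbolic has to lower it to:

```python
import torch
import torch.nn.functional as F

# conv_tbc input is (time, batch, in_channels); weight is (kernel, in, out).
T, B, C_in, C_out, K, pad = 7, 2, 4, 5, 3, 1
x = torch.randn(T, B, C_in)
w = torch.randn(K, C_in, C_out)
b = torch.randn(C_out)

# Output time dimension is T - K + 1 + 2*pad.
out_tbc = torch.conv_tbc(x, w, b, pad)

# Same result via conv1d: permute TBC -> NCW, convolve, permute back.
out_ref = F.conv1d(
    x.permute(1, 2, 0),    # (B, C_in, T)
    w.permute(2, 1, 0),    # (C_out, C_in, K)
    b,
    padding=pad,
).permute(2, 0, 1)         # back to (T_out, B, C_out)

print(out_tbc.shape)       # torch.Size([7, 2, 5])
print(torch.allclose(out_tbc, out_ref, atol=1e-5))  # True
```

The permute-to-`conv1d` mapping is exactly the shape juggling an exporter must encode in the ONNX graph, since ONNX `Conv` only understands (batch, channels, time).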
💊 CI failures summary and remediations

As of commit a00da05 (more details on the Dr. CI page):

3 failures not recognized by patterns.

This comment was automatically generated by Dr. CI.
@SplitInfinity has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Differential Revision: [D28714809](https://our.internmc.facebook.com/intern/diff/D28714809)
@SplitInfinity merged this pull request in b8c96e6.
Summary:
Pull Request resolved: pytorch#58692

This is a fix for exporting fairseq models, see:

```python
model = torch.hub.load(github, 'conv.wmt14.en-fr', tokenizer='moses', bpe='subword_nmt')
model = torch.hub.load(github, 'conv.wmt17.en-de', tokenizer='moses', bpe='subword_nmt')
```

With this fix, and after commenting out the single `GradMultiply` line in the model script, these two models can be exported successfully with performance targets met. The original PR pytorch#57708 has a merge issue; use this one instead.

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714809

Pulled By: SplitInfinity

fbshipit-source-id: 71c2de6cec7ee05af68560996acf47d97af46fb2

Co-authored-by: David <jiafa@microsoft.com>
Stack from ghstack:
This is a fix for exporting fairseq models, see:

```python
model = torch.hub.load(github, 'conv.wmt14.en-fr', tokenizer='moses', bpe='subword_nmt')
model = torch.hub.load(github, 'conv.wmt17.en-de', tokenizer='moses', bpe='subword_nmt')
```

With this fix, and after commenting out the single `GradMultiply` line in the model script, these two models can be exported successfully with performance targets met. The original PR #57708 has a merge issue; use this one instead.

Co-authored-by: David <jiafa@microsoft.com>
Differential Revision: D28714809