Conversation
This pull request was exported from Phabricator. Differential Revision: D25383305

Force-pushed 6206fbe to 0bb903c
Force-pushed 0bb903c to 2df0eac
Force-pushed 2df0eac to 26c7167
Force-pushed 84339b7 to 6732052
Force-pushed 6732052 to 51e4270
Force-pushed 51e4270 to c9a0953
Force-pushed c9a0953 to 7537989
Force-pushed 7537989 to 1cdee07
Summary:
Pull Request resolved: facebookresearch/vissl#102
Pull Request resolved: facebookresearch/ClassyVision#666

Add PyTorch AMP support. A follow-up for FairScale would be to add ShardedGradScaler, so that we support mixed precision with ShardedDDP and ShardedOptimizer.

Reviewed By: mannatsingh
Differential Revision: D25383305
fbshipit-source-id: 251b031dd1fe13329301b2bb221987266cf0e6d9
Force-pushed 1cdee07 to 162b70f
Summary:
Pull Request resolved: facebookresearch/vissl#102
Pull Request resolved: facebookresearch/ClassyVision#666

Add PyTorch AMP support. A follow-up for FairScale would be to add ShardedGradScaler, so that we support mixed precision with ShardedDDP and ShardedOptimizer.

Reviewed By: mannatsingh, prigoyal
Differential Revision: D25383305
fbshipit-source-id: fe3be9c850d4aa6e32c48144b04b42832eaa67f8
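The AMP support added here rests on dynamic loss scaling, which in PyTorch is handled by `torch.cuda.amp.GradScaler` (and which a FairScale `ShardedGradScaler` would extend to sharded optimizers). Below is a minimal plain-Python sketch of that scaling logic, with hypothetical names and toy hyperparameters, not the code from this PR:

```python
import math


class MiniGradScaler:
    """Toy sketch of the dynamic loss scaling used by AMP grad scalers."""

    def __init__(self, init_scale=2.0 ** 16, growth_factor=2.0,
                 backoff_factor=0.5, growth_interval=2000):
        self._scale = init_scale
        self._growth_factor = growth_factor
        self._backoff_factor = backoff_factor
        self._growth_interval = growth_interval
        self._good_steps = 0

    def scale(self, loss):
        # Multiply the loss so small fp16 gradients do not underflow to zero.
        return loss * self._scale

    def unscale(self, grads):
        # Divide gradients back to their true magnitude before the
        # optimizer applies them.
        return [g / self._scale for g in grads]

    def update(self, grads):
        # On overflow: shrink the scale and tell the caller to skip the step.
        if any(math.isinf(g) or math.isnan(g) for g in grads):
            self._scale *= self._backoff_factor
            self._good_steps = 0
            return False
        # After enough consecutive good steps, grow the scale again.
        self._good_steps += 1
        if self._good_steps >= self._growth_interval:
            self._scale *= self._growth_factor
            self._good_steps = 0
        return True
```

In a real training loop the same pattern appears as `scaler.scale(loss).backward()`, `scaler.step(optimizer)`, `scaler.update()`; the sketch only shows why an overflow makes the scaler skip the optimizer step and back off the scale.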
Summary:
A first test, to ensure that RegNetV2 and RegNetFSDP have the same loss curves in fp32 in the context of SwAV.

Pull Request resolved: fairinternal/ssl_scaling#102
Reviewed By: prigoyal
Differential Revision: D27881966
Pulled By: QuentinDuval
fbshipit-source-id: eaadefcb56e977977087045e4971432ddec39b1b
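The loss-curve comparison described above can be sketched in miniature: train the same model through two code paths that are supposed to be numerically equivalent, record the loss at each step, and assert the curves match. The quadratic toy model, step functions, and tolerance here are illustrative stand-ins, not the actual RegNetV2/RegNetFSDP harness:

```python
def train_losses(step_fn, w0=0.0, lr=0.1, steps=5):
    """Run gradient descent on f(w) = (w - 3)^2 and record the loss curve."""
    w, losses = w0, []
    for _ in range(steps):
        losses.append((w - 3.0) ** 2)
        w = step_fn(w, lr, grad=2.0 * (w - 3.0))
    return losses


def plain_step(w, lr, grad):
    # Reference implementation of one update.
    return w - lr * grad


def wrapped_step(w, lr, grad):
    # Stand-in for a wrapped/sharded implementation that must
    # match the reference numerically.
    return w - lr * grad


losses_a = train_losses(plain_step)
losses_b = train_losses(wrapped_step)
# The two loss curves must agree element-wise within tolerance.
assert all(abs(a - b) < 1e-8 for a, b in zip(losses_a, losses_b))
```

The real test does the same thing at scale: identical seeds and fp32 arithmetic, so any divergence between the two wrappers shows up as a mismatch in the recorded losses.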
Summary:
Add PyTorch AMP support. A follow-up for FairScale would be to add ShardedGradScaler, so that we support mixed precision with ShardedDDP and ShardedOptimizer.

Differential Revision: D25383305