Fix PyTorchJob status inaccuracy when task replicas scale down #1593
Conversation
Hi @PeterChg. Thanks for your PR. I'm waiting for a kubeflow member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @Jeffwan
Pull Request Test Coverage Report for Build 2473439800
💛 - Coveralls
/ok-to-test
Could you please explain why the PyTorch operator needs such a new check?
When HPA scales down the task replicas of a PyTorchJob, the PyTorchJob operator terminates the redundant pods. The phase of those pods becomes Failed before they disappear, which results in a failed PyTorchJob state. So we need to ignore the failure status caused by proactively terminated pods.
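For context, here is a minimal sketch of the idea being discussed, not the actual diff in this PR: when deciding whether a Failed pod should flip the PyTorchJob into a failed state, the controller can skip pods whose replica index falls at or beyond the currently desired replica count, since those pods were terminated on purpose by the scale-down. The label key, function name, and structure below are illustrative assumptions.

```go
package main

import (
	"fmt"
	"strconv"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// replicaIndexLabel is the label assumed to record a pod's replica index;
// the exact key is an assumption for illustration only.
const replicaIndexLabel = "training.kubeflow.org/replica-index"

// countsAsFailure reports whether a Failed pod should be treated as a real
// training failure. Pods whose replica index is at or beyond the desired
// replica count were terminated by a scale-down (e.g. HPA), so their Failed
// phase is expected and should be ignored.
func countsAsFailure(pod *corev1.Pod, desiredReplicas int32) bool {
	if pod.Status.Phase != corev1.PodFailed {
		return false
	}
	idxStr, ok := pod.Labels[replicaIndexLabel]
	if !ok {
		return true // no index label: be conservative and count the failure
	}
	idx, err := strconv.Atoi(idxStr)
	if err != nil {
		return true
	}
	// Index >= desiredReplicas means the pod was removed by a scale-down,
	// not by a training error.
	return int32(idx) < desiredReplicas
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Labels: map[string]string{replicaIndexLabel: "3"},
		},
		Status: corev1.PodStatus{Phase: corev1.PodFailed},
	}
	// Replicas were scaled down from 4 to 2, so the pod at index 3 is
	// expected to terminate; its Failed phase should not fail the job.
	fmt.Println(countsAsFailure(pod, 2)) // false: ignore this failure
}
```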
SGTM
/cc @Jeffwan Excuse me, is there something wrong with the kubeflow-training-operator-presubmit test environment configuration? 2022-05-19 02:07:33 [✖] AWS::EC2::RouteTable/PrivateRouteTableUSWEST2B: CREATE_FAILED – "Resource creation cancelled"
/retest
@PeterChg: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Can you rebase?
/retest
Done.
What about other frameworks? /cc @gaocegege
@zw0610 Should we make the change in MPIJob?
@gaocegege I believe so.
This bug occurs when the number of pods is proactively expanded or shrunk, as in the HPA scenario. Currently the only affected case is PyTorchJob.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: gaocegege, PeterChg. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/hold
Could we merge this PR? Thanks for your contribution! 🎉 👍
Does this pull request assume all PyTorchJobs are elastic? I'm wondering about the expected behavior for a traditional PyTorchJob if the replicas are scaled down.
A traditional PyTorchJob will not encounter a scenario like this.
I will submit another pull request.
Great! In that case, I think we can merge this pull request. @gaocegege
@gaocegege
Can you rebase and resolve merge conflicts?
/lgtm
/lgtm
Thanks for your contribution! 🎉 👍
Excuse me, will this label affect the final mergeability?
/hold cancel Sorry, my bad!
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in Fixes #<issue number>, #<issue number>, ... format, will close the issue(s) when PR gets merged): Fixes #
Checklist: