Fix a scheduler preemption issue where the victim isn't properly patched, leading to preemption not functioning as expected #126644
Conversation
Please note that we're already in Test Freeze for the release-1.31 branch. Fast forwards are scheduled to happen every 6 hours, whereas the most recent run was: Mon Aug 12 17:37:05 UTC 2024.
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Thanks for the quick fix.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Huang-Wei, xiazhan. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/lgtm
LGTM label has been added. Git tree hash: 58e34b39a055e45292ac3fd14cb9a6ceff9a0fcd
Are bugfixes candidates for test freeze?
I suppose we have to wait for the next set of patch releases.
Yup, it's not a regression introduced in 1.31. Let's wait for the code freeze to be lifted and cherry-pick it back to 1.29 through 1.31.
The freeze is over :)
In the release notes:
Can you be more specific? What is not functioning as expected? Does the preemption not occur at all, or does the status get wiped out in an unpredictable way?
Preemption doesn't occur at all, as the faulty patch operation aborts the whole scheduling cycle with an Error. I've reworded the release notes.
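To make the failure mode concrete, here's a minimal Go sketch of the control flow (hypothetical names such as prepareVictim and patchVictimStatus; this is not the actual scheduler source): the status patch happens before the deletion, so when the patch fails the error is returned immediately, the scheduling cycle ends with Error, and no victim is ever deleted.

```go
// Minimal sketch, not the actual Kubernetes code: the helpers are injected
// here only to show the ordering of patch vs. delete.
package preemptionsketch

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// prepareVictim mirrors the shape of the scheduler's "prepare candidate" step:
// the victim's status is patched (to record a DisruptionTarget condition)
// before the victim is deleted.
func prepareVictim(
	ctx context.Context,
	victim *v1.Pod,
	patchVictimStatus func(context.Context, *v1.Pod) error,
	deletePod func(context.Context, *v1.Pod) error,
) error {
	if err := patchVictimStatus(ctx, victim); err != nil {
		// A faulty patch fails here, so the deletion below is never reached
		// and the whole preemption attempt is aborted with an error.
		return fmt.Errorf("patching status of victim %s/%s: %w", victim.Namespace, victim.Name, err)
	}
	return deletePod(ctx, victim)
}
```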
That sounds like a major problem; can you prepare cherry-picks?
We also need a fix for 1.28 (https://github.com/kubernetes/kubernetes/blob/release-1.28/pkg/scheduler/framework/preemption/preemption.go#L365), which hasn't reached EOL and was broken by #121379.
Yup, creating now. |
Oops, let me create one for 1.28. |
…26644-upstream-release-1.31 Automated cherry pick of #126644: fix a scheduler preemption issue that victim is not patched
…26644-upstream-release-1.30 Automated cherry pick of #126644: fix a scheduler preemption issue that victim is not patched
…26644-upstream-release-1.29 Automated cherry pick of #126644: fix a scheduler preemption issue that victim is not patched
…26644-upstream-release-1.28 Automated cherry pick of #126644: fix a scheduler preemption issue that victim is not patched
What type of PR is this?
/kind bug
/kind regression
What this PR does / why we need it:
The Pod's status is incorrectly patched, which blocks the subsequent deletion, and hence preemption doesn't work. It's a typo regression introduced in v1.29 by #121103.
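For illustration, here's a hedged sketch of the kind of server-side-apply status patch involved (not the actual scheduler code; the helper name, field manager, and message text are assumptions). The point of the fix is that both the apply configuration and the ApplyStatus call must target the victim, not the preemptor:

```go
// Hedged sketch of patching a DisruptionTarget condition onto the victim via
// server-side apply; names and message text are illustrative assumptions.
package preemptionsketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	corev1apply "k8s.io/client-go/applyconfigurations/core/v1"
	"k8s.io/client-go/kubernetes"
)

func patchVictimDisruptionTarget(ctx context.Context, cs kubernetes.Interface, preemptor, victim *v1.Pod) error {
	// Build the apply configuration for the victim's status.
	podApply := corev1apply.Pod(victim.Name, victim.Namespace).
		WithStatus(corev1apply.PodStatus().
			WithConditions(corev1apply.PodCondition().
				WithType(v1.DisruptionTarget).
				WithStatus(v1.ConditionTrue).
				WithReason(v1.PodReasonPreemptionByScheduler).
				WithMessage(preemptor.Spec.SchedulerName + ": preempting to accommodate a higher priority pod").
				WithLastTransitionTime(metav1.Now())))

	// The reported typo meant the victim was not the pod actually targeted by
	// the patch, so the call failed and preemption aborted; the fix is to use
	// the victim's own namespace/name consistently, as below.
	_, err := cs.CoreV1().Pods(victim.Namespace).ApplyStatus(ctx, podApply,
		metav1.ApplyOptions{FieldManager: "kube-scheduler", Force: true})
	return err
}
```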
Which issue(s) this PR fixes:
Reported by #126643
Special notes for your reviewer:
I didn't include a test because 1) it's an obvious typo, and 2) unit and integration tests don't have an enforced API validation check, so the victim would be deleted immediately after being patched, which makes it hard to verify the patched status.
Does this PR introduce a user-facing change?