Reconcile extended resource capacity after kubelet restart. #64784
Conversation
/sig node
/assign @vishh
/cc @ConnorDoyle please let us know asap if you have any concerns on this change.
This seems to be a safe workaround to the problem described in #64632, especially with additional context provided by @vishh here.
Is this true? With this patch, pods that consume extended resources can no longer survive a Kubelet restart because they will fail admission. If so, could we add that to the release note?
requiresUpdate := false
for k := range node.Status.Capacity {
	if v1helper.IsExtendedResourceName(k) {
		node.Status.Capacity[k] = *resource.NewQuantity(int64(0), resource.DecimalSI)
For sanity's sake, should this also set that resource's allocatable value to zero? Otherwise this could temporarily set capacity such that allocatable > capacity. As is, allocatable would be overwritten to zero on the next kubelet sync iteration.
Done.
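For readers following the thread, a minimal sketch of the reconciliation after this change, assuming both capacity and allocatable are zeroed for every extended resource; the package, function name, and signature here are illustrative rather than the PR's exact code:

package kubeletsketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	v1helper "k8s.io/kubernetes/pkg/apis/core/v1/helper"
)

// zeroExtendedResources resets every extended resource in both capacity and
// allocatable to zero, so the scheduler stops placing new pods that request
// them until the corresponding device plugin re-registers with the kubelet.
// It reports whether the node status needs to be updated.
func zeroExtendedResources(node *v1.Node) bool {
	requiresUpdate := false
	for k := range node.Status.Capacity {
		if v1helper.IsExtendedResourceName(k) {
			node.Status.Capacity[k] = *resource.NewQuantity(int64(0), resource.DecimalSI)
			node.Status.Allocatable[k] = *resource.NewQuantity(int64(0), resource.DecimalSI)
			requiresUpdate = true
		}
	}
	return requiresUpdate
}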
@ConnorDoyle Thanks a lot for the comment! I updated the release note to clarify the pod failure behavior associated with the change. PTAL.
/test pull-kubernetes-local-e2e-containerized
{
	name: "no update needed without extended resource",
	existingNode: &v1.Node{
		Status: v1.NodeStatus{
Should this test check Allocatable?
Ah my bad. I did extend the test to check allocatable but forgot to push the change.
Ah you just updated it!
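A table-driven case along the lines below could assert the allocatable handling as well; it reuses the hypothetical zeroExtendedResources helper from the earlier sketch, so names and exact fields differ from the PR's actual test:

package kubeletsketch

import (
	"testing"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/equality"
	"k8s.io/apimachinery/pkg/api/resource"
)

func TestZeroExtendedResources(t *testing.T) {
	extName := v1.ResourceName("example.com/device")
	testCases := []struct {
		name         string
		existingNode *v1.Node
		expectedNode *v1.Node
		needsUpdate  bool
	}{
		{
			name: "extended resource zeroed in capacity and allocatable",
			existingNode: &v1.Node{
				Status: v1.NodeStatus{
					Capacity:    v1.ResourceList{extName: *resource.NewQuantity(1, resource.DecimalSI)},
					Allocatable: v1.ResourceList{extName: *resource.NewQuantity(1, resource.DecimalSI)},
				},
			},
			expectedNode: &v1.Node{
				Status: v1.NodeStatus{
					Capacity:    v1.ResourceList{extName: *resource.NewQuantity(0, resource.DecimalSI)},
					Allocatable: v1.ResourceList{extName: *resource.NewQuantity(0, resource.DecimalSI)},
				},
			},
			needsUpdate: true,
		},
		{
			name:         "no update needed without extended resource",
			existingNode: &v1.Node{Status: v1.NodeStatus{}},
			expectedNode: &v1.Node{Status: v1.NodeStatus{}},
			needsUpdate:  false,
		},
	}

	for _, tc := range testCases {
		if got := zeroExtendedResources(tc.existingNode); got != tc.needsUpdate {
			t.Errorf("%s: needsUpdate = %v, want %v", tc.name, got, tc.needsUpdate)
		}
		// Semantic comparison treats quantities with different internal
		// representations but equal values as equal.
		if !equality.Semantic.DeepEqual(tc.existingNode, tc.expectedNode) {
			t.Errorf("%s: got node %+v, want %+v", tc.name, tc.existingNode, tc.expectedNode)
		}
	}
}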
@jiayingz what will be the behavior after this PR for trivial kubelet restarts? Will the kubelet update capacity/allocatable even though it has a valid checkpoint?
For device plugin resources, even with a valid checkpoint, we already set the resource capacity/allocatable to zero since #60856 to make sure no new pods get assigned to the node until the device plugin re-connects. Existing pods already assigned the resource can continue, though, with the valid checkpoint in place. That is the logic covered by the device manager's Allocate().
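A simplified illustration of that behavior, with hypothetical names and data structures rather than the real devicemanager code: capacity stays at zero until the plugin re-registers, while allocations recorded in the checkpoint keep serving existing containers.

package devicemanagersketch

import "fmt"

// managerSketch holds the two pieces of state this discussion hinges on:
// which plugins are currently connected, and which devices were already
// assigned to containers according to the checkpoint file.
type managerSketch struct {
	registered   map[string]bool     // resource name -> plugin currently connected
	checkpointed map[string][]string // "pod/container/resource" key -> device IDs from checkpoint
	available    map[string][]string // resource name -> free device IDs
}

// capacityAfterRestart models the post-#60856 behavior: a resource whose
// plugin has not re-registered is reported with zero capacity, so the
// scheduler sends no new pods that need it.
func (m *managerSketch) capacityAfterRestart() map[string]int {
	capacity := map[string]int{}
	for res, devs := range m.available {
		if m.registered[res] {
			capacity[res] = len(devs)
		} else {
			capacity[res] = 0
		}
	}
	return capacity
}

// allocate models admission: a container already recorded in the checkpoint
// keeps its devices, while a new request fails until the plugin is back.
func (m *managerSketch) allocate(key, res string, n int) ([]string, error) {
	if devs, ok := m.checkpointed[key]; ok {
		return devs, nil // existing pod continues with its checkpointed devices
	}
	if !m.registered[res] || len(m.available[res]) < n {
		return nil, fmt.Errorf("resource %s is unavailable until its device plugin re-registers", res)
	}
	devs := m.available[res][:n]
	m.available[res] = m.available[res][n:]
	return devs, nil
}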
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jiayingz, vishh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/kind bug
[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process.
Automatic merge from submit-queue (batch tested with PRs 63717, 64646, 64792, 64784, 64800). If you want to cherry-pick this change to another branch, please follow the instructions here.
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #64632
Special notes for your reviewer:
Release note: