kubeadm: Don't drain and remove the current node on kubeadm reset #42713
Conversation
Sorry, my intention was not to unassign @pires; that was a race condition in the GitHub page.
@k8s-bot unit test this
/lgtm
}, nil
}

// Run reverts any changes made to this host by "kubeadm init" or "kubeadm join".
func (r *Reset) Run(out io.Writer) error {

	// Try to drain and remove the node from the cluster
	err := drainAndRemoveNode(r.removeNode)
Would you want to mark it unschedulable or taint it instead? It seems like it should do something to signal that no more pods should be sent to it.
I could do it, but I think the node will be marked NotReady by the controller manager by default, right?
If we have to set the NodeNotReady taint I'll do it, but I'd like to change as little as possible in this PR.
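For context, cordoning or tainting the node from within kubeadm would only be a couple of API calls. This is a minimal sketch against a recent client-go (signatures differ from the 1.6-era code in this PR); the function name and taint key are illustrative only, not something kubeadm ships:

```go
package nodeutil

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cordonAndTaint marks a node unschedulable and adds a NoSchedule taint so
// the scheduler stops placing new pods on it. It mirrors the reviewer's
// suggestion above; it is not what this PR implements.
func cordonAndTaint(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	node.Spec.Unschedulable = true // same effect as `kubectl cordon`
	node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
		Key:    "example.com/kubeadm-reset", // hypothetical taint key, for illustration only
		Effect: corev1.TaintEffectNoSchedule,
	})
	_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}
```

Either way, as noted above, the controller manager will mark the node NotReady once the kubelet stops posting status.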
Haven't reviewed the code but... Ideally we would do the following:
Not sure we can bring this together in time for 1.6.
@jbeda I think we should do that long-term, but it won't fit into this release.
If this was the previous behavior, I'm OK with reverting to it. Admins should drain their node first, but if they don't, the node controller will take care of the node when it stops updating its status.
Not excited about this as it is a regression in functionality, but I guess it is the best we can do on short notice. /lgtm
Feel exactly the same, but this is better than erroring in 100% of the cases. The root cause (as discussed) is that with RBAC a node can't drain itself.
But thanks for the LGTM!
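To make that root cause concrete: a drain is a series of API calls, and the discussion above attributes the failure to the default RBAC rules not allowing a node's own credential to evict pods from itself. Roughly, the calls involved look like the sketch below, assuming a recent client-go (signatures differ from the 1.6-era code here); waiting for evictions to complete and PodDisruptionBudget retry handling are omitted:

```go
package nodeutil

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// drainNode lists the pods bound to a node and asks the eviction API to
// remove them. Run with the node's own credential under restrictive RBAC
// bindings, these calls are forbidden, which is why `kubeadm reset` can no
// longer drain the node it runs on.
func drainNode(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		eviction := &policyv1.Eviction{
			ObjectMeta: metav1.ObjectMeta{Name: pod.Name, Namespace: pod.Namespace},
		}
		if err := client.PolicyV1().Evictions(pod.Namespace).Evict(ctx, eviction); err != nil {
			return err
		}
	}
	return nil
}
```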
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jbeda, luxas
Needs approval from an approver in each of these OWNERS Files:
You can indicate your approval by writing /approve in a comment.
@luxas: The following test(s) failed:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Automatic merge from submit-queue (batch tested with PRs 43018, 42713)
What this PR does / why we need it:
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #
Special notes for your reviewer:
In v1.5, kubeadm reset would drain your node and remove it from your cluster if you specified, but now in v1.6 we can't do that due to the RBAC rules we have set up. After conversations with @liggitt, I also agree this functionality was somewhat misplaced (though still very convenient to use), so we're removing it for v1.6.
It's the system administrator's duty to drain and remove nodes from the cluster, not the nodes' responsibility.
The current behavior is therefore a bug that needs to be fixed in v1.6.
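Concretely, the workflow this leaves administrators with is: drain the node and delete its Node object using admin credentials (typically kubectl drain <node> --ignore-daemonsets followed by kubectl delete node <node>), then run kubeadm reset on the machine itself. The removal step amounts to a single API call; a minimal sketch against a recent client-go, with an illustrative function name that is not part of kubeadm:

```go
package nodeutil

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// removeNodeFromCluster deletes the Node API object, which is the step
// kubeadm reset used to perform and that a cluster administrator now runs
// themselves (for example via `kubectl delete node <name>`).
func removeNodeFromCluster(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	return client.CoreV1().Nodes().Delete(ctx, nodeName, metav1.DeleteOptions{})
}
```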
Release note:
@liggitt @deads2k @jbeda @dmmcquay @pires @errordeveloper