Rolling node update #5511
Comments
Oops, @pravisankar instead
cc @rvangent
@mbforbes you are interested in node upgrades.
And me. I need to figure out how this would fit in with #5472, though, because I had intended rollingupdate to fall under there eventually (with client calls).
Issues go stale after 30d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
This isn't going to be done in kubernetes/kubernetes. |
Capturing conversation with @bgrant0607 just now.
New feature for kubectl:
when you run
kubectl rollingnodeupdate --exec myprogram --success_label=k3=v3 --fail_label=k4=v4 --selector k1=v1,k2=v2
then, for each node matching the selector, it would take the node out of service, run myprogram against it, label it with the success label or the fail label depending on the result, and bring it back into service.

This cleanly separates the problem of taking a node in and out of service (which kubectl should know about) from what you might do to a node once it is removed. Things you might do once it is removed include having salt/chef/puppet update the node, rebooting the node, deleting the node, powering it down and waiting for a human to touch it, etc.
The clean removal steps are documented in docs/cluster_management.md under "Maintenance on a node".
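For concreteness, here is a minimal sketch, not from the original proposal, of what such a command might amount to if written as a shell loop over kubectl verbs that exist today (cordon, drain, label, uncordon, which largely postdate this issue). The variable names and the convention of passing the node name to myprogram are assumptions for illustration:

```bash
#!/usr/bin/env bash
# Hypothetical expansion of the proposed `kubectl rollingnodeupdate`:
# take each matching node out of service, run the user-supplied program,
# label the node by outcome, and put the node back.
set -euo pipefail

SELECTOR="k1=v1,k2=v2"     # --selector
EXEC_PROGRAM="./myprogram" # --exec (calling convention assumed: node name as argument)
SUCCESS_LABEL="k3=v3"      # --success_label
FAIL_LABEL="k4=v4"         # --fail_label

for node in $(kubectl get nodes -l "$SELECTOR" -o jsonpath='{.items[*].metadata.name}'); do
  # Clean removal: stop scheduling onto the node and evict its pods
  # (the "Maintenance on a node" steps referenced above).
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets

  # Run the user-supplied program; kubectl does not care whether it
  # updates, reboots, reimages, or deletes the node.
  if "$EXEC_PROGRAM" "$node"; then
    kubectl label node "$node" "$SUCCESS_LABEL" --overwrite
  else
    kubectl label node "$node" "$FAIL_LABEL" --overwrite
  fi

  # Put the node back into service.
  kubectl uncordon "$node"
done
```

The separation described above is what the sketch makes visible: kubectl only handles taking the node out of and back into service and recording the outcome as a label, while everything node-specific (salt/chef/puppet runs, reboots, deletion, power-off) lives in myprogram.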