Implement the kubeadm upgrade command #48899
Conversation
Needs more input on the UX side before we go too far.
cmd/kubeadm/app/cmd/upgrade/apply.go (Outdated)

func NewCmdApply() *cobra.Command {
	cmd := &cobra.Command{
		Use: "apply",
So I'm not sold on `apply` vs.:
> kubectl upgrade
or
> kubectl upgrade --dry-run

Per design doc discussion there will be an apply and plan and no dry-run. /cc @jbeda
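For readers following the UX discussion, here is a minimal, hypothetical sketch (not the PR's actual code; command wiring, flag, and help strings are illustrative) of how a parent `upgrade` command could attach the `plan` and `apply` subcommands with cobra, matching the design-doc outcome of an `apply` and a `plan` with no `--dry-run`:

```go
package upgrade

import (
	"io"

	"github.com/spf13/cobra"
)

// NewCmdUpgrade returns the parent `upgrade` command with its subcommands attached.
func NewCmdUpgrade(out io.Writer) *cobra.Command {
	cmd := &cobra.Command{
		Use:   "upgrade",
		Short: "Upgrade your cluster smoothly to a newer version with this command",
	}
	// `plan` only inspects the cluster and prints what an upgrade would do;
	// `apply` performs the upgrade to the requested version.
	cmd.AddCommand(newCmdPlan(out), newCmdApply(out))
	return cmd
}

func newCmdPlan(out io.Writer) *cobra.Command {
	return &cobra.Command{
		Use:   "plan",
		Short: "Check which versions are available to upgrade to",
	}
}

func newCmdApply(out io.Writer) *cobra.Command {
	cmd := &cobra.Command{
		Use:   "apply",
		Short: "Upgrade your Kubernetes cluster to the specified version",
	}
	cmd.Flags().String("version", "", "The version to upgrade to")
	return cmd
}
```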
Force-pushed from 467b9dd to 3a8ee00
kubeadm upgrade plan
kubeadm upgrade command
ping @timothysc @andrewrynhard @kubernetes/sig-cluster-lifecycle-feature-requests PTAL if you want/like. This is not the end state; it's a work in progress. When I've written all the necessary unit tests, added all the necessary functionality for an MVP, and cleaned up, it will be ready for the real review. So feel free to compile it yourself and test :) WDYT?
Force-pushed from 381c56f to 68395c5
This is now ready for review 🎉! Both `kubeadm upgrade plan` and `kubeadm upgrade apply` work, as shown below. The UX should and will still be greatly improved. Showing a snapshot of the current state here:

UX:

$ # --apiserver-bind-port is there to show that configuration is preserved with the upgrade. If it weren't, things would break in between when the API server would start listening on the default 6443, but this works.
$ # I've put kubeadm built from this PR in ./kubeadm
$ ./kubeadm init --kubernetes-version v1.7.1 --apiserver-bind-port 443
$ kubectl apply -f https://git.io/weave-kube-1.6
$ ./kubeadm alpha upgrade plan
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
[upgrade/health] Checking Node health: All Nodes are healthy
[upgrade/health] Checking if control plane is Static Pod-hosted or Self-Hosted: Static Pod-hosted.
[upgrade/health] NOTE: kubeadm will upgrade your Static Pod-hosted control plane to a Self-Hosted one when upgrading if --feature-gates=SelfHosting=true is set (which is the default)
[upgrade/health] If you strictly want to continue using a Static Pod-hosted control plane, set --feature-gates=SelfHosting=false when running 'kubeadm upgrade apply'
[upgrade/health] Checking Static Pod manifests exist on disk: All required Static Pod manifests exist on disk
[upgrade] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster (you can get this with 'kubectl -n kube-system get cm kubeadm-config -oyaml')
[upgrade] Fetching available versions to upgrade to:
[upgrade/versions] Cluster version: v1.7.1
[upgrade/versions] kubeadm version: v1.8.0-alpha.2.789+11f48dc291fe93
[upgrade/versions] Latest stable version: v1.7.3
[upgrade/versions] Latest version in the v1.7 series: v1.7.3
Components that must be upgraded manually after you've upgraded the control plane with `kubeadm upgrade apply`:
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.7.0 v1.7.3
Upgrade to the latest version in the v1.7 series:
COMPONENT CURRENT AVAILABLE
API Server v1.7.1 v1.7.3
Controller Manager v1.7.1 v1.7.3
Scheduler v1.7.1 v1.7.3
Kube Proxy v1.7.1 v1.7.3
Kube DNS 1.14.4 1.14.4
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply --version v1.7.3
_____________________________________________________________________
$ ./kubeadm alpha upgrade apply --version v1.7.3
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
[upgrade/health] Checking Node health: All Nodes are healthy
[upgrade/health] Checking if control plane is Static Pod-hosted or Self-Hosted: Static Pod-hosted.
[upgrade/health] NOTE: kubeadm will upgrade your Static Pod-hosted control plane to a Self-Hosted one when upgrading if --feature-gates=SelfHosting=true is set (which is the default)
[upgrade/health] If you strictly want to continue using a Static Pod-hosted control plane, set --feature-gates=SelfHosting=false when running 'kubeadm upgrade apply'
[upgrade/health] Checking Static Pod manifests exist on disk: All required Static Pod manifests exist on disk
[upgrade] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster (you can get this with 'kubectl -n kube-system get cm kubeadm-config -oyaml')
[upgrade/version] You have chosen to upgrade to version "v1.7.3"
[upgrade/versions] Cluster version: v1.7.1
[upgrade/versions] kubeadm version: v1.8.0-alpha.2.789+11f48dc291fe93
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: Y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.7.3"...
[upgrade/staticpods] Wrote upgraded Static Pod manifests to "/tmp/kubeadm-upgrade830923296"
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backuped old manifest to "/tmp/kubeadm-upgrade830923296/old-manifests/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backuped old manifest to "/tmp/kubeadm-upgrade830923296/old-manifests/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backuped old manifest to "/tmp/kubeadm-upgrade830923296/old-manifests/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[apiclient] Found 0 Pods for label selector component=kube-scheduler
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[self-hosted] Created TLS secret "ca" from ca.crt and ca.key
[self-hosted] Created TLS secret "apiserver" from apiserver.crt and apiserver.key
[self-hosted] Created TLS secret "apiserver-kubelet-client" from apiserver-kubelet-client.crt and apiserver-kubelet-client.key
[self-hosted] Created TLS secret "sa" from sa.pub and sa.key
[self-hosted] Created TLS secret "front-proxy-ca" from front-proxy-ca.crt and front-proxy-ca.key
[self-hosted] Created TLS secret "front-proxy-client" from front-proxy-client.crt and front-proxy-client.key
[self-hosted] Created secret "scheduler.conf"
[self-hosted] Created secret "controller-manager.conf"
[apiclient] Found 0 Pods for label selector k8s-app=self-hosted-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=self-hosted-kube-apiserver
[self-hosted] self-hosted kube-apiserver ready after 15.295571 seconds
[apiclient] Found 0 Pods for label selector k8s-app=self-hosted-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=self-hosted-kube-controller-manager
[self-hosted] self-hosted kube-controller-manager ready after 15.804586 seconds
[apiclient] Found 0 Pods for label selector k8s-app=self-hosted-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=self-hosted-kube-scheduler
[self-hosted] self-hosted kube-scheduler ready after 18.958074 seconds
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

PTAL and heads up @kubernetes/sig-cluster-lifecycle-pr-reviews @fabriziopandini @andrewrynhard @aaronlevy @ericchiang @mattmoyer @jamiehannaford @pipejakob @justinsb @jnardiello @craigtracey @kad @nckturner @idvoretskyi @calebamiles @jdumars
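For reviewers skimming the log above, here is a rough sketch of the per-component flow it reflects: back up the running manifest, move the upgraded manifest into the static Pod directory, and wait for the kubelet to recreate the Pod. The names (`Waiter`, `upgradeComponent`) are illustrative assumptions, not the PR's exact implementation:

```go
package upgrade

import (
	"fmt"
	"os"
	"path/filepath"
)

// Waiter abstracts "wait until the kubelet has restarted the component and
// its Pod is visible again through the API server".
type Waiter interface {
	WaitForPodsWithLabel(labelSelector string) error
}

// upgradeComponent swaps one control plane component's static Pod manifest.
func upgradeComponent(component, upgradedDir, manifestDir, backupDir string, waiter Waiter) error {
	upgraded := filepath.Join(upgradedDir, component+".yaml")
	current := filepath.Join(manifestDir, component+".yaml")
	backup := filepath.Join(backupDir, component+".yaml")

	// Back up the currently running manifest so a failed upgrade can be rolled back.
	if err := os.Rename(current, backup); err != nil {
		return fmt.Errorf("couldn't back up manifest for %s: %v", component, err)
	}
	// Move the upgraded manifest into place; the kubelet notices the change
	// and recreates the Pod from the new manifest.
	if err := os.Rename(upgraded, current); err != nil {
		return fmt.Errorf("couldn't install upgraded manifest for %s: %v", component, err)
	}
	// Block until the new Pod shows up, e.g. via the component=kube-apiserver label.
	return waiter.WaitForPodsWithLabel("component=" + component)
}
```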
Automatic merge from submit-queue (batch tested with PRs 47416, 47408, 49697, 49860, 50162)

kubeadm: Replace *clientset.Clientset with clientset.Interface

**What this PR does / why we need it**: Needed for #48899. We should always use `clientset.Interface` instead of `*clientset.Clientset`, for better testability and all the other benefits of using an interface.

**Special notes for your reviewer**: Should be straightforward to merge.

**Release note**:
```release-note
NONE
```

@timothysc @dmmcquay @pipejakob
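As an aside, a small hedged illustration of the testability argument: any function that accepts `clientset.Interface` can be exercised against the fake clientset with no API server involved. The helper `countNodes` is made up for the example, and the client-go calls use the pre-context signatures current at the time:

```go
package upgrade

import (
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// countNodes accepts the interface, so real and fake clients both satisfy it.
func countNodes(client clientset.Interface) (int, error) {
	nodes, err := client.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	return len(nodes.Items), nil
}

func TestCountNodes(t *testing.T) {
	client := fake.NewSimpleClientset() // in-memory, no API server needed
	n, err := countNodes(client)
	if err != nil || n != 0 {
		t.Fatalf("expected 0 nodes and no error, got %d, %v", n, err)
	}
}
```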
Automatic merge from submit-queue (batch tested with PRs 50254, 50174, 50179)

kubeadm: Add back labels for the Static Pod control plane

**What this PR does / why we need it**: This Labels section has been removed for a short period during the v1.8 dev cycle, but I found a good use-case for it; namely filtering Mirror Pods by the `component=kube-*` label when waiting for the self-hosted control plane to come up after an upgrade. It's not _really_ necessary, but nice to have. Also noticed the lack of coverage for this func, so added a small unit test.

Dependency for: #48899

**Release note**:
```release-note
NONE
```

@kubernetes/sig-cluster-lifecycle-pr-reviews @dmmcquay @timothysc @mattmoyer
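To illustrate the use-case described above (the helper name is assumed, not code from the PR): with the `component=kube-*` labels back on the static Pod manifests, waiting for the control plane can be done by listing kube-system Pods with a label selector:

```go
package upgrade

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// mirrorPodCount returns how many kube-system Pods carry the given component
// label, e.g. component=kube-apiserver for the API server's Mirror Pod.
func mirrorPodCount(client clientset.Interface, component string) (int, error) {
	pods, err := client.CoreV1().Pods("kube-system").List(metav1.ListOptions{
		LabelSelector: "component=" + component,
	})
	if err != nil {
		return 0, err
	}
	return len(pods.Items), nil
}
```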
Force-pushed from 9a08554 to f9e9135
Removing the do-not-merge label as this is now ready for merge!! 🎉 PTAL @kubernetes/sig-cluster-lifecycle-pr-reviews
/lgtm due to time.
We need to get some e2e tests in place and talk with the release team on this.
// PerformStaticPodUpgrade performs the upgrade of the control plane components for a static pod hosted cluster
func PerformStaticPodUpgrade(client clientset.Interface, waiter apiclient.Waiter, internalcfg *kubeadmapi.MasterConfiguration) error {
	pathManager, err := upgrade.NewKubeStaticPodPathManagerUsingTempDirs(constants.GetStaticPodDirectory())
NewKubeStaticPods ... PathManagerUsingTempDirs seems unnecessary.
// PerformStaticPodUpgrade performs the upgrade of the control plane components for a static pod hosted cluster
func PerformStaticPodUpgrade(client clientset.Interface, waiter apiclient.Waiter, internalcfg *kubeadmapi.MasterConfiguration) error {
	pathManager, err := upgrade.NewKubeStaticPodPathManagerUsingTempDirs(constants.GetStaticPodDirectory())
For now just leave it.
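For context on the naming discussion, here is an assumed (not verbatim) sketch of the shape such a path manager provides: the real, temporary, and backup manifest locations used during the static Pod upgrade, which is what the long constructor name refers to:

```go
package upgrade

import "path/filepath"

// StaticPodPathManager knows where a component's manifest lives at each
// stage of the upgrade.
type StaticPodPathManager interface {
	RealManifestPath(component string) string
	TempManifestPath(component string) string
	BackupManifestPath(component string) string
}

// kubeStaticPodPathManager is a temp-dir backed implementation of the kind
// the constructor in the diff above would return.
type kubeStaticPodPathManager struct {
	realManifestDir   string
	tempManifestDir   string
	backupManifestDir string
}

func (m *kubeStaticPodPathManager) RealManifestPath(component string) string {
	return filepath.Join(m.realManifestDir, component+".yaml")
}

func (m *kubeStaticPodPathManager) TempManifestPath(component string) string {
	return filepath.Join(m.tempManifestDir, component+".yaml")
}

func (m *kubeStaticPodPathManager) BackupManifestPath(component string) string {
	return filepath.Join(m.backupManifestDir, component+".yaml")
}
```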
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: luxas, timothysc. Assign the PR to them by writing `/assign` in a comment.

No associated issue. Update the pull-request body to add a reference to an issue, or get approval with `/approve no-issue`. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS files.
You can indicate your approval by writing `/approve` in a comment.
Force-pushed from f5a359a to 91be145
Fixed some golint failures.
Force-pushed from 91be145 to e6e8355
Force-pushed from e6e8355 to c575626
Rebased this now. /retest
// v1.9.0-alpha.0 -> v1.9.0-alpha.1 -> v1.9.0-alpha.2

// Get and output the current latest unstable version
latestVersionStr, latestVersion, err := versionGetterImpl.VersionFromCILabel("latest", "experimental version")
Nit: I wouldn't really call "latest" "experimental". In the case of ci/* builds I agree with calling them experimental, but alphas/betas already have some quality tags attached (even if not very high).
They are very experimental -- they are just cut from master without any special coordination that feature X, Y, Z must be there for it to ship.
minorUnstable := latestVersion.Components()[1]
// Get and output the current latest unstable version
previousBranch := fmt.Sprintf("latest-1.%d", minorUnstable-1)
We wouldn't hit it now, but for the future (2.x versions) we probably want to have a boundary check here.
Worth adding a TODO in the comments?
This is in the queue already, so I won't update it anymore.
I think this is probably the smallest problem we'll have regarding 2.x ;)
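For future reference, a hedged sketch of the kind of boundary check suggested above; the function name and signature are illustrative, not from the PR, which derives the label inline:

```go
package upgrade

import "fmt"

// previousBranchLabel returns the CI label for the previous minor release
// branch, or "" when there is no earlier minor in the same series to look at.
func previousBranchLabel(major, minorUnstable uint64) string {
	// TODO: handle crossing a major version boundary (e.g. 2.0), where the
	// previous branch lives in the old major series instead.
	if minorUnstable == 0 {
		return ""
	}
	return fmt.Sprintf("latest-%d.%d", major, minorUnstable-1)
}
```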
/test all [submit-queue is verifying that this PR is safe to merge]

Automatic merge from submit-queue
@luxas: The following tests failed, say `/retest` to rerun them all.

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@luxas Did we implement a solution for the discussion outlined in #48899 (comment)? If not, can we track this as a separate issue for 1.9?

@jamiehannaford please open a new issue for that
What this PR does / why we need it:
Implements the kubeadm upgrades proposal: https://docs.google.com/document/d/1PRrC2tvB-p7sotIA5rnHy5WAOGdJJOIXPPv23hUFGrY/edit#

Which issue this PR fixes: fixes kubernetes/kubeadm#14

Special notes for your reviewer:
I'm going to split out changes not directly related to the upgrade procedure into separate PRs as dependencies as we go.
Now ready for review. Please look at cmd/kubeadm/app/phases/upgrade first and give feedback on that. The rest kind of follows.

Release note:

cc @fabriziopandini @kubernetes/sig-cluster-lifecycle-pr-reviews @craigtracey @mattmoyer