Manually Validate Kubeadm 1.7 -> 1.8 Upgrade Path #466
/assign jessicaochen
Testing kubernetes commit a3ab97b7f395e1abd2d954cd9ada0386629d0411

Manual Upgrade: PASS (after working around Issues 1-3)

[Issues]
[2] Kubeadm - #470
[3] Kubeadm - kubernetes/kubernetes#53043
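For context, a minimal sketch of the kind of manual control-plane upgrade this validates, using the `kubeadm upgrade` command introduced in 1.8 (the exact version string here is illustrative, not the one from this run):

```bash
# On the master, with a v1.8 kubeadm binary already in place:

# Inspect the available upgrade targets and run preflight checks.
kubeadm upgrade plan

# Apply the upgrade to an explicitly pinned version, so kubeadm does
# not need to resolve a "stable-1.8" label (see Issue 2 below).
kubeadm upgrade apply v1.8.0
```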
@pipejakob is Issue 2 expected? I think you had some ideas on a similar one recently.
Oh, interesting. On (2), it is expected that stable-1.8 won't resolve at this point (since we haven't released any stable versions for 1.8 yet), but it is unexpected to me that this command would actually need to resolve the version. This might just be some lazy code that always tries to resolve the version label, even when the result is never used.
Huh, I am surprised that "stable" was not chosen instead; that would always resolve to the latest available stable release. @luxas, do you think we could use stable as the default in the future so we do not break tests when preparing for a major release? (I am referencing change kubernetes/kubernetes@81840b3.)
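For reference, these labels are resolved by fetching a version marker file from the release bucket; a quick way to see what each label currently points at (dl.k8s.io is the standard endpoint, though exactly which files kubeadm consults is an assumption here):

```bash
# "stable" resolves to the newest stable release across all minors.
curl -sSL https://dl.k8s.io/release/stable.txt

# "stable-1.8" resolves to the newest 1.8 patch release; until 1.8.0
# ships, this file does not exist, which is why the label fails.
curl -sSL https://dl.k8s.io/release/stable-1.8.txt
```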
@jessicaochen The idea is that, if you happen to try using an older version of kubeadm, it likely has no idea how to install newer versions of the control plane components (which have different flags). If you run an old 1.6.3 version of kubeadm and it defaults to just getting the latest stable release, it could end up trying to deploy a control plane it does not know how to configure.
This does not quite make sense either. If you have an old 1.6.3 version of kubeadm, is it expected to be able to install 1.6.5 if 1.6.5 becomes stable-1.6?
Yeah, if it's an easy fix, that sounds like a good approach to me. It definitely feels wrong to resolve the flag and not even use it (unless we actually use it for version checking during this phase, but I don't think we do).
@jessicaochen Both issues 2 & 3 are expected. 3: We deliberately chose not to be able to resolve CI labels for…
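(For context: CI labels like `ci/latest` resolve against the CI build area rather than the release bucket; a rough illustration, with the exact endpoint being an assumption about the layout at the time:)

```bash
# CI builds publish their version marker separately from releases:
curl -sSL https://dl.k8s.io/ci/latest.txt
```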
Yes. BTW, we have the upgrade tests up and running now as well: https://k8s-testgrid.appspot.com/sig-cluster-lifecycle#kubeadm-gce-upgrade-1.7-1.8 🎉 @jessicaochen we should try to build the debs as well and update the kubelets in the cluster after the control plane is successfully upgraded with `kubeadm upgrade apply`.
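A rough sketch of that follow-up step on each node, assuming the 1.8 debs have been built and published to the apt repo (the package version string is illustrative):

```bash
# On each node, after the control-plane upgrade has succeeded:
apt-get update
apt-get install -y kubelet=1.8.0-00 kubeadm=1.8.0-00

# Restart the kubelet so it picks up the new binary.
systemctl daemon-reload
systemctl restart kubelet
```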
There's now an issue for (2): #470 (thanks, @medinatiger!). |
This upgrade path has now been manually validated, and we have green e2es running. |
Automated upgrade testing is tracked by: #402
Meanwhile, I will be tracking manual upgrade testing here so we get some signal for the 1.8 release.