Return missing ClusterID error instead of ignoring it #60125
Conversation
/ok-to-test
/assign @justinsb
If there is a case where kubelet is running on AWS and not having instance tags is expected and ok, this change will break that use case.
/retest
reminder: we are now in Code Slush. This PR needs priority/, kind/, and status/approved-for-milestone labels in order to merge into 1.10. Thanks!
/test pull-kubernetes-bazel-test
We need e2e with spot instances...
Is there a use case for this? @justinsb added the comment around the time of adding support for shared tags, so I am not certain what this change would break.
/lgtm I think the problems we're seeing with spot instances outweigh the risks to people with poorly configured clusters - better to force tagging than to limp along.
In practice these tags were in most cases required to exist, but kubelet did not previously enforce this. It now does, so these tests need to change a bit.
This fixes issue #57382.
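For illustration only, a minimal sketch (hypothetical names, not the code in this PR) of the behavior the updated tests now assert: scanning instance tags for a cluster ID returns an error when neither the legacy KubernetesCluster tag nor a kubernetes.io/cluster/&lt;name&gt; tag is present, instead of silently yielding an empty ClusterID.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// findClusterID is a stand-in for the tag scan this PR changes: with no
// legacy KubernetesCluster tag and no kubernetes.io/cluster/<name> tag it
// now returns an error instead of an empty cluster ID.
func findClusterID(tags map[string]string) (string, error) {
	if id, ok := tags["KubernetesCluster"]; ok {
		return id, nil
	}
	for k := range tags {
		if strings.HasPrefix(k, "kubernetes.io/cluster/") {
			return strings.TrimPrefix(k, "kubernetes.io/cluster/"), nil
		}
	}
	return "", errors.New("AWS cloud failed to find ClusterID in instance tags")
}

func main() {
	// An untagged instance now fails fast rather than limping along.
	if _, err := findClusterID(map[string]string{"Name": "node-1"}); err != nil {
		fmt.Println("startup blocked:", err)
	}
}
```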
/test pull-kubernetes-bazel-test |
/lgtm |
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: justinsb, vainu-arto. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Automatic merge from submit-queue (batch tested with PRs 60435, 60334, 60458, 59301, 60125). If you want to cherry-pick this change to another branch, please follow the instructions here.
Would we be able to cherry-pick this bug fix into 1.8/1.9? We had 2 clusters that were severely affected by this issue (#57382) when we upgraded to 1.8 (e.g. any delay in AWS tagging the spot instance would cause kubelet init to fail somewhat silently).
Is there an upgrade path for Kubernetes clusters running on AWS that do not set the KubernetesCluster or the modern kubernetes.io/cluster/&lt;name&gt; tag?
@rphillips not sure I understand - you have to set one of the tags, I'm afraid. Is that a problem for your use case?
Tags were never set on older versions of bootkube. I have fixed that going forward, but I suspect if other users of kubernetes had never set any tags, then upgrades are going to fail. I didn't see a warning in the release notes regarding this. |
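For anyone in that situation, here is a hedged sketch of adding the modern cluster tag before upgrading, using the AWS SDK for Go (v1); the instance ID and cluster name below are placeholders, and the same tag can be applied from the console or CLI instead.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Placeholder values: substitute your own instance ID and cluster name.
	instanceID := "i-0123456789abcdef0"
	clusterTag := "kubernetes.io/cluster/my-cluster"

	sess := session.Must(session.NewSession())
	svc := ec2.New(sess)

	// Tag the instance so the AWS cloud provider can discover its ClusterID.
	// "owned" is the conventional value for resources owned by a single cluster.
	_, err := svc.CreateTags(&ec2.CreateTagsInput{
		Resources: []*string{aws.String(instanceID)},
		Tags: []*ec2.Tag{
			{Key: aws.String(clusterTag), Value: aws.String("owned")},
		},
	})
	if err != nil {
		log.Fatalf("failed to tag instance: %v", err)
	}
	log.Printf("tagged %s with %s", instanceID, clusterTag)
}
```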
The 1.5->1.6 upgrade deletes your nodes when they reboot or kubelet is restarted. No mention in the CHANGELOG about requiring a tag - prior to 1.6 it wasn't necessary, at least unofficially. I made #50915 for it a while ago. |
This fixes issue #57382. In the cases I'm aware of, kubelet cannot function if it can't detect the cluster it is running in, so the error should be passed up to the caller, preventing initialization in cases where kubelet would fail anyway. This way the error can be detected and kubelet startup can be attempted again later (giving AWS time to apply the tags).
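A minimal sketch of the caller-side pattern this change enables (names and timings are illustrative, not the kubelet's actual startup code): because the missing-ClusterID error is now surfaced instead of swallowed, initialization can fail fast and simply be retried once AWS has applied the tags.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// attempts simulates AWS applying the cluster tags to a spot instance a
// little while after launch: the first two initialization attempts fail.
var attempts int

// initCloudProvider stands in for cloud-provider initialization, which now
// reports the missing ClusterID as an error instead of ignoring it.
func initCloudProvider() error {
	attempts++
	if attempts < 3 {
		return errors.New("AWS cloud failed to find ClusterID")
	}
	return nil
}

func main() {
	// With the error surfaced, the caller can back off and retry instead of
	// continuing with an empty ClusterID and failing later in obscure ways.
	for {
		if err := initCloudProvider(); err != nil {
			fmt.Println("cloud init failed, retrying:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("cloud init succeeded after", attempts, "attempts")
		return
	}
}
```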