Stop 'kubectl drain' deleting pods with local storage. #26667
Conversation
k8s-bot test this please: issue #20576
Well, at least this one's more time efficient.
if !o.IgnoreDaemonsets {
	return false, nil, &fatal{kDaemonsetFatal}
}
return false, &warning{kDaemonsetWarning}, nil
should this be true?
No. No matter what, we never delete the daemonset pods because the DS controller doesn't respect unschedulable. The flag only toggles whether we consider their presence an error or not.
I will add a comment that clarifies this.
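For reference, a sketch of what that clarifying comment could look like in place; the comment wording here is mine, not necessarily what landed in the PR:

// Never delete DaemonSet pods: the DaemonSet controller does not respect the
// node's unschedulable flag and would immediately recreate anything we delete.
// --ignore-daemonsets only controls whether their presence is treated as a
// fatal error or merely a warning.
if !o.IgnoreDaemonsets {
	return false, nil, &fatal{kDaemonsetFatal}
}
return false, &warning{kDaemonsetWarning}, nil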
Sorry for the delay in reviewing this. I like the simplification. I just had a few comments.
// getPodsForDeletion returns all the pods we're going to delete. If there are
// any unmanaged pods and the user didn't pass --force, we return that list in
// an error.
Note to self: update this comment.
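Since the PR description mentions a refactoring into per-pod filters that select/reject pods and produce error/warning messages, here is a small self-contained sketch of that pattern. The types and names below (podFilter, filterResult, localStorageFilter) are invented for illustration and are not the actual kubectl code:

package main

import (
	"fmt"
	"strings"
)

// filterResult captures the idea that a filter can pass a pod for deletion,
// attach a warning, or reject the whole drain with a fatal error.
type filterResult struct {
	deletable bool
	warning   string
	fatal     string
}

type pod struct {
	name        string
	hasEmptyDir bool
}

type podFilter func(pod) filterResult

// localStorageFilter blocks the drain on emptyDir pods unless the
// --delete-local-data behavior is enabled.
func localStorageFilter(deleteLocalData bool) podFilter {
	return func(p pod) filterResult {
		if !p.hasEmptyDir {
			return filterResult{deletable: true}
		}
		if !deleteLocalData {
			return filterResult{fatal: p.name + " has local storage (emptyDir)"}
		}
		return filterResult{deletable: true, warning: p.name + ": deleting pod with local storage"}
	}
}

// getPodsForDeletion runs every pod through the filters, collecting warnings
// and aborting with an error if any filter reports a fatal condition.
func getPodsForDeletion(pods []pod, filters []podFilter) ([]pod, []string, error) {
	var deletable []pod
	var warnings, fatals []string
	for _, p := range pods {
		ok := true
		for _, f := range filters {
			r := f(p)
			if r.fatal != "" {
				fatals = append(fatals, r.fatal)
				ok = false
				break
			}
			if r.warning != "" {
				warnings = append(warnings, r.warning)
			}
			ok = ok && r.deletable
		}
		if ok {
			deletable = append(deletable, p)
		}
	}
	if len(fatals) > 0 {
		return nil, warnings, fmt.Errorf("cannot drain node: %s", strings.Join(fatals, "; "))
	}
	return deletable, warnings, nil
}

func main() {
	pods := []pod{{name: "web-1"}, {name: "cache-1", hasEmptyDir: true}}
	del, warns, err := getPodsForDeletion(pods, []podFilter{localStorageFilter(false)})
	fmt.Println(del, warns, err)
}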
@@ -33,6 +33,10 @@ will make the node schedulable again.
.SH OPTIONS
.PP
\fB\-\-delete\-local\-data\fP=false
  Continue even if there are pods with local data (EmptyDir).
I don't think this should just say "local data" since HostDir is also local data. Maybe say "Continue even if there are pods using emptyDir (local data that will be deleted when the node is drained)."
Note: I think the flag name is fine, I'm just referring to the way you described it (here and elsewhere in the PR).
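With that suggestion applied, the man page entry would read roughly as follows (same groff formatting as the hunk above):

.PP
\fB\-\-delete\-local\-data\fP=false
  Continue even if there are pods using emptyDir (local data that will be deleted when the node is drained).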
@davidopp fixed docs PTAL
LGTM. Please squash the commits and then apply LGTM label.
@mml
@spxtr any idea what's going on with the bot? It's talking nonsense about a re-test, and then it took the lgtm label off without saying why.
Unless forced with --delete-local-data. Also a refactoring of the kubectl drain logic that selects/rejects pods and produces error/warning messages.
No idea. Note that if you ask for a retest without adding the issue number, it will delete your comment.
GCE e2e build/test passed for commit d09af4a. |
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit d09af4a. |
Automatic merge from submit-queue
func hasLocalStorage(pod api.Pod) bool {
	for _, volume := range pod.Spec.Volumes {
		if volume.EmptyDir != nil {
What about hostPath?
Matt and I discussed this and eventually decided to leave it out. We don't know what people are going to do after the drain. They might be doing something that doesn't delete all the data (for example, upgrade OS and reboot). Arguably we should have yet another flag to control whether to consider hostPath, but for simplicity we decided to just block on the case we know for sure will be deleted, namely emptyDir.
For cluster scale-down, the node goes away "forever", so it makes sense to block on hostPath. But for kubectl drain I think it's not necessary.
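For completeness, the check sketched in the diff above, filled out so that it only looks at emptyDir volumes as discussed; the body beyond what the diff shows is inferred, not copied from the PR:

func hasLocalStorage(pod api.Pod) bool {
	for _, volume := range pod.Spec.Volumes {
		// Only emptyDir counts as local storage here; hostPath is deliberately
		// left out because a drain does not necessarily destroy its data.
		if volume.EmptyDir != nil {
			return true
		}
	}
	return false
}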
Kubectl drain will not continue if there are pods with local storage unless
forced with --delete-local-data.
Fixes #23972
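As an illustration of the resulting behavior, a drain against a node running an emptyDir pod might look like this (node and pod names are hypothetical, and the message text is paraphrased rather than exact kubectl output):

$ kubectl drain node-1
error: pods with local storage (use --delete-local-data to override): cache-1
$ kubectl drain node-1 --delete-local-data
node "node-1" drained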