
kubectl drain should keep going when one node fails #97907

Closed
@pandaamanda

Description

What would you like to be added:
When kubectl drain is run against multiple nodes and one node fails, the command aborts. Unless there is a specific reason to stop, it would be better to continue processing the remaining nodes.

[root@node24 /]# kubectl drain --selector="kubernetes.io/arch=amd64" --ignore-daemonsets
node/10.229.87.57 cordoned
node/10.229.87.58 cordoned
node/10.229.87.60 cordoned
error: unable to drain node "10.229.87.57", aborting command...

There are pending nodes to be drained:
 10.229.87.57
 10.229.87.58
 10.229.87.60
error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/test
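As a workaround until drain itself continues past failures, the nodes can be drained one at a time so that an error on one node does not abort the others. This is a minimal sketch, assuming the same selector and flags as above; the error message is illustrative:

# Drain each matching node individually; on failure, log the node
# and move on instead of aborting the whole run.
for node in $(kubectl get nodes --selector="kubernetes.io/arch=amd64" -o name); do
  kubectl drain "$node" --ignore-daemonsets || echo "failed to drain $node, continuing" >&2
done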

Why is this needed:
This would follow the pattern of other kubectl subcommands, such as kubectl delete, which continue processing the remaining resources after an error.

Labels

kind/feature: Categorizes issue or PR as related to a new feature.
sig/cli: Categorizes an issue or PR as relevant to SIG CLI.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.
