Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
I drained a node and then deleted it, both using `kubectl`. The static pods started from `--pod-manifest-path` on this node are never removed from the list of running pods.
The `kube-controller-manager` logs show:

```
gc_controller.go:62] PodGC is force deleting Pod: kube-system:kube-proxy-host-192-168-0-11
gc_controller.go:161] Forced deletion of orphaned Pod kube-proxy-host-192-168-0-11 succeeded
...
(repeats every 20s, indefinitely)
```
Notice that the pod `kube-proxy-host-192-168-0-11` from the deleted node is still listed:

```
~$ kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE   IP             NODE
kube-system   kube-addon-manager-host-192-168-0-10        1/1     Running   0          1h    192.168.0.10   host-192-168-0-10
kube-system   kube-apiserver-host-192-168-0-10            1/1     Running   0          1h    192.168.0.10   host-192-168-0-10
kube-system   kube-controller-manager-host-192-168-0-10   1/1     Running   0          1h    192.168.0.10   host-192-168-0-10
kube-system   kube-dns-798685517-qbkkc                    3/3     Running   0          11m   172.17.17.3    host-192-168-0-12
kube-system   kube-proxy-host-192-168-0-10                1/1     Running   0          1h    192.168.0.10   host-192-168-0-10
kube-system   kube-proxy-host-192-168-0-11                1/1     Running   0          18s   192.168.0.11   host-192-168-0-11
kube-system   kube-proxy-host-192-168-0-12                1/1     Running   1          1h    192.168.0.12   host-192-168-0-12
kube-system   kube-scheduler-host-192-168-0-10            1/1     Running   0          1h    192.168.0.10   host-192-168-0-10
```
What you expected to happen:
The static pods local to the node are deleted together with the node.
How to reproduce it (as minimally and precisely as possible):

```
kubectl drain <node>
kubectl delete node <node>
```
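For reference, a sketch of the full reproduction plus a possible workaround, assuming SSH access to the node and the common manifest directory `/etc/kubernetes/manifests` (the actual path is whatever `--pod-manifest-path` was set to on the kubelet; node name and manifest filename below are examples):

```shell
#!/bin/sh
NODE=host-192-168-0-11                   # example node being removed
MANIFEST_DIR=/etc/kubernetes/manifests   # assumed --pod-manifest-path

kubectl drain "$NODE" --ignore-daemonsets

# Remove the static pod manifest on the node *before* deleting the node
# object, so the kubelet stops re-creating the mirror pod that PodGC
# would otherwise force-delete in a loop.
ssh "core@$NODE" "sudo rm $MANIFEST_DIR/kube-proxy.yaml"

kubectl delete node "$NODE"
```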
Environment:
- Kubernetes version: 1.6.6
- Cloud provider or hardware configuration: OpenStack
- OS: Container Linux by CoreOS 1409.2.0
- Kernel: 4.11.6-coreos
- Install tools: custom
- Others:
related: #38187