Description
Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT
Versions
kubeadm version:

```
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:14:41Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
```
rpm -qa:

```
kubelet-1.11.0-0.x86_64
kubectl-1.11.0-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubeadm-1.11.0-0.x86_64
```
docker images:

```
k8s.gcr.io/kube-controller-manager-amd64   v1.11.0   55b70b420785   2 weeks ago    155 MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.0   0e4a34a3b0e6   2 weeks ago    56.8 MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.0   214c48e87f58   2 weeks ago    187 MB
k8s.gcr.io/kube-proxy-amd64                v1.11.0   1d3d7afd77d1   2 weeks ago    97.8 MB
k8s.gcr.io/coredns                         1.1.3     b3b94275d97c   7 weeks ago    45.6 MB
k8s.gcr.io/etcd-amd64                      3.2.18    b8df3b177be2   3 months ago   219 MB
```
What happened?
In a three-node cluster, after restarting the first node (master0), both coredns pods were scheduled onto the same node (master1), even though the third node would have had enough free resources:

```
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE
coredns-78fcdf6894-frwxk   1/1     Running   1          12h   10.15.224.4   master1
coredns-78fcdf6894-ls7nq   1/1     Running   1          12h   10.15.224.3   master1
```
What did you expect to happen?
kubeadm should generate the coredns Deployment with pod anti-affinity rules, so that the pods are spread across nodes and the distribution looks something like this:

```
coredns-78fcdf6894-frwxk   1/1     Running   1          12h   10.15.224.4   master1
coredns-78fcdf6894-ls7nq   1/1     Running   1          12h   10.15.224.3   master2
```
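For illustration, a sketch of what such a rule could look like in the Deployment's pod template spec, using a *preferred* (soft) anti-affinity on the `k8s-app: kube-dns` label that the coredns pods carry. The exact weight and whether kubeadm should prefer soft over hard (`requiredDuringScheduling...`) anti-affinity are open questions, not a claim about kubeadm's current manifest:

```yaml
affinity:
  podAntiAffinity:
    # Soft rule: prefer not to co-locate coredns replicas on one node,
    # but still allow scheduling if only one node is available.
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: k8s-app
            operator: In
            values:
            - kube-dns
        topologyKey: kubernetes.io/hostname
```

A soft rule would avoid the failure mode above without blocking single-node clusters, where a hard rule would leave one replica Pending.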