k8s-1.10: One of the kube-proxy pods failed to come up after restart #63064
/sig storage
@kubernetes/sig-storage-bugs
I am not sure it's related to mount propagation at all. It would produce different messages.
This seems to be some issue with Secret volumes. Please check
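(Not part of the original comment, but a minimal sketch of what one might check here, assuming the kubeadm-default service account and token-secret naming for kube-proxy:)

```sh
# Verify the kube-proxy service account exists and has a token secret;
# the kube-proxy-token-* name follows the kubeadm default convention.
kubectl get serviceaccount kube-proxy --namespace=kube-system
kubectl get secrets --namespace=kube-system | grep kube-proxy-token
```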
Yes, this doesn't seem to be related to mount propagation, but something new in 1.10 compared to 1.9 is breaking it. Checked the path below and it has all the required data.
Mount propagation is "private,slave"
After changing mount propagation to shared and restarting Docker, it started working.
There is something really wrong in 1.10, as the same scenario works fine with k8s 1.9.1.
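(A quick, generic way to inspect the propagation flags mentioned above on the affected node; /var/lib/kubelet is the default kubelet root and may differ per setup:)

```sh
# Show the propagation mode of the root mount and the kubelet data dir.
findmnt -o TARGET,PROPAGATION /
findmnt -o TARGET,PROPAGATION /var/lib/kubelet
```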
You should retry with #62633 (upcoming 1.10.3?) where we change the default back to private and use slave/shared only when explicitly requested.
Sure, will retry with 1.10.3 (once available) and update.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Deployed k8s-1.10 using kubeadm (60-node setup) and everything was up & running.
Restarted one of the kube-proxy pods and it failed to come up.
The pod was up & running, but after restart it went into an error state.
$ kubectl get pod --namespace=kube-system -o wide |grep kube-proxy-pj4xw
kube-proxy-pj4xw 0/1 CrashLoopBackOff 8 17m 1.0.0.76 minion-30-5-0-5
Below is the error:
$ kubectl logs --namespace=kube-system kube-proxy-pj4xw
I0424 06:24:47.961665 1 feature_gate.go:226] feature gates: &{{} map[]}
error: unable to read certificate-authority /var/run/secrets/kubernetes.io/serviceaccount/ca.crt for default due to open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
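(One hedged way to confirm whether the Secret volume actually reached the node is to look under the kubelet pods directory; the path below assumes the default kubelet root, and <pod-uid> is a placeholder for the failing pod's UID:)

```sh
# Get the UID of the failing pod.
kubectl get pod --namespace=kube-system kube-proxy-pj4xw \
  -o jsonpath='{.metadata.uid}'
# On the node itself, inspect the projected secret volume contents.
ls -lR /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~secret/
```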
What you expected to happen:
The pod should be up and running after restart.
How to reproduce it (as minimally and precisely as possible):
Delete one of the kube-proxy pods.
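(A minimal repro sketch based on the step above; kube-proxy is managed by a DaemonSet, so the replacement pod comes back under a new name:)

```sh
# Delete the pod named in this report and watch its replacement start.
kubectl delete pod --namespace=kube-system kube-proxy-pj4xw
kubectl get pods --namespace=kube-system -o wide -w | grep kube-proxy
```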
Anything else we need to know?:
This issue was not seen with k8s-1.9.1.
Disabled the MountPropagation feature through featureGates; it was not working with MountPropagation enabled either, so tried disabling it (see the first sketch below).
Also added "MountFlags=shared" to /etc/systemd/system/multi-user.target.wants/docker.service, as without this all test-pod deployments were failing.
After running "mount --make-rshared /" and restarting the Docker service it started to work, but after deleting the pod again it once more failed to come up (see the second sketch below).
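(For the featureGates item above, a sketch of disabling the gate on the kubelet via a systemd drop-in; the drop-in file name is illustrative, and this assumes a kubeadm-provisioned kubelet whose unit reads KUBELET_EXTRA_ARGS:)

```sh
# Disable the MountPropagation feature gate on the kubelet, then restart it.
cat <<'EOF' > /etc/systemd/system/kubelet.service.d/20-mount-propagation.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--feature-gates=MountPropagation=false"
EOF
systemctl daemon-reload
systemctl restart kubelet
```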
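(And a sketch of the MountFlags / --make-rshared workaround, written as a systemd drop-in rather than editing the unit file in place; note that mount --make-rshared / does not persist across reboots:)

```sh
# Make Docker's mount namespace shared so mount propagation can work.
mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' > /etc/systemd/system/docker.service.d/10-mount-flags.conf
[Service]
MountFlags=shared
EOF
# Recursively mark the root mount shared (not persistent across reboots).
mount --make-rshared /
systemctl daemon-reload
systemctl restart docker
```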
Environment:
- Kubernetes version (use kubectl version):
- Cloud provider or hardware configuration: Deployed using kubeadm (60-node setup)
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
- Docker version (docker version):
- Pod details (kubectl describe pod --namespace=kube-system kube-proxy-pj4xw):