NFS example: unable to mount the service #24249
Comments
Same behavior with iptables mode. Further info.
When I tested this a few days ago, if I waited long enough (5 to 10 minutes), the timeout error eventually went away and everything was good. Do you see the same behavior? cc @kubernetes/sig-network @kubernetes/rh-networking
Env: AWS + Kubernetes 1.2.0. I probably found the bug/problem for this. I had the same issues with the redis-cluster bootstrap. IMO it has something to do with the iptables rules. If I create an RC and throw in a few more instances of the same pod, everything works fine. I saw that the load balancer was implemented with iptables as well. Here are the comparisons. Single POD + Service DOES NOT WORK:
Multiple PODS + Service WORKS:
Single POD promoted with RC to multiple PODs WORKS:
Actually, the SVC does not even redirect packets to the last POD. Even an RC with one instance makes the service hang.
You can't access yourself through your Service VIP with the iptables kube-proxy (i.e. a 1-endpoint Svc: kubectl exec into the endpoint, curl the svc IP won't work) without either hairpin mode on all your veths (for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done) or a promiscuous-mode cbr0 (netstat -i).
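For reference, those two checks can be run directly on the node; this is just the commands from the comment above, formatted for copy-paste (it assumes the bridge is named cbr0):

```sh
# Every veth attached to the cbr0 bridge should report hairpin_mode = 1...
for intf in /sys/devices/virtual/net/cbr0/brif/*; do
  echo "$intf: $(cat $intf/hairpin_mode)"
done

# ...or the bridge itself should be in promiscuous mode
# (look for the P flag in the netstat output, or PROMISC from ip link).
netstat -i
ip link show cbr0
```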
The default cluster setup should give you promiscuous mode. We had a bug in between where we weren't defaulting to the old hairpin behavior, but that should be fixed with #23325. What hairpin-mode is your kubelet operating with (it's a flag you can find via ps | grep)?
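A quick way to check that flag on a node (a sketch; the valid values are promiscuous-bridge, hairpin-veth, and none):

```sh
# Show the kubelet command line and pull out --hairpin-mode if it is set
# (no match means the flag was not set explicitly and the default applies).
ps aux | grep '[k]ubelet'
ps aux | grep '[k]ubelet' | grep -o -- '--hairpin-mode[= ]*[^ ]*'
```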
@bprashanth for my test, I had:
Are you saying that should not work by default?
Are you using NFSv3 or NFSv4?
As long as the client pod is talking to the Service VIP that's talking to the NFS server pod (and these two are different pods), hairpin mode doesn't matter. It only matters if you're talking to a service that load-balances back to the same pod. I couldn't make out if that was what @ovanes was asking about. All setups should work by default. There was a window (before #23325, after 1.2) where one configuration didn't work (i.e. if you're running with --configure-cbr0=false but still using a Linux bridge), for the specific hairpin-mode case.
@bprashanth yes, that's what I have (client pod -> service VIP -> nfs server pod). The first time it attempted to mount the PV into the client pod, it failed (connection timed out). I then waited several minutes and it magically cleared up and was able to connect and mount.
There's nothing in the iptables dumps that indicates there's a bug - it all looks exactly as I expect it to. http://docs.k8s.io/user-guide/debugging-services might apply, though I realize it needs an update for iptables.
@bprashanth Indeed, talking to the service from a non-service-related container distributes load to all containers. Thanks for the clarification.
I'm having the same issue. After being forced to delete all the nodes (they would not unmount EBS), the nodes were recreated with labels, and since then it fails for me too with a connection timeout. The pod itself listens on 2049 and 111; telnet from inside the nfs-server pod to the service works, but from busybox it doesn't.
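For anyone trying to reproduce this, that busybox check can be run along these lines (the service name and namespace below are placeholders for your own setup):

```sh
# Probe the NFS ports on the service from a throwaway busybox pod;
# a hang or timeout here matches the symptom described above.
kubectl run -it --rm nfs-probe --image=busybox --restart=Never -- \
  telnet nfs-server.default.svc.cluster.local 2049
kubectl run -it --rm nfs-probe --image=busybox --restart=Never -- \
  telnet nfs-server.default.svc.cluster.local 111
```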
The nfs image is messed up - bring up your own nfs-server and it works fine. If you need to tweak the mount points beyond the exports, look at the entrypoint of any nfs-server Dockerfile:
run.sh
nfs-kernel-server:
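The original run.sh isn't reproduced here, but a minimal sketch of that kind of nfs-kernel-server entrypoint looks roughly like the following (the export path, options and the fixed mountd port are placeholders, and the container has to run privileged with the host's nfsd kernel modules available):

```sh
#!/bin/sh
set -e

# Export a single directory; path and options are placeholders.
echo "/exports *(rw,sync,fsid=0,no_subtree_check,no_root_squash)" > /etc/exports

rpcbind                                    # portmapper on 111
rpc.nfsd 8                                 # start kernel nfsd threads, listening on 2049
exportfs -ar                               # (re)export everything in /etc/exports
exec rpc.mountd --port 20048 --foreground  # keep mountd in the foreground on a fixed port
```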
UPDATE 5/22/2017: in order to have NFS successfully mount via a service, you need to make sure all its ports are fixed and not dynamically assigned. Check what ports are published by RPC by connecting to the running NFS server pod:
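For example (a sketch; the pod name is a placeholder):

```sh
# List the programs/ports registered with the portmapper inside the NFS server pod.
# Everything other than portmapper (111) and nfs (2049) -- mountd, statd, nlockmgr --
# is typically given a random port unless the daemons are started with fixed ports.
kubectl exec -it <nfs-server-pod> -- rpcinfo -p
```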
nfs_server_service.yaml
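The original file isn't reproduced above, but a service covering the fixed ports looks roughly like this sketch (the mountd port 20048 is only an example and must match the fixed port the server is started with; matching UDP entries may also be needed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  selector:
    role: nfs-server        # placeholder -- must match the NFS server pod's labels
  ports:
    - name: nfs
      port: 2049
    - name: rpcbind
      port: 111
    - name: mountd
      port: 20048           # example fixed port for rpc.mountd
```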
Check that you can mount from the pod directly. To unmount a disconnected / dead pod volume, use a forced unmount. Then check that the service is mounting (a rough sketch of these steps follows):
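Roughly (paths, IPs and the export name are placeholders):

```sh
# 1. Mount straight from the pod IP -- this should always work:
mount -t nfs <nfs-pod-ip>:/exports /mnt/test

# 2. If a previous mount is stuck against a dead pod, force a lazy unmount:
umount -f -l /mnt/test

# 3. Then mount through the service and confirm it behaves the same:
mount -t nfs <nfs-service-ip>:/exports /mnt/test
```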
Hi NFS users, support for NFS on the GCI image is now available in releases 1.4.7 and 1.5. Please give it a try and let us know if you have any problem. Thanks!
I have tried a couple of nfs-server images from Docker Hub, and I've built an image based on what @innovia supplied above. I've used a Deployment and tried 1 and many replicas. In all cases, I can access the NFS servers when using pod IPs, but not the service IP. Running k8s 1.5.3 on a bare-metal Ubuntu setup with kubeadm.
@groundnuty could you provide more details about your setup? Also, could you try to use the service name? I have a simple setup which uses the service name and it works for me.
@jingxu97 how is that at all related? You're talking about GlusterFS, not NFS. |
The way to mount NFS and GlusterFS is very similar. It would be very helpful if you could provide more details about your setup so that I can have a try and see the problem. Thanks!
@jingxu97 I was not able to test GlusterFS (I would need to install some packages on the nodes to make it work), but after modifying your example for NFS it seems to be working well. As some people here tested this against single-pod and multiple-pod setups, I provide single and multiple files for testing. To be fair, I don't know why it started to work. My own use case also works now. The only thing I did was to install glusterfs-client and glusterfs-common on all nodes (and the master). All nodes are running Ubuntu 16.04.

UPDATE: Uninstalled both packages from all nodes, still working.

IMPORTANT: Please modify the nfs volume server field in the client pods so that it matches the service (sketched below).

Single pod test:
Multiple pods test:
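The test files themselves aren't reproduced here, but the key point from the note above, the nfs volume's server field in the client pod pointing at the service, looks roughly like the sketch below (names and the export path are placeholders; the service's cluster IP works too if the nodes can't resolve cluster DNS names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: nfs
          mountPath: /mnt/nfs
  volumes:
    - name: nfs
      nfs:
        server: nfs-server.default.svc.cluster.local   # points at the service, not a pod IP
        path: /exports                                  # placeholder export path
```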
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I found a new way to solve this problem: you can set the nfs-server ports to be fixed, then mount the nfs-server via the service. You can refer to https://wiki.debian.org/SecuringNFS.
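In the spirit of that wiki page, the idea on a Debian-based server is roughly the following (the port numbers are arbitrary examples; what matters is that they are fixed, and the Kubernetes service then has to expose the same ports):

```sh
# /etc/default/nfs-common -- pin statd's ports
STATDOPTS="--port 32765 --outgoing-port 32766"

# /etc/default/nfs-kernel-server -- pin mountd's port
RPCMOUNTDOPTS="--port 32767"

# /etc/modprobe.d/local.conf -- pin the lock manager's ports
options lockd nlm_udpport=32768 nlm_tcpport=32768
```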
Hi guys,
I just noticed that I'm not able to mount the NFS service as described in the nfs example.
Mounting using the pod's IP works fine:
But when I use the service it doesn't work:
There are no errors in the kube-proxy logs:
The forwarding rule seems to be configured:
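For reference, those rules can be inspected on a node roughly like this (nfs-server is a placeholder for the actual service name; the userspace proxy programs the KUBE-PORTALS-* chains, the iptables proxy uses KUBE-SERVICES):

```sh
# Dump the NAT rules kube-proxy programmed for the service
# (kube-proxy tags its rules with the service name in a comment).
sudo iptables-save -t nat | grep -i nfs-server

# Userspace proxy mode: the per-service redirect rules live in these chains.
sudo iptables -t nat -L KUBE-PORTALS-CONTAINER -n | grep -i nfs
sudo iptables -t nat -L KUBE-PORTALS-HOST -n | grep -i nfs
```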
This has been reproduced with flannel and calico on GCE and baremetal.
I'm going to try with proxy-mode = iptables, though I don't know if that will change anything.
Do you have any idea?