With iptables proxy mode, each node can only connect to services where the pod is on the same node. #18259
Comments
If you create two containers on two different nodes, can you ping one container from the other using their container IP addresses?
Yes, both containers can ping one another, and if I use wget on the container/pod IP directly from either node, it works as expected. I'm using the ubuntu provider with the flannel overlay. If I switch back to the userspace proxy, everything works correctly.
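For anyone reproducing this check, a minimal sketch of the distinction being tested here (the pod and service IPs below are placeholders, not values from this cluster):

```sh
# Pod-to-pod traffic across nodes works, so the overlay itself is fine:
ping -c 3 172.16.54.4            # pod IP on the other node (placeholder)
wget -qO- http://172.16.54.4     # direct pod IP succeeds from either node

# Service VIP traffic is what fails when the backing pod is on a remote node:
wget -qO- http://192.168.3.10    # service cluster IP (placeholder)
```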
I think an iptables masquerade rule is probably missing somewhere. Can you run sudo iptables-save and paste the output into the issue?
Here is the output of iptables-save from node 94.23.121.2:
And here is the output from node 51.255.127.211:
I have just tested this with the --masquerade-all=true option and it seems to resolve the connectivity issues. Looking at the difference between the iptables rules before and after enabling this option, it now makes sense why this wasn't working: https://www.diffchecker.com/rilupnot
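For context, a sketch of how that flag is passed; the other flags shown are assumptions about a typical invocation, not this cluster's actual configuration:

```sh
# kube-proxy with the iptables proxier, masquerading all service traffic.
# This hides the missing overlay masquerade rule rather than fixing it:
kube-proxy --master=http://<master-ip>:8080 \
  --proxy-mode=iptables \
  --masquerade-all=true
```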
That flag exists to handle cases like this, but you don't want to use it.
This test cluster was brought online with the scripts for ubuntu, which do not seem to set the ip-masq option for flannel. I'll enable it on both nodes and test it out. I assume this flannel option will be set to true once the iptables proxy mode becomes the default in kube-proxy?
I have tested this with flannel using --ip-masq=true and kube-proxy using --proxy-mode=iptables --masquerade-all=false, which does not seem to work. Here is the output of iptables-save from node 94.23.121.2:
And here is the output from node 51.255.127.211:
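A sketch of the combination being tested in that comment; the endpoints and addresses are placeholder assumptions:

```sh
# flannel masquerades traffic leaving the overlay network:
flanneld --etcd-endpoints=http://<master-ip>:4001 --ip-masq=true

# kube-proxy uses the iptables proxier without blanket masquerading:
kube-proxy --master=http://<master-ip>:8080 \
  --proxy-mode=iptables \
  --masquerade-all=false
```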
I think you have a slightly older flannel which is missing a masquerade rule. Try running:

If that fails, we might take you up on your offer of access to your cluster.
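The exact command isn't preserved above. For reference, a sketch of how to check for the masquerade rule that newer flannel versions install with --ip-masq (the overlay CIDR shown is a placeholder assumption):

```sh
# Look for flannel's POSTROUTING masquerade rule; when it is missing,
# cross-node service traffic keeps its pod source IP and replies go astray:
sudo iptables -t nat -S POSTROUTING | grep MASQUERADE
# Expected, roughly: -A POSTROUTING -s 172.16.0.0/16 ! -d 172.16.0.0/16 -j MASQUERADE
```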
I've upgraded flannel on both nodes from version 0.5.3 to 0.5.5 and can confirm that this works. Running iptables-save now on the first node returns:
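A sketch of one way to perform that upgrade; the download URL, install path, and service name are assumptions about this particular setup:

```sh
# Swap in the flannel 0.5.5 release binary on each node:
wget https://github.com/coreos/flannel/releases/download/v0.5.5/flannel-0.5.5-linux-amd64.tar.gz
tar -xzf flannel-0.5.5-linux-amd64.tar.gz
sudo systemctl stop flanneld
sudo cp flannel-0.5.5/flanneld /usr/local/bin/flanneld
sudo systemctl start flanneld
```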
So the problem is resolved, and we just need to update the docs to point to 0.5.5? Please confirm.
Yes @thockin, problem resolved :) thank you!
For others who face this and reach here: after this, I upgraded to Kubernetes 1.12.4. With this version, the same thing happens if I don't use --proxy-mode=userspace; however, if I want to use --proxy-mode=iptables, I have to drop the --cluster-cidr flag. So with proxy-mode iptables, I run kube-proxy without --cluster-cidr.
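A sketch of the two invocations that comment contrasts on 1.12.4; the CIDR is a placeholder, and whether dropping --cluster-cidr is a proper fix or just a workaround isn't established in this thread:

```sh
# Reported to work for this commenter:
kube-proxy --proxy-mode=iptables

# Reported to show the same-node-only behavior for this commenter:
kube-proxy --proxy-mode=iptables --cluster-cidr=10.244.0.0/16
```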
Original issue description:
Hi, I have tested this with v1.1.2 and v1.2.0-alpha.3.
When using --proxy-mode=iptables on the kube-proxy processes, each node can only connect to services where the pod(s) are hosted on the same node.
For example, I have a 2-node test cluster:
I run 2 separate nginx pods and expose both of them:
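The actual commands aren't preserved above; a sketch of what that setup could look like with 1.1-era kubectl (pod and service names are assumptions):

```sh
# One nginx pod landing on each node, each exposed as its own service.
# In 1.1, 'kubectl run' creates a replication controller:
kubectl run nginx-1 --image=nginx --port=80
kubectl run nginx-2 --image=nginx --port=80
kubectl expose rc nginx-1 --port=80
kubectl expose rc nginx-2 --port=80
```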
Both of these pods are running successfully, one being scheduled to each of the nodes:
And the services registered successfully:
If I SSH into one of the nodes and attempt to wget both of the nginx service IPs, only one of them will connect and retrieve the index:
If I SSH into the second node and attempt to wget both service IPs, again only one of them responds, but the result is reversed (i.e. only the service whose pod is hosted on that node connects):
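To make the symptom concrete, a sketch with placeholder service IPs (the real outputs aren't preserved above):

```sh
# From the node hosting the nginx-1 pod:
wget -qO- http://192.168.3.10    # nginx-1 service -> responds
wget -qO- http://192.168.3.11    # nginx-2 service -> hangs and times out

# From the node hosting the nginx-2 pod, the pattern inverts:
wget -qO- http://192.168.3.10    # hangs and times out
wget -qO- http://192.168.3.11    # responds
```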
Am I doing something wrong, or do I need to provide any further information?
I can provide full root access to this cluster for testing if required, as it has just been set up on some throwaway virtual servers.