With iptables proxy mode, each node can only connect to services where the pod is on the same node. #18259

Closed
maclof opened this issue Dec 6, 2015 · 13 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@maclof
Contributor

maclof commented Dec 6, 2015

Hi, I have tested this with v1.1.2 and v1.2.0-alpha.3.

When using --proxy-mode=iptables on the kube-proxy processes, each node can only connect to services whose pod(s) are hosted on that same node.

For example, I have a 2 node test cluster:

root@development:~/kubernetes-test-cluster/cluster# kubectl get nodes
NAME             LABELS                                  STATUS    AGE
51.255.127.211   kubernetes.io/hostname=51.255.127.211   Ready     32m
94.23.121.2      kubernetes.io/hostname=94.23.121.2      Ready     39m
root@development:~/kubernetes-test-cluster/cluster# kubectl describe node 94.23.121.2
Name:                   94.23.121.2
Labels:                 kubernetes.io/hostname=94.23.121.2
CreationTimestamp:      Sun, 06 Dec 2015 18:35:44 +0000
Phase:
Conditions:
  Type          Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ────          ──────  ─────────────────                       ──────────────────                      ──────                          ───────
  Ready         True    Sun, 06 Dec 2015 19:12:21 +0000         Sun, 06 Dec 2015 18:35:44 +0000         KubeletReady                    kubelet is posting ready status
  OutOfDisk     False   Sun, 06 Dec 2015 19:12:21 +0000         Sun, 06 Dec 2015 18:35:44 +0000         KubeletHasSufficientDisk        kubelet has sufficient disk space available
Addresses:      94.23.121.2,94.23.121.2
Capacity:
 cpu:           1
 memory:        4047624Ki
 pods:          40
System Info:
 Machine ID:                    0cb2148a77e3974bb27af24856647592
 System UUID:
 Boot ID:                       da8a4db8-8912-41a6-99db-3338851efe01
 Kernel Version:                3.19.0-25-generic
 OS Image:                      Ubuntu 14.04.3 LTS
 Container Runtime Version:     docker://1.9.1
 Kubelet Version:               v1.1.2
 Kube-Proxy Version:            v1.1.2
ExternalID:                     94.23.121.2
Non-terminated Pods:            (1 in total)
  Namespace                     Name                    CPU Requests    CPU Limits      Memory Requests Memory Limits
  ─────────                     ────                    ────────────    ──────────      ─────────────── ─────────────
  default                       nginx-8kxz7             0 (0%)          0 (0%)          0 (0%)          0 (0%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ────────────  ──────────      ─────────────── ─────────────
  0 (0%)        0 (0%)          0 (0%)          0 (0%)
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Reason                  Message
  ─────────     ────────        ─────   ────                            ─────────────   ──────                  ───────
  37m           37m             1       {kube-proxy 94.23.121.2}                        Starting                Starting kube-proxy.
  37m           37m             1       {kubelet 94.23.121.2}                           Starting                Starting kubelet.
  37m           37m             1       {kubelet 94.23.121.2}                           NodeReady               Node 94.23.121.2 status is now: NodeReady
  37m           37m             1       {kubelet 94.23.121.2}                           NodeHasSufficientDisk   Node 94.23.121.2 status is now: NodeHasSufficientDisk
  36m           36m             1       {controllermanager }                            RegisteredNode          Node 94.23.121.2 event: Registered Node 94.23.121.2 in NodeController
  21m           21m             1       {kube-proxy 94.23.121.2}                        Starting                Starting kube-proxy.
  21m           21m             1       {kube-proxy 94.23.121.2}                        Starting                Starting kube-proxy.
  19m           19m             1       {kube-proxy 94.23.121.2}                        Starting                Starting kube-proxy.
  19m           19m             1       {kube-proxy 94.23.121.2}                        Starting                Starting kube-proxy.
  18m           18m             1       {kube-proxy 94.23.121.2}                        Starting                Starting kube-proxy.
  18m           18m             1       {kube-proxy 94.23.121.2}                        Starting                Starting kube-proxy.
root@development:~/kubernetes-test-cluster/cluster# kubectl describe node 51.255.127.211
Name:                   51.255.127.211
Labels:                 kubernetes.io/hostname=51.255.127.211
CreationTimestamp:      Sun, 06 Dec 2015 18:43:15 +0000
Phase:
Conditions:
  Type          Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ────          ──────  ─────────────────                       ──────────────────                      ──────                          ───────
  Ready         True    Sun, 06 Dec 2015 19:15:53 +0000         Sun, 06 Dec 2015 18:43:15 +0000         KubeletReady                    kubelet is posting ready status
  OutOfDisk     False   Sun, 06 Dec 2015 19:15:53 +0000         Sun, 06 Dec 2015 18:43:15 +0000         KubeletHasSufficientDisk        kubelet has sufficient disk space available
Addresses:      51.255.127.211,51.255.127.211
Capacity:
 cpu:           1
 memory:        4047624Ki
 pods:          40
System Info:
 Machine ID:                    42475a7a2193f6002692073f56647596
 System UUID:
 Boot ID:                       a3afc6bd-a8a6-4dd9-9921-e6f8385e7bec
 Kernel Version:                3.19.0-25-generic
 OS Image:                      Ubuntu 14.04.3 LTS
 Container Runtime Version:     docker://1.9.1
 Kubelet Version:               v1.1.2
 Kube-Proxy Version:            v1.1.2
ExternalID:                     51.255.127.211
Non-terminated Pods:            (1 in total)
  Namespace                     Name                    CPU Requests    CPU Limits      Memory Requests Memory Limits
  ─────────                     ────                    ────────────    ──────────      ─────────────── ─────────────
  default                       nginx2-zg2h7            0 (0%)          0 (0%)          0 (0%)          0 (0%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ────────────  ──────────      ─────────────── ─────────────
  0 (0%)        0 (0%)          0 (0%)          0 (0%)
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Reason                  Message
  ─────────     ────────        ─────   ────                            ─────────────   ──────                  ───────
  33m           33m             1       {kube-proxy 51.255.127.211}                     Starting                Starting kube-proxy.
  33m           33m             1       {kubelet 51.255.127.211}                        Starting                Starting kubelet.
  33m           33m             1       {kubelet 51.255.127.211}                        NodeReady               Node 51.255.127.211 status is now: NodeReady
  33m           33m             1       {kubelet 51.255.127.211}                        NodeHasSufficientDisk   Node 51.255.127.211 status is now: NodeHasSufficientDisk
  33m           33m             1       {controllermanager }                            RegisteredNode          Node 51.255.127.211 event: Registered Node 51.255.127.211 in NodeController
  20m           20m             1       {kube-proxy 51.255.127.211}                     Starting                Starting kube-proxy.

I run two separate nginx pods and expose both of them:

kubectl run nginx --image=nginx --replicas=1 --port=80
kubectl expose rc nginx --port=80
kubectl run nginx2 --image=nginx --replicas=1 --port=80
kubectl expose rc nginx2 --port=80

Both of these pods are running successfully, one being scheduled to each of the nodes:

root@development:~/kubernetes-test-cluster/cluster# kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
nginx-8kxz7    1/1       Running   0          20m
nginx2-zg2h7   1/1       Running   0          13m

And the services registered successfully:

root@development:~/kubernetes-test-cluster/cluster# kubectl get svc
NAME         CLUSTER_IP       EXTERNAL_IP   PORT(S)   SELECTOR     AGE
kubernetes   172.16.0.1       <none>        443/TCP   <none>       43m
nginx        172.16.46.239    <none>        80/TCP    run=nginx    17m
nginx2       172.19.216.162   <none>        80/TCP    run=nginx2   14m

If I SSH into one of the nodes and attempt to wget both of the nginx service IPs, only one of them will connect and retrieve the index:

root@master:~# wget 172.16.46.239
--2015-12-06 20:01:55--  http://172.16.46.239/
Connecting to 172.16.46.239:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 [text/html]
Saving to: ‘index.html.1’

100%[====================================>] 612         --.-K/s   in 0s

2015-12-06 20:01:55 (64.8 MB/s) - ‘index.html.1’ saved [612/612]
root@master:~# wget 172.19.216.162
--2015-12-06 20:09:53--  http://172.19.216.162/
Connecting to 172.19.216.162:80... failed: Connection timed out.
Retrying.

--2015-12-06 20:12:01--  (try: 2)  http://172.19.216.162/
Connecting to 172.19.216.162:80... failed: Connection timed out.
Retrying.

--2015-12-06 20:14:10--  (try: 3)  http://172.19.216.162/
Connecting to 172.19.216.162:80... failed: Connection timed out.
Retrying.

--2015-12-06 20:16:21--  (try: 4)  http://172.19.216.162/
Connecting to 172.19.216.162:80... failed: Connection timed out.
Retrying.

--2015-12-06 20:18:32--  (try: 5)  http://172.19.216.162/
Connecting to 172.19.216.162:80... clear

If I SSH into the second node and attempt to wget both service IPs, again only one of them responds, but in reverse (i.e. only the service whose pod is hosted on that node will connect):

root@node1:~# wget 172.19.216.162
--2015-12-06 20:10:00--  http://172.19.216.162/
Connecting to 172.19.216.162:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 [text/html]
Saving to: ‘index.html.1’

100%[======================================>] 612         --.-K/s   in 0s

2015-12-06 20:10:00 (60.0 MB/s) - ‘index.html.1’ saved [612/612]

root@node1:~# wget 172.16.46.239
--2015-12-06 20:10:05--  http://172.16.46.239/
Connecting to 172.16.46.239:80... failed: Connection timed out.
Retrying.

--2015-12-06 20:12:14--  (try: 2)  http://172.16.46.239/
Connecting to 172.16.46.239:80... failed: Connection timed out.
Retrying.

--2015-12-06 20:14:23--  (try: 3)  http://172.16.46.239/
Connecting to 172.16.46.239:80... failed: Connection timed out.
Retrying.

--2015-12-06 20:16:33--  (try: 4)  http://172.16.46.239/
Connecting to 172.16.46.239:80... failed: Connection timed out.
Retrying.

--2015-12-06 20:18:44--  (try: 5)  http://172.16.46.239/
Connecting to 172.16.46.239:80... failed: Connection timed out.
Retrying.

I'm not sure whether I'm doing something wrong or whether I need to provide any further information.

I can provide full root access to this cluster for testing if required, as it has just been set up on some throwaway virtual servers.

@mikedanese added the kind/support and team/cluster labels Dec 6, 2015
@ArtfulCoder
Contributor

If you create two containers on two different nodes, can you ping one container from the other using their container IP address ?
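A minimal sketch of that check, using the pod names from the listings above (the exact describe output may vary by version; the pod IP placeholder below is whatever describe reports):

kubectl describe pod nginx-8kxz7 | grep IP:
kubectl describe pod nginx2-zg2h7 | grep IP:
# then, from the node that does NOT host that pod, ping the reported pod IP:
ping -c 3 <pod IP from describe>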


@maclof
Contributor Author

maclof commented Dec 6, 2015

Yes, both containers can ping one another, and if I use wget on the container/pod IP directly from either node it works as expected.

I'm using the Ubuntu provider with the flannel overlay.

If I switch back to the userspace proxy everything works correctly.
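For context, a minimal sketch of the two kube-proxy invocations being compared (all other flags from the Ubuntu scripts are assumed unchanged):

kube-proxy --proxy-mode=userspace   # works here: connections are proxied by the kube-proxy process itself
kube-proxy --proxy-mode=iptables    # exhibits the problem: forwarding relies entirely on the DNAT/SNAT rules shown below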

@ArtfulCoder
Contributor

I think an iptables masquerade rule is probably missing somewhere.

Can you run sudo iptables-save and paste the output in the issue?
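If the full dump is noisy, a quick way to look at just the NAT-table masquerade/mark rules (plain iptables tooling, nothing Kubernetes-specific):

sudo iptables-save -t nat | grep -iE 'masquerade|mark'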


@maclof
Contributor Author

maclof commented Dec 6, 2015

Here is the output of iptables-save from node 94.23.121.2:

root@master:~# iptables-save
# Generated by iptables-save v1.4.21 on Mon Dec  7 00:13:56 2015
*nat
:PREROUTING ACCEPT [4:184]
:INPUT ACCEPT [4:184]
:OUTPUT ACCEPT [4:240]
:POSTROUTING ACCEPT [4:240]
:DOCKER - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SEP-CT24JF4JAT2YBPCZ - [0:0]
:KUBE-SEP-VMOA3QYF6HZH36PJ - [0:0]
:KUBE-SEP-X4TFXMB5IM6ONOT5 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-4N57TFCL4MD7ZTDA - [0:0]
:KUBE-SVC-6N4SJQIF3IX3FORG - [0:0]
:KUBE-SVC-KN7BHMGRB3FSVEMI - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.20.14.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4d415351 -j MASQUERADE
-A KUBE-SEP-CT24JF4JAT2YBPCZ -s 172.20.37.2/32 -m comment --comment "default/nginx2:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-CT24JF4JAT2YBPCZ -p tcp -m comment --comment "default/nginx2:" -m tcp -j DNAT --to-destination 172.20.37.2:80
-A KUBE-SEP-VMOA3QYF6HZH36PJ -s 172.20.14.2/32 -m comment --comment "default/nginx:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-VMOA3QYF6HZH36PJ -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 172.20.14.2:80
-A KUBE-SEP-X4TFXMB5IM6ONOT5 -s 94.23.121.2/32 -m comment --comment "default/kubernetes:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-X4TFXMB5IM6ONOT5 -p tcp -m comment --comment "default/kubernetes:" -m tcp -j DNAT --to-destination 94.23.121.2:6443
-A KUBE-SERVICES -d 172.16.0.1/32 -p tcp -m comment --comment "default/kubernetes: cluster IP" -m tcp --dport 443 -j KUBE-SVC-6N4SJQIF3IX3FORG
-A KUBE-SERVICES -d 172.16.46.239/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
-A KUBE-SERVICES -d 172.19.216.162/32 -p tcp -m comment --comment "default/nginx2: cluster IP" -m tcp --dport 80 -j KUBE-SVC-KN7BHMGRB3FSVEMI
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -j KUBE-SEP-VMOA3QYF6HZH36PJ
-A KUBE-SVC-6N4SJQIF3IX3FORG -m comment --comment "default/kubernetes:" -j KUBE-SEP-X4TFXMB5IM6ONOT5
-A KUBE-SVC-KN7BHMGRB3FSVEMI -m comment --comment "default/nginx2:" -j KUBE-SEP-CT24JF4JAT2YBPCZ
COMMIT
# Completed on Mon Dec  7 00:13:56 2015
# Generated by iptables-save v1.4.21 on Mon Dec  7 00:13:56 2015
*filter
:INPUT ACCEPT [16545:3656888]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [16345:3762230]
:DOCKER - [0:0]
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
COMMIT
# Completed on Mon Dec  7 00:13:56 2015

And here is the output from node 51.255.127.211:

root@node1:~# iptables-save
# Generated by iptables-save v1.4.21 on Mon Dec  7 00:15:26 2015
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SEP-CT24JF4JAT2YBPCZ - [0:0]
:KUBE-SEP-VMOA3QYF6HZH36PJ - [0:0]
:KUBE-SEP-X4TFXMB5IM6ONOT5 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-4N57TFCL4MD7ZTDA - [0:0]
:KUBE-SVC-6N4SJQIF3IX3FORG - [0:0]
:KUBE-SVC-KN7BHMGRB3FSVEMI - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.20.37.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4d415351 -j MASQUERADE
-A KUBE-SEP-CT24JF4JAT2YBPCZ -s 172.20.37.2/32 -m comment --comment "default/nginx2:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-CT24JF4JAT2YBPCZ -p tcp -m comment --comment "default/nginx2:" -m tcp -j DNAT --to-destination 172.20.37.2:80
-A KUBE-SEP-VMOA3QYF6HZH36PJ -s 172.20.14.2/32 -m comment --comment "default/nginx:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-VMOA3QYF6HZH36PJ -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 172.20.14.2:80
-A KUBE-SEP-X4TFXMB5IM6ONOT5 -s 94.23.121.2/32 -m comment --comment "default/kubernetes:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-X4TFXMB5IM6ONOT5 -p tcp -m comment --comment "default/kubernetes:" -m tcp -j DNAT --to-destination 94.23.121.2:6443
-A KUBE-SERVICES -d 172.19.216.162/32 -p tcp -m comment --comment "default/nginx2: cluster IP" -m tcp --dport 80 -j KUBE-SVC-KN7BHMGRB3FSVEMI
-A KUBE-SERVICES -d 172.16.0.1/32 -p tcp -m comment --comment "default/kubernetes: cluster IP" -m tcp --dport 443 -j KUBE-SVC-6N4SJQIF3IX3FORG
-A KUBE-SERVICES -d 172.16.46.239/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -j KUBE-SEP-VMOA3QYF6HZH36PJ
-A KUBE-SVC-6N4SJQIF3IX3FORG -m comment --comment "default/kubernetes:" -j KUBE-SEP-X4TFXMB5IM6ONOT5
-A KUBE-SVC-KN7BHMGRB3FSVEMI -m comment --comment "default/nginx2:" -j KUBE-SEP-CT24JF4JAT2YBPCZ
COMMIT
# Completed on Mon Dec  7 00:15:26 2015
# Generated by iptables-save v1.4.21 on Mon Dec  7 00:15:26 2015
*filter
:INPUT ACCEPT [1047:249205]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1126:139088]
:DOCKER - [0:0]
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
COMMIT
# Completed on Mon Dec  7 00:15:26 2015

@maclof
Contributor Author

maclof commented Dec 6, 2015

I have just tested this with the --masquerade-all=true option and it seems to resolve the connectivity issues. Looking at the difference in the iptables rules before and after enabling this option, it now makes sense to me why this wasn't working: https://www.diffchecker.com/rilupnot
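For reference, a sketch of that kube-proxy configuration (only the flags relevant to this issue are shown; the rest come from the Ubuntu scripts):

# --masquerade-all marks all traffic hitting a service cluster IP for SNAT,
# which hides the missing flannel masquerade rule discussed below
kube-proxy --proxy-mode=iptables --masquerade-all=true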

@thockin
Member

thockin commented Dec 6, 2015

That flag is for ensuring things like this, but you don't want to use it. Are you running flannel with the --ip-masq flag?

@maclof
Contributor Author

maclof commented Dec 7, 2015

This test cluster was brought online with the cluster scripts for Ubuntu, which do not seem to set the --ip-masq option on flannel. I'll enable it on both nodes and test it out.

I assume this flannel option will get set to true once the iptables proxy mode is the default in kube-proxy?
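For reference, a sketch of the flanneld invocation with that option enabled (the etcd endpoint here is a placeholder; other flags are assumed to match the Ubuntu scripts):

flanneld --etcd-endpoints=http://<etcd-host>:4001 --ip-masq=true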

@maclof
Contributor Author

maclof commented Dec 7, 2015

I have tested this with flannel using --ip-masq=true and kube-proxy using --proxy-mode=iptables --masquerade-all=false, which does not seem to work.

Here is the output of iptables-save from node 94.23.121.2:

# Generated by iptables-save v1.4.21 on Mon Dec  7 01:08:47 2015
*nat
:PREROUTING ACCEPT [5:216]
:INPUT ACCEPT [5:216]
:OUTPUT ACCEPT [11:660]
:POSTROUTING ACCEPT [11:660]
:DOCKER - [0:0]
:FLANNEL - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SEP-CT24JF4JAT2YBPCZ - [0:0]
:KUBE-SEP-VMOA3QYF6HZH36PJ - [0:0]
:KUBE-SEP-X4TFXMB5IM6ONOT5 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-4N57TFCL4MD7ZTDA - [0:0]
:KUBE-SVC-6N4SJQIF3IX3FORG - [0:0]
:KUBE-SVC-KN7BHMGRB3FSVEMI - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.20.14.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.20.0.0/14 -j FLANNEL
-A POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4d415351 -j MASQUERADE
-A FLANNEL -d 172.20.0.0/14 -j ACCEPT
-A FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE
-A KUBE-SEP-CT24JF4JAT2YBPCZ -s 172.20.37.2/32 -m comment --comment "default/nginx2:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-CT24JF4JAT2YBPCZ -p tcp -m comment --comment "default/nginx2:" -m tcp -j DNAT --to-destination 172.20.37.2:80
-A KUBE-SEP-VMOA3QYF6HZH36PJ -s 172.20.14.2/32 -m comment --comment "default/nginx:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-VMOA3QYF6HZH36PJ -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 172.20.14.2:80
-A KUBE-SEP-X4TFXMB5IM6ONOT5 -s 94.23.121.2/32 -m comment --comment "default/kubernetes:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-X4TFXMB5IM6ONOT5 -p tcp -m comment --comment "default/kubernetes:" -m tcp -j DNAT --to-destination 94.23.121.2:6443
-A KUBE-SERVICES -d 172.16.46.239/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
-A KUBE-SERVICES -d 172.19.216.162/32 -p tcp -m comment --comment "default/nginx2: cluster IP" -m tcp --dport 80 -j KUBE-SVC-KN7BHMGRB3FSVEMI
-A KUBE-SERVICES -d 172.16.0.1/32 -p tcp -m comment --comment "default/kubernetes: cluster IP" -m tcp --dport 443 -j KUBE-SVC-6N4SJQIF3IX3FORG
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -j KUBE-SEP-VMOA3QYF6HZH36PJ
-A KUBE-SVC-6N4SJQIF3IX3FORG -m comment --comment "default/kubernetes:" -j KUBE-SEP-X4TFXMB5IM6ONOT5
-A KUBE-SVC-KN7BHMGRB3FSVEMI -m comment --comment "default/nginx2:" -j KUBE-SEP-CT24JF4JAT2YBPCZ
COMMIT
# Completed on Mon Dec  7 01:08:47 2015
# Generated by iptables-save v1.4.21 on Mon Dec  7 01:08:47 2015
*filter
:INPUT ACCEPT [26133:5051720]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [25971:5194090]
:DOCKER - [0:0]
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
COMMIT
# Completed on Mon Dec  7 01:08:47 2015

And here is the output from node 51.255.127.211:

# Generated by iptables-save v1.4.21 on Mon Dec  7 01:09:44 2015
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:FLANNEL - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SEP-CT24JF4JAT2YBPCZ - [0:0]
:KUBE-SEP-VMOA3QYF6HZH36PJ - [0:0]
:KUBE-SEP-X4TFXMB5IM6ONOT5 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-4N57TFCL4MD7ZTDA - [0:0]
:KUBE-SVC-6N4SJQIF3IX3FORG - [0:0]
:KUBE-SVC-KN7BHMGRB3FSVEMI - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.20.37.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4d415351 -j MASQUERADE
-A POSTROUTING -s 172.20.0.0/14 -j FLANNEL
-A FLANNEL -d 172.20.0.0/14 -j ACCEPT
-A FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE
-A KUBE-SEP-CT24JF4JAT2YBPCZ -s 172.20.37.2/32 -m comment --comment "default/nginx2:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-CT24JF4JAT2YBPCZ -p tcp -m comment --comment "default/nginx2:" -m tcp -j DNAT --to-destination 172.20.37.2:80
-A KUBE-SEP-VMOA3QYF6HZH36PJ -s 172.20.14.2/32 -m comment --comment "default/nginx:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-VMOA3QYF6HZH36PJ -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 172.20.14.2:80
-A KUBE-SEP-X4TFXMB5IM6ONOT5 -s 94.23.121.2/32 -m comment --comment "default/kubernetes:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-X4TFXMB5IM6ONOT5 -p tcp -m comment --comment "default/kubernetes:" -m tcp -j DNAT --to-destination 94.23.121.2:6443
-A KUBE-SERVICES -d 172.16.0.1/32 -p tcp -m comment --comment "default/kubernetes: cluster IP" -m tcp --dport 443 -j KUBE-SVC-6N4SJQIF3IX3FORG
-A KUBE-SERVICES -d 172.16.46.239/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
-A KUBE-SERVICES -d 172.19.216.162/32 -p tcp -m comment --comment "default/nginx2: cluster IP" -m tcp --dport 80 -j KUBE-SVC-KN7BHMGRB3FSVEMI
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -j KUBE-SEP-VMOA3QYF6HZH36PJ
-A KUBE-SVC-6N4SJQIF3IX3FORG -m comment --comment "default/kubernetes:" -j KUBE-SEP-X4TFXMB5IM6ONOT5
-A KUBE-SVC-KN7BHMGRB3FSVEMI -m comment --comment "default/nginx2:" -j KUBE-SEP-CT24JF4JAT2YBPCZ
COMMIT
# Completed on Mon Dec  7 01:09:44 2015
# Generated by iptables-save v1.4.21 on Mon Dec  7 01:09:44 2015
*filter
:INPUT ACCEPT [1044:282949]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1124:154539]
:DOCKER - [0:0]
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
COMMIT
# Completed on Mon Dec  7 01:09:44 2015

@thockin
Member

thockin commented Dec 7, 2015

I think you have a slightly older flannel which is missing a masquerade rule.

try running:

iptables -t nat -A POSTROUTING -d 172.20.0.0/14 -j MASQUERADE

If that fails, we might take you up on your offer of access to your machines.
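For comparison, the rule that newer flannel releases install themselves (it appears in the iptables-save output after the 0.5.5 upgrade below) masquerades only traffic entering the overlay from outside it:

-A POSTROUTING ! -s 172.20.0.0/14 -d 172.20.0.0/14 -j MASQUERADE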


@maclof
Contributor Author

maclof commented Dec 7, 2015

I've upgraded flannel on both nodes from 0.5.3 to 0.5.5 and can confirm that this works. Running iptables-save on the first node now returns:

# Generated by iptables-save v1.4.21 on Mon Dec  7 08:01:41 2015
*nat
:PREROUTING ACCEPT [6:192]
:INPUT ACCEPT [6:192]
:OUTPUT ACCEPT [22:1348]
:POSTROUTING ACCEPT [22:1348]
:DOCKER - [0:0]
:FLANNEL - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SEP-CT24JF4JAT2YBPCZ - [0:0]
:KUBE-SEP-VMOA3QYF6HZH36PJ - [0:0]
:KUBE-SEP-X4TFXMB5IM6ONOT5 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-4N57TFCL4MD7ZTDA - [0:0]
:KUBE-SVC-6N4SJQIF3IX3FORG - [0:0]
:KUBE-SVC-KN7BHMGRB3FSVEMI - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.20.14.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.20.0.0/14 -j FLANNEL
-A POSTROUTING ! -s 172.20.0.0/14 -d 172.20.0.0/14 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4d415351 -j MASQUERADE
-A FLANNEL -d 172.20.0.0/14 -j ACCEPT
-A FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE
-A KUBE-SEP-CT24JF4JAT2YBPCZ -s 172.20.37.2/32 -m comment --comment "default/nginx2:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-CT24JF4JAT2YBPCZ -p tcp -m comment --comment "default/nginx2:" -m tcp -j DNAT --to-destination 172.20.37.2:80
-A KUBE-SEP-VMOA3QYF6HZH36PJ -s 172.20.14.2/32 -m comment --comment "default/nginx:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-VMOA3QYF6HZH36PJ -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 172.20.14.2:80
-A KUBE-SEP-X4TFXMB5IM6ONOT5 -s 94.23.121.2/32 -m comment --comment "default/kubernetes:" -j MARK --set-xmark 0x4d415351/0xffffffff
-A KUBE-SEP-X4TFXMB5IM6ONOT5 -p tcp -m comment --comment "default/kubernetes:" -m tcp -j DNAT --to-destination 94.23.121.2:6443
-A KUBE-SERVICES -d 172.16.0.1/32 -p tcp -m comment --comment "default/kubernetes: cluster IP" -m tcp --dport 443 -j KUBE-SVC-6N4SJQIF3IX3FORG
-A KUBE-SERVICES -d 172.16.46.239/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
-A KUBE-SERVICES -d 172.19.216.162/32 -p tcp -m comment --comment "default/nginx2: cluster IP" -m tcp --dport 80 -j KUBE-SVC-KN7BHMGRB3FSVEMI
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -j KUBE-SEP-VMOA3QYF6HZH36PJ
-A KUBE-SVC-6N4SJQIF3IX3FORG -m comment --comment "default/kubernetes:" -j KUBE-SEP-X4TFXMB5IM6ONOT5
-A KUBE-SVC-KN7BHMGRB3FSVEMI -m comment --comment "default/nginx2:" -j KUBE-SEP-CT24JF4JAT2YBPCZ
COMMIT
# Completed on Mon Dec  7 08:01:41 2015
# Generated by iptables-save v1.4.21 on Mon Dec  7 08:01:41 2015
*filter
:INPUT ACCEPT [5785:1939229]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5745:1986189]
:DOCKER - [0:0]
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
COMMIT
# Completed on Mon Dec  7 08:01:41 2015
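Note the new rule relative to the 0.5.3 output above: -A POSTROUTING ! -s 172.20.0.0/14 -d 172.20.0.0/14 -j MASQUERADE. With that in place, a sketch of re-running the earlier check (service IPs from the kubectl get svc output above) now succeeds from both nodes:

# from 94.23.121.2 (hosts nginx only)
wget -qO- 172.19.216.162 | head -n 4
# from 51.255.127.211 (hosts nginx2 only)
wget -qO- 172.16.46.239 | head -n 4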

@thockin
Member

thockin commented Dec 7, 2015

So problem resolved, we just need to update the docs to point to 0.5.5? Please confirm.


@maclof
Contributor Author

maclof commented Dec 7, 2015

Yes @thockin, problem resolved :) thank you!
I think the docs & cluster scripts need to point to 0.5.5.

@diwakar-s-maurya

For others who face this and reach here:
I faced the same problem recently (Kubernetes 1.11.0, flannel 0.7.1, Docker 1.13.1). Switching kube-proxy to userspace proxy mode solved the problem.
--bind-address=0.0.0.0 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --hostname-override=node1 --cluster-cidr=10.254.0.0/16 --proxy-mode=userspace

After this, I upgraded to Kubernetes 1.12.4. With this version too, the same happens if I don't use --proxy-mode=userspace; however, if I want to use --proxy-mode=iptables, I have to drop the --cluster-cidr flag. So with proxy-mode iptables:
KUBE_PROXY_ARGS="--bind-address=0.0.0.0 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --hostname-override=node1 --proxy-mode=iptables"
