
the "publicIPs" field should be validated for actual IP address values #4897

Closed
miabbott opened this issue Feb 27, 2015 · 8 comments
@miabbott

Setup

  • RHEL Atomic
  • kubernetes-0.9.0-0.3.git96af0c3.el7.x86_64

Description
When creating a simple service, I tried using a hostname for the publicIPs field, as shown below:

{
    "apiVersion": "v1beta1",
    "containerPort": 80,
    "id": "frontend",
    "kind": "Service",
    "labels": {
        "name": "frontend"
    },
    "port": 80,
    "publicIPs": [
        "kube-minion1"
    ],
    "selector": {
        "name": "apache"
    }
}

When I fed that service to kubectl, it happily created it:

# kubectl create -f frontend.json 
frontend

However, when the minion tried to start up the service, kube-proxy was unable to create the necessary iptables rules:

Feb 27 14:31:34 atomic-00.localdomain systemd[1]: Starting Kubernetes Kube-Proxy Server...
Feb 27 14:31:34 atomic-00.localdomain systemd[1]: Started Kubernetes Kube-Proxy Server.
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.668187    8559 proxier.go:782] Choosing interface ens3 for from-host portals
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.668323    8559 proxier.go:787] Interface ens3 = 192.168.122.178/24
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.668332    8559 proxier.go:326] Initializing iptables
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.669263    8559 iptables.go:186] running iptables -C [PREROUTING -t nat -j KUBE-PROXY]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.671206    8559 iptables.go:186] running iptables -C [OUTPUT -t nat -j KUBE-PROXY]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.672179    8559 iptables.go:186] running iptables -F [KUBE-PROXY -t nat]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.673575    8559 iptables.go:186] running iptables -X [KUBE-PROXY -t nat]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.675136    8559 iptables.go:186] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.677610    8559 iptables.go:186] running iptables -C [PREROUTING -t nat -j KUBE-PORTALS-CONTAINER]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.678870    8559 iptables.go:186] running iptables -N [KUBE-PORTALS-HOST -t nat]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.680732    8559 iptables.go:186] running iptables -C [OUTPUT -t nat -j KUBE-PORTALS-HOST]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.681917    8559 iptables.go:186] running iptables -F [KUBE-PORTALS-CONTAINER -t nat]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.682885    8559 iptables.go:186] running iptables -F [KUBE-PORTALS-HOST -t nat]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.684109    8559 proxy.go:89] Using api calls to get config http://kube-master:8080
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.691305    8559 roundrobin.go:214] LoadBalancerRR: Setting endpoints for kubernetes to [192.168.122.61:8080]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.691379    8559 roundrobin.go:195] Delete endpoint 192.168.122.61:8080 for service: kubernetes
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.691393    8559 roundrobin.go:214] LoadBalancerRR: Setting endpoints for kubernetes-ro to [192.168.122.61:7080]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.691399    8559 roundrobin.go:195] Delete endpoint 192.168.122.61:7080 for service: kubernetes-ro
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.691404    8559 roundrobin.go:214] LoadBalancerRR: Setting endpoints for frontend to [18.0.79.2:80]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.691408    8559 roundrobin.go:195] Delete endpoint 18.0.79.2:80 for service: frontend
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.693986    8559 proxier.go:480] Adding new service "frontend" at 10.254.195.231:80/TCP (local :0)
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.694202    8559 proxier.go:443] Proxying for service "frontend" on TCP port 39694
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.695492    8559 iptables.go:186] running iptables -C [KUBE-PORTALS-CONTAINER -t nat -m comment --comment frontend -p tcp -m tcp -d 10.254.195.231/32 --dport 80 -j REDIRECT --to-ports 39694]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.697206    8559 iptables.go:186] running iptables -A [KUBE-PORTALS-CONTAINER -t nat -m comment --comment frontend -p tcp -m tcp -d 10.254.195.231/32 --dport 80 -j REDIRECT --to-ports 39694]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.698675    8559 proxier.go:552] Opened iptables from-containers portal for service "frontend" on TCP 10.254.195.231:80
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.699826    8559 iptables.go:186] running iptables -C [KUBE-PORTALS-HOST -t nat -m comment --comment frontend -p tcp -m tcp -d 10.254.195.231/32 --dport 80 -j DNAT --to-destination 192.168.122.178:39694]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.702526    8559 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment frontend -p tcp -m tcp -d 10.254.195.231/32 --dport 80 -j DNAT --to-destination 192.168.122.178:39694]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.704000    8559 proxier.go:563] Opened iptables from-host portal for service "frontend" on TCP 10.254.195.231:80
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: I0227 14:31:34.705133    8559 iptables.go:186] running iptables -C [KUBE-PORTALS-CONTAINER -t nat -m comment --comment frontend -p tcp -m tcp -d <nil>/32 --dport 80 -j REDIRECT --to-ports 39694]
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: E0227 14:31:34.707053    8559 proxier.go:548] Failed to install iptables KUBE-PORTALS-CONTAINER rule for service "frontend"
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: E0227 14:31:34.707074    8559 proxier.go:496] Failed to open portal for "frontend": error checking rule: exit status 2: iptables v1.4.21: host/network `<nil>' not found
Feb 27 14:31:34 atomic-00.localdomain kube-proxy[8559]: Try `iptables -h' or 'iptables --help' for more information.
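
As an aside (this explanation is inferred, not stated in the original report): `<nil>` is how Go prints a nil net.IP, and net.ParseIP returns nil for anything that is not a literal IP address. If the proxier parses each publicIP that way, a hostname ends up as nil and gets interpolated straight into the -d argument. A minimal demonstration:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ip := net.ParseIP("kube-minion1") // not an IP literal, so ParseIP returns nil
        fmt.Printf("-d %v/32\n", ip)      // prints "-d <nil>/32", matching the failing rule
    }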

When I changed the value of publicIPs to an actual IP address, the iptables rules on the minions were successfully created.
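
For example, using the minion's address from the log above (the exact IP I used isn't recorded here, so 192.168.122.178 is illustrative):

    "publicIPs": [
        "192.168.122.178"
    ],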

This leads me to believe there should be some validation of the publicIPs field to make sure that actual IP addresses are provided.
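
A minimal sketch of what such a check could look like, assuming a plain net.ParseIP test is sufficient (validatePublicIPs is a hypothetical helper, not the actual Kubernetes validation code):

    package validation

    import (
        "fmt"
        "net"
    )

    // validatePublicIPs is a hypothetical helper: it rejects any entry in
    // publicIPs that does not parse as an IPv4 or IPv6 address literal.
    func validatePublicIPs(ips []string) error {
        for _, s := range ips {
            if net.ParseIP(s) == nil {
                return fmt.Errorf("publicIPs: %q is not a valid IP address", s)
            }
        }
        return nil
    }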

@satnam6502
Contributor

Some related discussion and material:

I don't know the answer to your question, but this may be useful context. It seems strange to have kube-minion1 as the value for a field that is supposed to represent an IP address.

@miabbott
Author

Honestly, it was user error on my part.

I was going through some training material where hostnames or IP addresses could be used in the kubernetes config files. For example:

KUBE_MASTER="--master=http://kube-master:8080"

or

KUBELET_ADDRESSES="--machines=kube-minion1,kube-minion2"

I assumed the same applied to the services JSON and I was wrong. 😞

However, I think this sort of validation may be helpful for newcomers who aren't extremely familiar with kube.

@brendandburns
Contributor

Yep, definitely a bug in service validation. Will work on a fix. Thanks for the report!

Brendan

@bgrant0607 bgrant0607 added priority/backlog Higher priority than priority/awaiting-more-evidence. area/api Indicates an issue on api area. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Feb 28, 2015
@bgrant0607 bgrant0607 added this to the v1.0 milestone Feb 28, 2015
@fgrzadkowski
Contributor

Brendan - are you working on this (no assignee)? If not, I can fix it.

@fgrzadkowski
Contributor

Apparently publicIPs can be a hostname sometimes (e.g. for AWS ELB). See #5228 and #5224 for more details.

Closing this issue.

@fgrzadkowski
Contributor

FYI - #5228 added a comment to types.go about the expected value of publicIPs, so it should be more obvious what to put there in the future.

@jayunit100
Member

Is 0.0.0.0 valid?

@fgrzadkowski
Contributor

No, it's not. See #5319 and #5508.
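
(Illustrative only, not the code from those PRs: 0.0.0.0 parses as a syntactically valid IP, so a plain net.ParseIP check would accept it; Go's net.IP.IsUnspecified catches it separately.)

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ip := net.ParseIP("0.0.0.0")
        fmt.Println(ip != nil)          // true: syntactically a valid IP
        fmt.Println(ip.IsUnspecified()) // true: the unspecified address, so reject it
    }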
