
Kubernetes Network Policy #22469

Closed
6 of 8 tasks
caseydavenport opened this issue Mar 3, 2016 · 61 comments
Labels
sig/network Categorizes an issue or PR as relevant to SIG Network. sig/service-catalog Categorizes an issue or PR as relevant to SIG Service Catalog.

Comments

@caseydavenport
Member

caseydavenport commented Mar 3, 2016

Raising this issue to track the status of network policy in Kubernetes so it isn't hidden within the network SIG.

Current rough plan:

  • Short-term: Define v1alpha1 network policy using API extensions.
  • Medium-term [v1.3]: Implement as Kubernetes v1beta API object. (Add NetworkPolicy API Resource #25638)
  • Medium-term: Refine and add to API based on user-feedback.
  • Long-term: Implement as first-class API in k8s.

v1beta API is outlined here: https://github.com/kubernetes/kubernetes/pull/24154/files

and here: https://docs.google.com/document/d/1qAm-_oSap-f1d6a-xRTj6xaH1sYQBfK36VyjB5XOZug/edit

Some future considerations and discussion points:

  • Egress (outgoing) network policy
  • External policy (from outside the cluster)
  • How does policy fit in to multi-tenancy model?
  • Further policing cross-namespace connectivity (e.g. allow from these labels in these namespaces).
@tomdee

tomdee commented Mar 3, 2016

CC @philips

@caseydavenport
Member Author

I don't know everyone's GitHub names, so please do summon those that I haven't included!

@thockin @dcbw @jainvipin @lxpollitt @MikeSpreitzer

@adohe-zz

adohe-zz commented Mar 4, 2016

@kubernetes/sig-network

@j3ffml j3ffml added sig/network Categorizes an issue or PR as relevant to SIG Network. team/cluster labels Mar 4, 2016
@errordeveloper
Member

cc @monadic @rade @squaremo @bboreham

@adohe-zz

adohe-zz commented Mar 7, 2016

/cc

@caseydavenport
Member Author

I think there is a problem with the current spec and NodePort services.

The current spec allows traffic from a pod's host by default to allow for kubelet health checks. This opens a window where a pod might be in an isolated namespace but can still be accessed if selected by a NodePort service, and traffic is directed at <pods_host>:<nodeport>, thus circumventing the policy in place.

@MikeSpreitzer
Member

Casey:
(1) I suspect you used angle bracket quotes and github mistook them for HTML markup, in your last sentence where you say "traffic is directed at :".
(2) I think the intent is that a Network Policy is about the relationship between the "origin client" and the "origin server" and if there is a (kube or other) proxy in between then that proxy has to play its part in implementing the policy. Or perhaps I do not understand the problem to which you are referring.

@MikeSpreitzer
Member

(I posted these on the SIG mailing list recently, but since this now seems to be the right forum, I am repeating them here.)

I drafted example network policies, for the k8s guestbook example. Find them in https://review.openstack.org/#/c/290172/ .

While doing that I realized I did not know how to express the white-listing of connections to a given TCP port on a given pod from any client on the open Internet. So I supposed an extension in which a CIDR block can be used to identify the remote peer, and used 0.0.0.0/0 to allow from anybody. Which makes me a bit unhappy, because I know some operators that want to disallow connections from some parts of their platform to the workload containers. So now I am thinking about a layered situation where the k8s operator can impose some restrictions that the app developers/deployers can not contravene.
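The extension supposed in this comment might be sketched like this (hypothetical syntax — CIDR-based peers were not in the spec at the time; the v1 API later gained a similar concept as ipBlock):

```yaml
# Hypothetical extension: identify the remote peer by CIDR block
# instead of a pod/namespace selector.
ingress:
  - from:
      - cidr: 0.0.0.0/0   # allow from anybody on the open Internet
    ports:
      - protocol: TCP
        port: 80
```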

@danwinship
Contributor

This opens a window where a pod might be in an isolated namespace but can still be accessed if selected by a NodePort service

FWIW, in OpenShift, isolation only applies to pod and service IP addresses; making your pod available on a node IP address (via NodePort, ExternalIP, HostPort, etc) explicitly bypasses isolation.

@caseydavenport
Member Author

@MikeSpreitzer Thanks, fixed up my comment. It might be good enough to say that if you're using some sort of proxy (in this case NodePort) then you need to make sure it works with your policy. I guess my thought was that 1) we should document which of the standard k8s proxies are compatible and 2) it would be nice if the proxies that ship with k8s are compatible with policy.

If you take a look at the google doc linked in this issue, at the very bottom there is an example for allowing all traffic sources access to a specific protocol / port. It basically looks like this:

ingress:
    ports:
      - protocol: TCP
        port: 80

@MikeSpreitzer
Member

@caseydavenport: thanks for the reference, I missed the example with no "from". I will have to update my impl outline, I had assumed that no "from" is the same as an empty "from".
I am not sure I would consider node port to be a "proxy". It seems more of an "alias" to me. I think the design intent is that an alias does not introduce an exception to the policy.

@MikeSpreitzer
Member

I assume that a NodePort service is implemented using Docker port mapping --- and Docker has two available implementations of that: a userspace proxy and iptables hacking (deja vu all over again). The design intent is that a network policy is about the relationship between origin client and origin server regardless of whatever proxying and/or aliasing the platform puts in between them. So an implementation that works with Docker's userspace port mapping of NodePort services has to get in iptables bed with Docker :-(

@bprashanth
Contributor

I assume that a NodePort service is implemented using Docker port mapping

kube-proxy opens the node port on every node in the cluster and sends traffic to the endpoint ips.
HostPort is still docker.

@caseydavenport
Member Author

v1beta1 API has been merged to master as of #25638

@HuKeping

HuKeping commented May 25, 2016

Hi, team:

I see the docs about network policy say that


type NetworkPolicyIngressRule struct {
...
        From *[]NetworkPolicyPeer `json:"from,omitempty"`
}

type NetworkPolicyPeer struct {
        // Exactly one of the following must be specified.

        // This is a label selector which selects Pods in this namespace.
        // This field follows standard unversioned.LabelSelector semantics.
        // If not provided, this selector selects no pods.
        // If present but empty, this selector selects all pods in this namespace.
        PodSelector *unversioned.LabelSelector `json:"podSelector,omitempty"`
...
        NamespaceSelector *unversioned.LabelSelector `json:"namespaceSelector,omitempty"`
}

I'm a little confused by "If present but empty, this selector selects all pods in this namespace".
What does "this namespace" mean? For example:

podSelector:
 - tier: database
ingress:
  - ports:
    - port: 80
      protocol: TCP
    - port: 50
      protocol: UDP
    from:
      - pods:  # It is present but empty, this selector selects all pods in **WHICH** namespace?
      - namespaces:
          blah: bar

Does it mean the namespace with label blah:bar or the namespace the same as the selected pod which with label tier:database?

Thanks.

@caseydavenport
Member Author

@HuKeping - "this namespace" refers to the namespace that is the parent of the NetworkPolicy.

So, if you create a NetworkPolicy in namespace "foo", podSelector can only match pods in namespace "foo".
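To illustrate that scoping, a sketch (names and labels are hypothetical): a policy created in namespace "foo" evaluates both its top-level podSelector and its from-peers against pods in "foo" only.

```yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-frontend    # hypothetical name
  namespace: foo          # the policy's parent namespace
spec:
  podSelector:
    matchLabels:
      role: db            # matches only pods labeled role=db in "foo"
  ingress:
    - from:
        - podSelector:    # also evaluated against pods in "foo" only
            matchLabels:
              role: frontend
```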

@HuKeping

HuKeping commented May 26, 2016

Got it, thanks @caseydavenport. Sorry for the dumb question again. For the Ingress field of structure NetworkPolicySpec:

type NetworkPolicySpec struct {
...
        PodSelector unversioned.LabelSelector `json:"podSelector"`
...
        // If this field is ****empty**** then this NetworkPolicy does not affect
        // ingress isolation.
        Ingress []NetworkPolicyIngressRule `json:"ingress,omitempty"`
}

I just want to make sure whether empty means "not present" or "[]". If I implement an agent for network policy, how should I interpret these two:

#example 1:

kind: NetworkPolicy
apiVersion: net.alpha.kubernetes.io/v1beta1
metadata:
 name: allow-some 
 namespace: myns
spec:
 podSelector:
     matchLabels:
       role: frontend
                                      // Not present

And

#example 2:

kind: NetworkPolicy
apiVersion: net.alpha.kubernetes.io/v1beta1
metadata:
 name: allow-some 
 namespace: myns
spec:
 podSelector:
     matchLabels:
       role: frontend
 ingress:                             // empty

I suppose one of them should be considered as "allow all traffic to pods in namespace myns with the label role:frontend", so what about the other?

The docs also give an example as follows:

kind: NetworkPolicy
apiVersion: extensions/v1beta1 
metadata:
  name: allow-all
spec:
  podSelector:    

It was described as "Allow all traffic to all pods in this namespace." There, ingress is not present, but according to the code comment quoted above, that should be the behaviour when ingress is empty. That's what confuses me.

Thanks a lot.

@shouhong

I tried the K8S 1.3 Network Policy + Calico implementation. Once I set network isolation for a namespace, the pods in that namespace can only be accessed by the local K8S node. Then I can create Network Policies to make some pods in the namespace accessible to some other pods. All of this works fine.

Now the question is: is it possible to make some pods in the namespace accessible from other K8S nodes, or even from nodes outside of the K8S cluster? Thanks!

@MikeSpreitzer
Member

@HuKeping: for the ingress field of a NetworkPolicy, being absent and being present as an empty list have the same semantics. They both mean that particular NetworkPolicy says nothing.

There was a bug in example 3 of https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/network-policy.md --- I have fixed it recently.
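Under that reading, these two specs (a sketch; labels are hypothetical) say the same thing:

```yaml
# ingress field absent:
spec:
  podSelector:
    matchLabels:
      role: frontend

# ingress field present as an empty list -- per this comment,
# the same semantics as the form above:
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress: []
```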

@MikeSpreitzer
Member

@shouhong: Example 3 of https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/network-policy.md --- which originally had a bug but is now fixed --- shows one way to write a NetworkPolicy that allows all ingress regardless of source.

@shouhong

shouhong commented Aug 17, 2016

Thanks! I tried as below and it works! Any source can access the selected pod's 8080 port.

ingress:
  - ports:
     - protocol: tcp
       port: 8080

But it seems there is a bug with multiple ports. The specification below does not work. My K8S version is 1.3.0.

ingress:
  - ports:
     - protocol: tcp
       port: 8080
     - protocol: http
       port: 80

@devurandom

Are you sure http is a valid protocol?

@MikeSpreitzer
Member

@devurandom found it. http is NOT a valid protocol.
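A corrected multi-port spec would use only transport protocols (TCP or UDP; HTTP is an application protocol, so it is not a valid value here). A sketch with hypothetical ports:

```yaml
ingress:
  - ports:
      - protocol: TCP   # transport protocol, not "http"
        port: 8080
      - protocol: TCP
        port: 80
      - protocol: UDP
        port: 53
```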

@bgrant0607 bgrant0607 added the sig/service-catalog Categorizes an issue or PR as relevant to SIG Service Catalog. label Aug 30, 2016
@thockin
Member

thockin commented May 5, 2017 via email

@farcaller

I'm looking forward to egress rules.

@thockin
Member

thockin commented May 5, 2017 via email

@farcaller

I want to limit network access of specific pods to specific destination IP addresses, in case the application inside the pod is compromised.

@thockin
Member

thockin commented May 5, 2017 via email

@xeor

xeor commented May 5, 2017

@thockin: sorry for being a little vague.

I'm currently experimenting with kubernetes in a small on-premise cluster using canal (calico/flannel) for networking.

Looking at the Calico policy, there are at least a couple of things missing from the Kubernetes networkPolicy which I can already tell (having just started looking into network policies) I am going to miss.

  • egress
  • define to, not just from
  • use network/subnet in rules as well as labels.
  • A way to define a deny and log action

Or is this something calico should provide as a 3rd party type?

@farcaller

You want to apply out-going firewall rules, based on destination IP address (rather than hostname), varied by source pod?

I'd love to do it by host, but that brings a whole lot of additional complications, so it's fine if we start with IP-only.

@rikatz
Contributor

rikatz commented May 5, 2017

@xeor About the 'to': you define which objects the rule applies to. See the podSelector attribute here.

Anyway, I don't think an issue is the best place for these questions; instead I would suggest using the Kubernetes Slack channel :)

Thanks

@thockin
Member

thockin commented May 5, 2017 via email

@thockin
Member

thockin commented May 5, 2017 via email

@xeor

xeor commented May 5, 2017

@thockin, by egress, at least in my case, I mean pod-to-outside based on a subnet. In short, I want a pod to not have access to certain parts of my network. Or a pod that can only get out to the internet.

@thockin
Member

thockin commented May 5, 2017 via email

@lxpollitt

Looking at the calico policy, there are at least a couple of things missing from kubernetes networkPolicy, which I already now (just started looking into network policies) am going to miss.

  • egress
  • define to, not just from
  • use network/subnet in rules as well as labels.
  • A way to define a deny and log action
    Or is this something calico should provide as a 3rd party type?

@xeor you should still be able to use Calico policy objects (specified via calicoctl) alongside k8s policy objects (specified via kubectl) today. So you don't "lose" these features. Support for egress policies via annotations and/or TPRs, so you can use kubectl for those in the future, is also planned.

@sege

sege commented May 9, 2017

I need to understand what people want to achieve with egress.

  • pod-to-pod egress?
  • pod-to-outside egress?
  • based on target IP or based on hostname?

Mostly pod-to-outside egress in our case. We have a lot of pod-to-legacy-data-source traffic. We can't implement SDN on those old sources, and they might even be external sources. Both hostname- and IP-based would be nice.

@rikatz
Contributor

rikatz commented May 9, 2017

The same as @sege here.

Actually most of our egress traffic is to legacy servers (including but not limited to Oracle and Mainframes), based on hostnames and IP Addresses.

An additional case is systems that must not communicate with the Internet and should have default-deny egress traffic.

We use Calico by hand here to limit egress traffic, and I know there's a PR in Calico to read namespace annotations (or something like that) to automatically create those rules.

But I think the most correct path here is for the Namespace annotation net.beta.kubernetes.io/network-policy to support egress isolation as well, for the NetworkPolicy object to support both ingress and egress rules, and for rules to reference IP addresses (not only other Kubernetes objects).

EDIT: The annotation of an egress isolation policy is independent of Kubernetes, and shall be interpreted by the policy controller of each solution :)

The key thing here, I think, is that the API server supports network-policy objects containing egress rules as well.

Thanks!!
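For reference, egress rules along these lines did eventually land in the v1 API (networking.k8s.io/v1). A sketch (names, namespace, and subnet are hypothetical) restricting a namespace's pods to a single legacy subnet might look like:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: limit-egress        # hypothetical name
  namespace: legacy-clients # hypothetical namespace
spec:
  podSelector: {}           # all pods in the namespace
  policyTypes:
    - Egress                # this policy isolates egress only
  egress:
    - to:
        - ipBlock:
            cidr: 10.20.0.0/16   # hypothetical legacy-server subnet
```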

@danwinship
Contributor

NetworkPolicy egress support is #26720 although it has stalled while we finalize NetworkPolicy v1...

I think my egress NetworkPolicy use cases doc (https://docs.google.com/document/d/1-Cq_Y-yuyNklvdnaZF9Qngl3xe0NnArT7Xt_Wno9beg) covers all the things people have mentioned in the last few comments here.

@coresolve

Looks like #26720 was closed due to inactivity. Are folks still working on it?

@caseydavenport
Member Author

I think we'll re-open it, or open another, if/when that becomes active again. For now, no one is actively working on it, so it's probably right to close.

@ahmetb
Member

ahmetb commented Jul 26, 2017

@caseydavenport Do you mind bringing the design proposal up to date: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network-policy.md. It appears like the NamespaceSpec proposal did not make it and the default-deny got implemented with selectors on NetworkPolicy. (Although, I am not sure if proposals are meant to be updated later on to reflect the actual implementation.)

@caseydavenport
Member Author

I was under the impression that proposals were not living documents, though I could be wrong.

For example, not sure if egress policy will get a new proposal or a modification to the existing one.

@thockin any thoughts? Happy to update if we think that's the right process.

@ahmetb
Member

ahmetb commented Jul 26, 2017

I would assume so. I was just reading the proposal to get more info about how the feature works. I realized some of the details of how the NetworkPolicy behaves live only in the proposal and not in the docs. I’m hoping to study it further, release a comprehensive guide, and update the documentation.

@phsiao
Contributor

phsiao commented Nov 30, 2017

Want to see if now is a good time to resume the discussion around supporting hostname as a peer for both ingress and egress. Our use case is restricting both ingress and egress, where the hostname can return multiple A records that are managed externally, for example CDN hostnames.

I suspect a major concern is how to handle consistency --- the NPC and the pod may resolve different IPs for the same hostname if the DNS record returns a random subset of IPs (for example, S3), but I am also not convinced that is the norm. As a policy option I think it is a reasonable addition to ipBlock.
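For comparison, the existing ipBlock peer in the v1 API already covers the static-CIDR side of this; a hostname peer would effectively have to resolve to blocks like the sketch below (CIDR is hypothetical), with the consistency caveat described above.

```yaml
ingress:
  - from:
      - ipBlock:
          cidr: 203.0.113.0/24   # hypothetical CDN address range;
                                 # a hostname peer would resolve to
                                 # a set of such addresses
```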

@cmluciano

@caseydavenport Is this issue stale?

@caseydavenport
Member Author

@cmluciano yes, I think we should close this and use the features issue or new separate issues for other topics relating to NP.

/close

@caseydavenport
Member Author

@phsiao could you open another feature request, or send an email on the sig-network mailing list to have that discussion?

I don't think this is the right place anymore.

@phsiao
Contributor

phsiao commented Dec 6, 2017

Will do a feature request ticket.

@dmitris

dmitris commented Mar 2, 2018

For reference, the Network Policy proposal is at https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network/network-policy.md (it points to this ticket in References).
