Kubernetes Network Policy #22469
CC @philips |
I don't know everyone's GitHub names, so please do summon those that I haven't included! |
@kubernetes/sig-network |
/cc |
I think there is a problem with the current spec and NodePort services. The current spec allows traffic from a pod's host by default, to allow for kubelet health checks. This opens a window where a pod might be in an isolated namespace but can still be accessed if selected by a NodePort service, since traffic directed at the node port is forwarded from the pod's host and is therefore allowed. |
Casey: |
(I posted these on the SIG mailing list recently, but since this now seems to be the right forum, I am repeating them here.) I drafted example network policies for the k8s guestbook example; find them in https://review.openstack.org/#/c/290172/ . While doing that I realized I did not know how to express white-listing of connections to a given TCP port on a given pod from any client on the open Internet. So I proposed an extension in which a CIDR block can be used to identify the remote peer, and used 0.0.0.0/0 to allow from anybody. This makes me a bit unhappy, because I know some operators that want to disallow connections from some parts of their platform to the workload containers. So now I am thinking about a layered situation where the k8s operator can impose some restrictions that the app developers/deployers cannot contravene. |
FWIW, in OpenShift, isolation only applies to pod and service IP addresses; making your pod available on a node IP address (via NodePort, ExternalIP, HostPort, etc) explicitly bypasses isolation. |
@MikeSpreitzer Thanks, fixed up my comment. It might be good enough to say that if you're using some sort of proxy (in this case NodePort) then you need to make sure it works with your policy. I guess my thought was that 1) we should document which of the standard k8s proxies are compatible and 2) it would be nice if the proxies that ship with k8s are compatible with policy. If you take a look at the google doc linked in this issue, at the very bottom there is an example for allowing all traffic sources access to a specific protocol / port. It basically looks like this:
|
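The example Casey refers to (an ingress rule that lists ports but has no "from" clause, so any source is allowed) presumably looked roughly like the sketch below. This is a reconstruction in the v1beta1 shape used elsewhere in this thread, not a quote from the linked doc; the name, namespace, labels, and port are placeholders:

```yaml
kind: NetworkPolicy
apiVersion: net.alpha.kubernetes.io/v1beta1
metadata:
  name: allow-tcp-80-from-anywhere   # placeholder name
  namespace: myns
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
  - ports:            # no "from" clause here, so any source is allowed
    - protocol: TCP
      port: 80
```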
@caseydavenport: thanks for the reference, I missed the example with no "from". I will have to update my impl outline, I had assumed that no "from" is the same as an empty "from". |
I assume that a NodePort service is implemented using Docker port mapping --- and Docker has two available implementations of that: a userspace proxy and iptables hacking (deja vu all over again). The design intent is that a network policy is about the relationship between origin client and origin server, regardless of whatever proxying and/or aliasing the platform puts in between them. So an implementation that works with Docker's userspace port mapping of NodePort services has to get in the iptables bed with Docker :-( |
kube-proxy opens the node port on every node in the cluster and sends traffic to the endpoint ips. |
v1beta1 API has been merged to master as of #25638 |
Hi, team: I see the docs about the network-policy says that
I'm a little confused about the
Does it mean the namespace with label Thanks. |
@HuKeping - "this namespace" refers to the namespace that is the parent of the NetworkPolicy. So, if you create a NetworkPolicy in namespace "foo", the policy applies to pods in "foo". |
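As a hypothetical illustration of that scoping (all names are made up; the shape follows the v1beta1 examples used elsewhere in this thread), a policy created in namespace "foo" selects pods in "foo" only, and its pod selectors are also evaluated against pods in "foo":

```yaml
kind: NetworkPolicy
apiVersion: net.alpha.kubernetes.io/v1beta1
metadata:
  name: allow-frontend     # placeholder name
  namespace: foo           # "this namespace" in the docs refers here
spec:
  podSelector:             # selects pods in "foo" only
    matchLabels:
      role: backend
  ingress:
  - from:
    - podSelector:         # also evaluated against pods in "foo"
        matchLabels:
          role: frontend
```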
Got it, thanks @caseydavenport, sorry for the dumb question again. I just want to make sure about these two. Example 1:

```yaml
kind: NetworkPolicy
apiVersion: net.alpha.kubernetes.io/v1beta1
metadata:
  name: allow-some
  namespace: myns
spec:
  podSelector:
    matchLabels:
      role: frontend
  # ingress not present
```

And example 2:

```yaml
kind: NetworkPolicy
apiVersion: net.alpha.kubernetes.io/v1beta1
metadata:
  name: allow-some
  namespace: myns
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress: # empty
```

I suppose one of them should be considered as "allow all traffic to pods in namespace myns". The docs also give an example as follows:

```yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-all
spec:
  podSelector:
```

It was considered as "Allow all traffic to all pods in this namespace." It seems that the docs do not make clear which of the two forms above this corresponds to. Thanks a lot. |
I tried the K8S 1.3 Network Policy + Calico implementation. Once I set network isolation for a namespace, the pods in that namespace can only be accessed by the local K8S node. Then I can create Network Policies so that some pods in the namespace can be accessed by some other pods. All of this is fine. Now the question is: is it possible to make some pods in the namespace accessible from other K8S nodes, or even from nodes outside of the K8S cluster? Thanks! |
@HuKeping: there was a bug in example 3 of https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/network-policy.md --- I have fixed it recently. |
@shouhong: Example 3 of https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/network-policy.md --- which originally had a bug but is now fixed --- shows one way to write a policy that allows such access. |
Thanks! I tried the specification below and it works: any source can access the selected pod's port 8080.
But there seems to be a bug with multiple ports: the specification below does not work. My K8S version is 1.3.0.
|
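The two specifications were lost from this comment. Based on the surrounding description, they presumably looked like the following sketches (the `app: myapp` labels and the second port are assumptions; the single-port form was reported to work, the multi-port form was not):

```yaml
# Reported working: any source can reach port 8080 on the selected pods.
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-8080
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - ports:
    - protocol: TCP
      port: 8080
---
# Reported as not working in 1.3.0: two ports in one ingress rule.
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-8080-8443
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 8443
```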
Are you sure |
@devurandom found it. |
"More powerful" isn't helpful. Can you say more about what actual problems you are trying to solve?

On Fri, May 5, 2017 at 3:39 AM, Lars Solberg ***@***.***> wrote:

> Is there any time estimate when we will get more powerful networkPolicy in kubernetes, or is this getting dragged behind because of 3rd party solutions like what calicoctl gives us?
|
I'm looking forward to egress rules. |
Can you explain what you hope to achieve, rather than the mechanism you think you might achieve it by? As concretely as possible, please.

On Fri, May 5, 2017 at 8:49 AM, Vladimir Pouzanov ***@***.***> wrote:

> I'm looking forward to egress rules.
|
I want to limit network access of specific pods to specific destination ip addresses, in case if the application inside of the pod is compromised. |
You want to apply out-going firewall rules, based on destination IP address (rather than hostname), varied by source pod? Is that right? Usually when we dig into this, people want hostnames (e.g. github.com), not IP addresses.

On Fri, May 5, 2017 at 9:39 AM, Vladimir Pouzanov ***@***.***> wrote:

> I want to limit network access of specific pods to specific destination ip addresses, in case the application inside of the pod is compromised.
|
@thockin: sorry for being a little vague. I'm currently experimenting with kubernetes in a small on-premise cluster using canal (calico/flannel) for networking. Looking at the calico policy, there are at least a couple of things missing from kubernetes networkPolicy which I can already tell (having just started looking into network policies) I am going to miss:

- egress
- define "to", not just "from"
- use network/subnet in rules as well as labels
- a way to define a deny and log action

Or is this something calico should provide as a 3rd party type? |
I'd love to do it by host, but that brings a whole lot of additional complications, so it's fine if we start with IP-only. |
On Fri, May 5, 2017 at 12:18 PM, Lars Solberg ***@***.***> wrote:

> egress

I need to understand what people want to achieve with egress.
- pod-to-pod egress?
- pod-to-outside egress?
- based on target IP or based on hostname?

> define to, not just from

We have "to" in the form of podSelector.

> use network/subnet in rules as well as labels.

"from CIDR" has been proposed, we just wanted to GA before adding more.

> A way to define a deny and log action

Logging is not a universal feature, so we can't easily add it to NetPolicy, I think.

> Or is this something calico should provide as a 3rd party type?
|
On Fri, May 5, 2017 at 12:29 PM, Vladimir Pouzanov ***@***.***> wrote:

>> You want to apply out-going firewall rules, based on destination IP address (rather than hostname), varied by source pod?
>
> I'd love to do it by host, but that brings a whole lot of additional complications, so it's fine if we start with IP-only.

You're in the vast minority of people I speak to. Digging into use-cases it is almost always of the form "allow pod to hit github but nothing else".
|
@thockin, by egress, at least in my case, I mean pod-to-outside based on (also in my case) a subnet. In short, I want a pod to not have access to certain parts of my network, or a pod that can only get out to the internet. |
> I want a pod to not have access to certain parts of my network

Thanks. That is much clearer.
|
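For readers landing here later: the "pod can only reach certain subnets" use case discussed above is expressible in the networking.k8s.io/v1 API that eventually shipped, via egress rules with ipBlock. A sketch, not something that existed at the time of this thread; the name, namespace, labels, and CIDRs are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress   # placeholder name
  namespace: myns
spec:
  podSelector:
    matchLabels:
      role: worker
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0   # allow the internet...
        except:
        - 10.0.0.0/8      # ...but block this internal network
```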
@xeor you should still be able to use Calico policy objects (specified via calicoctl) alongside k8s policy objects (specified via kubectl) today, so you don't "lose" these features. Support for egress policies via annotations and/or TPRs, so you can use kubectl for those in the future, is also planned. |
Mostly pod-to-outside egress in our case. We have a lot of pod-to-legacy-data-source traffic. We can't implement SDN on those old sources, and they might even be external. Both hostname-based and IP-based would be nice. |
The same as @sege here. Actually most of our egress traffic is to legacy servers (including but not limited to Oracle and mainframes), based on hostnames and IP addresses. An additional case is systems that shall not communicate with the Internet and should have egress traffic denied by default. We use calico by hand here to limit egress traffic, and I know there's a PR in Calico to read namespace annotations (or something like that) to automatically create those rules. But I think the most correct path here is to support this in a Namespace annotation. EDIT: The annotation of an egress isolation policy is independent of Kubernetes, and shall be interpreted by the policy-controller of each solution :) The point here, I think, is that the API server should also support network-policy objects containing egress rules. Thanks!! |
NetworkPolicy egress support is #26720 although it has stalled while we finalize NetworkPolicy v1... I think my egress NetworkPolicy uses cases doc (https://docs.google.com/document/d/1-Cq_Y-yuyNklvdnaZF9Qngl3xe0NnArT7Xt_Wno9beg) covers all the things people have mentioned in the last few comments here. |
looks like #26720 closed due to inactivity. Are folks still working on it? |
I think we'll re-open it, or open another, if/when that becomes active again. For now, no one is actively working on it, so it's probably right to close. |
@caseydavenport Do you mind bringing the design proposal up to date: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network-policy.md. It appears like the NamespaceSpec proposal did not make it and the default-deny got implemented with selectors on NetworkPolicy. (Although, I am not sure if proposals are meant to be updated later on to reflect the actual implementation.) |
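The "default-deny got implemented with selectors on NetworkPolicy" approach mentioned here is the one that shipped in v1: an empty podSelector selects every pod in the namespace, and listing Ingress in policyTypes with no ingress rules denies all inbound traffic. A sketch of that v1 form (the name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # placeholder name
spec:
  podSelector: {}   # empty selector: applies to all pods in the namespace
  policyTypes:
  - Ingress         # no ingress rules listed, so all ingress is denied
```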
I was under the impression that proposals were not living documents, though I could be wrong. For example, not sure if egress policy will get a new proposal or a modification to the existing one. @thockin any thoughts? Happy to update if we think that's the right process. |
I would assume so. I was just reading the proposal to get more info about how the feature works. I realized some of the details of how the NetworkPolicy behaves lives only in the proposal and not in the docs. I’m hoping to study it further and release a comprehensive guide and update the documentation. |
Want to see if now is a good time to resume the discussion around supporting hostname as a peer for both ingress and egress. Our use case is restricting both ingress and egress, where the hostname can return multiple A records that are managed externally, for example CDN hostnames. I suspect a major concern is how to handle consistency --- the NPC and the pod may resolve different IPs for the same hostname if the DNS record returns a random subset of IPs, for example S3, but I am also not convinced that is the norm. As a policy I think it is a reasonable option in addition to ipBlock. |
@caseydavenport Is this issue stale? |
@cmluciano yes, I think we should close this and use the features issue or new separate issues for other topics relating to NP. /close |
@phsiao could you open another feature request, or send an email on the sig-network mailing list to have that discussion? I don't think this is the right place anymore. |
Will do a feature request ticket. |
for reference, Network Policy proposal is at https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network/network-policy.md (points to this ticket in References) |
Raising this issue to track the status of network policy in Kubernetes so it isn't hidden within the network SIG.
Current rough plan:
v1beta API is outlined here: https://github.com/kubernetes/kubernetes/pull/24154/files
and here: https://docs.google.com/document/d/1qAm-_oSap-f1d6a-xRTj6xaH1sYQBfK36VyjB5XOZug/edit
Some future considerations and discussion points: