Network policy proposal #24154
Conversation
```go
type NetworkPolicySpec struct {
	// Selects the pods which this NetworkPolicy object
	// applies to.
	PodSelector map[string]string `json:"podSelector,omitempty"`
```
Must be new-style label selector. See Job, ReplicaSet, Deployment, or DaemonSet.
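For reference, a new-style selector supports `matchLabels` and `matchExpressions` rather than a bare label map. A rough sketch of the matching semantics (illustrative Python, not the actual Kubernetes apimachinery code; the dict shapes here are assumptions):

```python
def matches(selector, labels):
    """Evaluate a new-style label selector against a pod's labels (sketch only)."""
    # All matchLabels entries must be present with exactly matching values.
    for key, value in selector.get("matchLabels", {}).items():
        if labels.get(key) != value:
            return False
    # All matchExpressions must hold as well (they are ANDed with matchLabels).
    for expr in selector.get("matchExpressions", []):
        key, op, values = expr["key"], expr["operator"], expr.get("values", [])
        if op == "In" and labels.get(key) not in values:
            return False
        if op == "NotIn" and labels.get(key) in values:
            return False
        if op == "Exists" and key not in labels:
            return False
        if op == "DoesNotExist" and key in labels:
            return False
    return True
```

The extra expressiveness (`In`, `NotIn`, `Exists`, `DoesNotExist`) is what a plain `map[string]string` cannot represent.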
Applies to all selectors.
Besides the trivial comments I made on the commit, LGTM
Force-pushed from fdaf81c to 1c097ec
```yaml
namespace:
spec:
  podSelector: // Standard label selector - selects pods.
  ingress: // List of ingress rules (optional).
```
Experiments have shown that a single selector per policy element leads to a huge number of policy elements to maintain, since each unique combination of pod-selection labels requires a separate object. In particular, if the labels change, a ton of files need to be touched.
Allowing both a selector per element and the ability to extend (or negate parts of) it on a per-rule basis drastically reduced the number of required policy elements, and allowed grouping policy into logical groups easily consumable and understandable by humans :-)
I therefore suggest extending the rule struct with an optional selector.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
IIRC the previous discussion brought this up, and we explicitly wanted to create separate NP objects for each rule, to prevent having gigantic objects that weren't easily parseable; i.e., more, smaller NP objects instead of fewer, much larger objects.
Understood, and that might be preferred by some. What I propose does not prevent anybody from doing that; the additional per-rule selector is optional. OTOH, the current proposal requires a separate object for every label combination. We started out with the same assumption and ended up with several hundred policy elements when converting an existing network policy.
To be clear, you're saying that within a single Namespace you had O(hundreds) of distinct selectors? Did those selectors all have the same policies, or were they unique {selector, policy} tuples?
From a practical POV, Kubernetes has a limit on the size of serialized API objects, currently 1MB. That is a lot, but we don't want to encourage monolithic objects.
It's the cross product of label combinations that blows it up. A simple example: you have pods app=A => app=B => app=C (A talks to B, B talks to C), and you use the labels env={qa|prod|dev} and tenant={Birds|Lizards} to isolate the pods in a multi-tenant environment. Maybe not the best example, but it really applies to any scenario where you leverage multiple labels to isolate things from each other.
This would result in the following individual policy elements being required to enforce it:

```
selector: app=B, env=qa, tenant=Birds
  allow: app=A, env=qa, tenant=Birds

selector: app=C, env=qa, tenant=Birds
  allow: app=B, env=qa, tenant=Birds

selector: app=B, env=prod, tenant=Birds
  allow: app=A, env=prod, tenant=Birds

selector: app=C, env=prod, tenant=Birds
  allow: app=B, env=prod, tenant=Birds

selector: app=B, env=dev, tenant=Birds
  allow: app=A, env=dev, tenant=Birds

selector: app=C, env=dev, tenant=Birds
  allow: app=B, env=dev, tenant=Birds

selector: app=B, env=qa, tenant=Lizards
  allow: app=A, env=qa, tenant=Lizards

selector: app=C, env=qa, tenant=Lizards
  allow: app=B, env=qa, tenant=Lizards

selector: app=B, env=prod, tenant=Lizards
  allow: app=A, env=prod, tenant=Lizards

selector: app=C, env=prod, tenant=Lizards
  allow: app=B, env=prod, tenant=Lizards

selector: app=B, env=dev, tenant=Lizards
  allow: app=A, env=dev, tenant=Lizards

selector: app=C, env=dev, tenant=Lizards
  allow: app=B, env=dev, tenant=Lizards
```
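The blow-up above can be generated mechanically. A small illustrative sketch (hypothetical dict shapes, not any real API) that enumerates the required objects for this example:

```python
from itertools import product

envs = ["qa", "prod", "dev"]
tenants = ["Birds", "Lizards"]
flows = [("A", "B"), ("B", "C")]  # allowed app-to-app traffic: A -> B, B -> C

# One {selector, allow} object per (flow, env, tenant) combination.
policies = []
for (src, dst), env, tenant in product(flows, envs, tenants):
    policies.append({
        "selector": {"app": dst, "env": env, "tenant": tenant},
        "allow": {"app": src, "env": env, "tenant": tenant},
    })

# 2 flows x 3 envs x 2 tenants = 12 objects; every new label value
# multiplies the object count again.
```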
A lot of what you're describing here would naturally be split into namespaces (dev/prod and birds/lizards), so they must necessarily be different objects. The question then remains whether one NP object per "app" makes sense (in this case app={a,b,c}) or one NP object per selector. ISTR discussing this and deciding to go simple, but I think it is a fair concern.
@caseydavenport what do you think about adding an "open questions" section at the bottom and including this question?
I just saw your comment further down regarding namespaces + pod selector, which resolves my main concern about using namespaces for tenancy: how to allow exposing individual pods across tenants. I think you will still eventually end up with people defining cross products for constructs where namespaces are not a great fit. Initial users will probably be able to give appropriate feedback.
Force-pushed from d943f8f to b77cbc2
```yaml
- port: // Port on the specified protocol (optional).
  protocol: // Protocol (TCP, UDP)
from: // List of allowed sources (optional).
- pods: // Label selector - selects Pods (optional).
```
need to define the semantics. Can I specify both pods and namespaces in a single entry? Or is this a one-of?
i.e.

```yaml
from:
- pods: { role: fe }
  namespaces: { tenant: coke }
```

vs

```yaml
from:
- pods: { role: fe }
- namespaces: { tenant: coke }
```

The former sort of implies "and", but I don't think we are offering an "and" semantic at all. Do we need to?
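The two readings can be made concrete with a toy evaluator (hypothetical helper names and dict shapes; the proposal's actual semantics were settled later in the thread):

```python
def peer_matches(peer, source):
    """AND reading: a single 'from' entry carrying both keys must match both."""
    if "pods" in peer and not all(
            source["pod_labels"].get(k) == v for k, v in peer["pods"].items()):
        return False
    if "namespaces" in peer and not all(
            source["ns_labels"].get(k) == v for k, v in peer["namespaces"].items()):
        return False
    return True

def rule_allows(from_list, source):
    """Entries in the 'from' list are ORed together."""
    return any(peer_matches(p, source) for p in from_list)

src = {"pod_labels": {"role": "fe"}, "ns_labels": {"tenant": "coke"}}

# One entry with both keys: role=fe AND tenant=coke must both hold.
and_form = [{"pods": {"role": "fe"}, "namespaces": {"tenant": "coke"}}]
# Two entries: role=fe OR tenant=coke is enough.
or_form = [{"pods": {"role": "fe"}}, {"namespaces": {"tenant": "coke"}}]
```

A source in a `tenant=pepsi` namespace with `role=fe` pod labels is rejected by `and_form` but accepted by `or_form`, which is exactly the ambiguity the comment raises.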
This was clarified later. All the more reason to describe this as a Go struct with rigorous comments on each field.
@caseydavenport A couple nits from @aanm and then I am LGTM on this!! Ping me when pushed.
ingress rule on those NetworkPolicy objects, per the network plugin’s capabilities.

NetworkPolicy objects and the above namespace isolation both act on _connections_ rather than individual packets. That is to say that if traffic from pod A to pod B is allowed by the configured policy, then the return packets for that connection from B -> A are also allowed, even if policy is in place that would prevent
Since we do not have negative actions, how about
s/prevent/not allow/
or, more to the point,
s/if policy is in place that would prevent/if no policy is in place that would allow/
?
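The connection-oriented behavior under discussion can be sketched conntrack-style (illustrative only; `allow_packet` and the global `established` set are inventions for this sketch, not anything in the proposal):

```python
established = set()  # (src, dst) pairs for connections already allowed

def allow_packet(src, dst, policy_allows):
    # Forward direction: consult policy, and remember the connection.
    if policy_allows(src, dst):
        established.add((src, dst))
        return True
    # Return direction: allowed if it belongs to an established connection,
    # even when no policy explicitly allows dst -> src.
    return (dst, src) in established
```

With a policy allowing only A -> B, the first A -> B packet is admitted and recorded, after which B -> A return packets pass the `established` check.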
Force-pushed from 43249f7 to 4b0750a
```
"""
if not pod.Namespace.Spec.NetworkPolicy.Ingress.Isolation:
    # If ingress isolation is disabled on the Namespace, all traffic is allowed.
    return True
```
I thought we agreed this should say to return `clusterDefault(traffic, pod)` instead of `True`.
Force-pushed from e03a9a5 to 7495ebb
```
# Ingress isolation is DefaultDeny and no policies match the given pod and traffic.
# Use the cluster default behavior.
return clusterDefault(traffic, pod)
```
No, the cluster default is what applies when namespace.networkPolicy.ingress.isolation is undefined; otherwise traffic is refused unless it is either for probing (health checking or lifecycle) or allowed by a NetworkPolicy.
If this is not clear yet, just consider this: where does this code ever return "False"?
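Putting that description together, the intended decision logic would read something like this sketch (hypothetical dict shapes and a stand-in `policy_allows` matcher, not the proposal's final pseudocode):

```python
def policy_allows(policy, traffic, pod):
    # Stand-in matcher: a policy names the pods it selects and the sources it
    # allows, in place of real label-selector matching.
    return pod["name"] in policy["selects"] and traffic["source"] in policy["allow_from"]

def is_traffic_allowed(traffic, pod, policies, cluster_default):
    isolation = pod["namespace"].get("ingress_isolation")  # None or "DefaultDeny"
    if isolation is None:
        # Isolation undefined on the Namespace: the cluster default applies.
        return cluster_default(traffic, pod)
    if traffic["source"] == pod["kubelet"]:
        # Kubelet health-check / lifecycle probes are always allowed.
        return True
    if any(policy_allows(p, traffic, pod) for p in policies):
        return True
    # DefaultDeny and no matching policy: this is where False is returned.
    return False
```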
Force-pushed from 7495ebb to f1c48b4
```
    # If ingress isolation is disabled on the Namespace, all traffic is allowed.
    return True
elif traffic.source == pod.node.kubelet:
    # Traffic is from kubelet health checks.
```
or lifecycle probes, I presume
Force-pushed from f1c48b4 to b916b23
@googlebot I signed it! Signed as a corporation: Tigera
Force-pushed from b916b23 to 872d2b7
We don't have a corp CLA for Tigera yet. It may have been sent, but it's not in the database yet. Manually verified, but maybe poke whomever to make sure that is filed.
I'm LGTM on this. nits can be fixed in followup
GCE e2e build/test passed for commit 872d2b7.
Automatic merge from submit-queue
w00t!
Yay!
Automatic merge from submit-queue

Add NetworkPolicy API Resource

API implementation of #24154

Still to do:
- [x] Get it working (See comments)
- [x] Make sure user-facing comments are correct.
- [x] Update naming in response to #24154
- [x] kubectl / client support
- [x] Release note.

```release-note
Implement NetworkPolicy v1beta1 API object / client support.
```

Next Steps:
- UTs in separate PR.
- e2e test in separate PR.
- make `Ports` + `From` pointers to slices (TODOs in code - to be done when auto-gen is fixed)

CC @thockin
This proposal attempts to add a v1beta API for network policy to Kubernetes, as discussed and designed within the networking SIG.