KEP-4872: Harden Kubelet serving cert validation #4911
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: g-gaston. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
feature-gates:
- name: KubeletCertCNValidation
How about: NodeCertificateNameValidation?
The API server is verifying a certificate presented by a node, whatever software that node is running (which, OK, is very likely to be kubelet).
Co-authored-by: Tim Bannister <tim@scalefactory.com>
Provided an actor with control of a node can impersonate another node, the impact would be:

* Break confidentiality of the requests sent by the Kube API server to the kubelet (e.g. kubectl exec/logs). These are usually user-driven requests. That gives the threat actor the possibility of producing incorrect or misleading feedback. In the exec case, it could allow a threat actor to issue prompts for credentials. In addition, the exec commands might contain user secrets.
If this actor already owns a node in the cluster, then exec and logs do not need to impersonate a node; they just target pods on that node and require the kubelet to terminate the connection.
I don't think I get this one, sorry :(
Could you elaborate a bit?
Maybe this line needs a bit of clarification. What this attack breaks is the confidentiality of requests aimed at nodes other than the node the attacker controls.
What I meant is: if the actor already owns the Node, why would it need to impersonate it?
But IIUC this is based on the idea of stealing a valid certificate from a node while controlling a machine on the same network that is not part of the cluster.
Yeah. It's about stealing the certificate from one node and being able to impersonate any other node that gets assigned the same IP for the TTL of the certificate.
This vulnerability can be exploited through ARP poisoning or other routing attacks, allowing a rogue node to obtain a certificate for an IP it does not own and reroute traffic to itself.

When the Kube API server connects to a kubelet, it verifies that the serving certificate is signed by a trusted CA and that the IP or hostname it's connecting to is included in the certificate's SANs.
If a rogue node obtained a certificate for an IP it does not own and rerouted traffic to itself, it would be able to impersonate a Node that reports that IP.
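For context, a minimal Go sketch (not the actual kube-apiserver code) of the verification the quoted text describes: chain trust plus matching the dialed address against the SANs, with nothing binding the certificate to a particular Node name.

```go
package certcheck

import (
	"crypto/x509"
	"fmt"
)

// checkServingCert mirrors the verification described above: the chain must
// lead to a trusted CA, and the address being dialed must appear in the
// certificate's SANs. Nothing here ties the certificate to a specific Node
// object, which is the gap the KEP is addressing.
func checkServingCert(cert *x509.Certificate, roots *x509.CertPool, dialAddr string) error {
	if _, err := cert.Verify(x509.VerifyOptions{
		Roots:     roots,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}); err != nil {
		return fmt.Errorf("chain verification failed: %w", err)
	}
	// VerifyHostname accepts a DNS name or an IP address and checks it
	// against the certificate's SAN entries.
	if err := cert.VerifyHostname(dialAddr); err != nil {
		return fmt.Errorf("address %q not in SANs: %w", dialAddr, err)
	}
	return nil
}
```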
Can you expand on the "reroute traffic to itself" problem?
I may not be getting all the details of the scenario: nodeA with IP1 is removed, but somehow actorB manages to get its certificate, spawns a nodeB with nodeA's name (with IP2 or IP1?), joins it to the cluster, and sets the address IP1 on node.status.addresses?
For example:
- Node A has IP1 and gets a cert issued for IP1. The cert is valid for a year.
- An attacker gets control over Node A and obtains this certificate.
- Node A is recycled.
- Eventually, a Node B is created and, since IP1 is now free, it gets IP1 assigned.
- The attacker is still in the network, let's say with control of Machine C with IP2. It redirects traffic going to IP1 to Machine C, for example with an ARP poisoning attack, and uses the certificate it obtained from Node A, which is valid for IP1.
- If an exec/log request is performed aimed at a pod on Node B, the Kube API server will trust Machine C to be Node B.
We will remove the metric once the feature is GA.

> TODO: let's discuss this in the review. We could consider adding the node name to the metric or even keeping the metric post GA if it's valuable.
Node name has the cardinality problem, but leaving the metric sounds like a good thing to me; it can help detect issues with the node certificates caused by bugs, for example.
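To make the cardinality point concrete, here is a hedged sketch (the metric name and plain Prometheus registration are illustrative and not from the KEP; the real kube-apiserver would go through its component-base metrics conventions) of a counter that stays label-free and leaves identifying the offending node to logs:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// kubeletCertNameMismatches is a hypothetical counter for serving
// certificates that fail the new name check. Keeping it free of a node-name
// label avoids per-node cardinality growth; the specific node can be
// surfaced via logs or events instead.
var kubeletCertNameMismatches = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "apiserver_kubelet_serving_cert_name_mismatch_total",
	Help: "Number of kubelet serving certificates whose name did not match the expected node identity.",
})

func init() {
	prometheus.MustRegister(kubeletCertNameMismatches)
}
```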
#### Alpha

* Add feature flag for gating usage, off by default
* Add flag to disable extra validation
Remember that you need to add the flag only if the feature gate is enabled; otherwise, if the feature does not progress, removing the flag will not be possible without breaking the users that are already setting it.
+1, sounds like a great addition, simple and very effective.
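A minimal sketch of one way to tie the opt-out flag to the gate (the helper and wiring are hypothetical; the actual kube-apiserver options/validation code would look different), so the flag is only honored while the gate is on:

```go
package kubeletclient

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/component-base/featuregate"
)

// KubeletCertCNValidation is the gate name from the quoted diff; its
// registration (alpha, default off) is assumed to happen elsewhere.
const KubeletCertCNValidation featuregate.Feature = "KubeletCertCNValidation"

// validationEnabled honors the disable flag only while the feature gate is
// enabled, so the flag can be dropped cleanly if the feature never
// progresses, without breaking invocations that never relied on it.
func validationEnabled(disableExtraValidation bool) bool {
	if !utilfeature.DefaultFeatureGate.Enabled(KubeletCertCNValidation) {
		return false
	}
	return !disableExtraValidation
}
```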
## Proposal

We propose that the Kube API server be modified to validate that the Common Name (CN) of the kubelet's serving certificate equals `system:node:<nodename>`.
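A rough sketch of what that extra check could look like at the TLS layer (illustrative only, not the code in the PoC linked below; the function name is made up), layered on top of the existing chain/SAN verification via `VerifyPeerCertificate`:

```go
package kubeletclient

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

// withNodeNameCheck clones a tls.Config and adds a check that the kubelet's
// serving certificate has CN "system:node:<nodename>". The standard chain
// and SAN verification still runs first, since InsecureSkipVerify stays false.
func withNodeNameCheck(cfg *tls.Config, nodeName string) *tls.Config {
	out := cfg.Clone()
	expected := "system:node:" + nodeName
	out.VerifyPeerCertificate = func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
		if len(rawCerts) == 0 {
			return fmt.Errorf("no serving certificate presented")
		}
		leaf, err := x509.ParseCertificate(rawCerts[0])
		if err != nil {
			return fmt.Errorf("parsing serving certificate: %w", err)
		}
		if leaf.Subject.CommonName != expected {
			return fmt.Errorf("serving certificate CN %q does not match expected %q", leaf.Subject.CommonName, expected)
		}
		return nil
	}
	return out
}
```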
Do you have a proof-of-concept of the wiring / wrapping for this validation? I poked around at the kube-apiserver → node network paths and didn't see an obvious spot to wire something at the connection level that had visibility to both the node name and the serving certificate.
Yeah I do. Take it as a first draft. For example, I'm still on the fence about caching/trying to cache this in the client-go transport package.
https://github.com/kubernetes/kubernetes/compare/master...g-gaston:kubernetes:kubelet-serving-cert-cn-validation-poc?expand=1
Ah also I haven't implemented feature gates or metrics.
Hmm... thanks for the pointer... that's about as twisty as what I was afraid my exploration was going to turn into :-/
I think that approach is going to leak client transports in kube-apiserver... currently, there's a single transport for all kube-apiserver → kubelet connections. This approach looks like it will make a new transport per node, which gets held forever in the transport tlsCache ... so as nodes cycle in/out of a cluster, the kube-apiserver client cache will just grow unbounded :-/
We need to make sure we have a plausible way to hook in this validation that won't leak in that way.
Indeed, that's why I was on the fence about the caching.
The two alternatives I can see are to not cache these transports at all (either by making transports with custom cert validation uncachable, or by allowing a transport to be explicitly marked as uncachable and doing that for the ones we create) or to add a cache invalidation mechanism. I'm leaning towards the first one. The drawback is the overhead of creating a transport on the fly per request.
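A short sketch of the "don't cache" option under discussion (illustrative only; `withNodeNameCheck` is the hypothetical helper from the earlier sketch, and the real wiring through client-go's transport package would differ):

```go
package kubeletclient

import (
	"crypto/tls"
	"net/http"
)

// newUncachedNodeTransport builds the per-node transport directly instead of
// going through client-go's cached transport helpers, so nothing is retained
// in a shared cache after the request completes.
func newUncachedNodeTransport(base *tls.Config, nodeName string) *http.Transport {
	return &http.Transport{
		TLSClientConfig: withNodeNameCheck(base, nodeName),
		// Disabling keep-alives makes the trade-off explicit: each request
		// pays connection setup cost, but no per-node transport lingers.
		DisableKeepAlives: true,
	}
}
```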
+1, I'm in favor of this, and the proposed surface area (normal feature gate progression, ability to disable via flag, metrics indicating incompatible node certs) makes sense to me. It would be good to get some details on the impl (#4911 (comment)) to make sure this is feasible, and then I'm happy to ack / iterate on the KEP details.
I've left TODOs with things I think could use a discussion.