
Secure intra-cluster communication with TLS #129

Closed
jbeda opened this issue Jun 17, 2014 · 7 comments
Labels
area/security priority/backlog Higher priority than priority/awaiting-more-evidence. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.

Comments

@jbeda
Contributor

jbeda commented Jun 17, 2014

Right now components within a cluster (etcd, for example) are accessed over non-encrypted channels. Ideally this would be secured.

This is an offshoot of #128.

@bgrant0607
Member

@erictune @lavalamp Have we done anything about this or are we planning to do this soon?

@bgrant0607 added the priority/backlog label and removed the priority/awaiting-more-evidence and status/closed/duplicate labels Dec 3, 2014
@erictune
Member

erictune commented Dec 4, 2014

Kubelets talk insecure TLS to the apiserver on GCE, so that they can generate events.
Bugs are open to make that work on other cloud providers.
The minion self-registration we talked about this afternoon would add secure TLS.
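For readers unfamiliar with the distinction being drawn here: "insecure TLS" encrypts the channel but skips certificate verification, so any endpoint can impersonate the apiserver. A minimal Python sketch of the two client configurations (illustrative only, not code from Kubernetes itself):

```python
import ssl

def insecure_context():
    # "Insecure TLS": the channel is encrypted, but the server's
    # certificate is never checked, so a man-in-the-middle can
    # present any certificate at all.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False          # must be disabled first
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

def secure_context(ca_file=None):
    # Secure TLS: the client verifies the server certificate against
    # a CA (e.g. a cluster CA distributed to each node).
    return ssl.create_default_context(cafile=ca_file)
```

Note the ordering in `insecure_context`: `check_hostname` has to be turned off before `verify_mode` can be relaxed to `CERT_NONE`, or `ssl` raises a `ValueError`.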

@davidopp added the sig/cluster-lifecycle label Feb 17, 2015
@bgrant0607 added this to the v1.0 milestone Feb 28, 2015
@a-robinson
Contributor

This depends on #3168 to get the certs in place.

@alex-mohr
Contributor

I think this will be entirely(?) done once #3168 is in place -- kubelets only talk to the apiserver and will be secure. Ditto kube-proxy. Nodes won't use clustered salt for setup, but rather per-node salt (for all providers?). If there are remaining work items, I propose we fork those into specific issues to track them.

@alex-mohr updated the priority labels (priority/awaiting-more-evidence, priority/backlog) Mar 19, 2015
@roberthbailey
Contributor

I created a new cluster on GCE using default settings to do an audit of our current state of intra-cluster communication security.

I started by looking through the open ports (using netstat -an).

The master is listening on a variety of TCP ports, but many of these (7001, 10250, 10251, 10252, 2380, 8080) are only bound to the loopback address and thus wouldn't result in any communication traversing the network. Two ports (7080 and 6443) were open to the internal GCE network (10.240.x.x). And a handful of ports (22, 443, 4001, 4194) were open on all interfaces. As far as securing communications goes, ssh (22), https (443) and the read/write server on the master (6443) are all ok. There is no need for etcd (4001) or cAdvisor (4194) on the master to be reachable from elsewhere in the cluster and once we have secured master/node communication (#3168) we should be able to upgrade the read-only server on the master (7080) from http to https.

Each node is listening for TCP connections on ports 22 (ssh), 4194 (cAdvisor), 10250 (kubelet), and 10249 (kube-proxy healthz port), and all of these endpoints are open on all interfaces. The kubelet is currently contacted over http but will be upgraded to https as part of #3168. cAdvisor and kube-proxy are both serving http and should either be upgraded to https or only bind to localhost (if they don't need to be contacted remotely).
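The core distinction this audit keeps drawing -- a listener bound only to loopback versus one bound on all interfaces -- can be sketched with Python's socket module. The helper names here are illustrative, not part of any Kubernetes tooling:

```python
import socket

def open_listener(host, port=0):
    """Bind a TCP listener on the given address (port 0 = ephemeral)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    s.listen(1)
    return s

def is_loopback_only(sock):
    # A service bound to 127.0.0.1 never receives traffic from the
    # network; one bound to 0.0.0.0 is reachable on every interface,
    # which is what the audit flags for cAdvisor and kube-proxy.
    addr, _port = sock.getsockname()
    return addr == "127.0.0.1"

local = open_listener("127.0.0.1")   # like 8080 on the master
wide = open_listener("0.0.0.0")      # like 4194 on the master
print(is_loopback_only(local), is_loopback_only(wide))  # True False
local.close(); wide.close()
```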

In addition to the open ports, I looked at the active network connections on the various VMs.

The nodes had established connections to the master on port 6443 (read-write) and 7080 (read-only) -- these communications should both be secured. All machines had connections to 169.254.169.254 on port 80 which is the local metadata server and doesn't leave the local machine (and is thus secure) -- this also isn't applicable for non-GCE deployments. The master had established connections to each node VM on port 10250 (kubelet) but I didn't see any connections to cAdvisor or the kube-proxy.

I also saw connections to two container IP addresses -- 10.244.2.2:8086 (influx-grafana controller) and 10.244.3.3:9200 (elasticsearch-logging controller) -- both of which are serving http connections. We are also sending UDP packets to the cluster DNS service on port 53.
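A sketch of the kind of triage this comment describes: walk the established connections and flag any remote endpoint whose port is not on a known-TLS list. The netstat-style sample and the port set are hypothetical, modeled on the audit above:

```python
# Hypothetical netstat-style sample: proto, local addr, remote addr, state.
NETSTAT = """\
tcp 10.240.0.2:53412 10.240.0.1:6443 ESTABLISHED
tcp 10.240.0.2:41200 10.240.0.1:7080 ESTABLISHED
tcp 10.240.0.1:58031 10.240.0.3:10250 ESTABLISHED
"""

# Ports the audit identified as already TLS-protected.
TLS_PORTS = {443, 6443}

def plaintext_peers(netstat_text):
    """Return (host, port) of remote endpoints not on the TLS list."""
    flagged = []
    for line in netstat_text.strip().splitlines():
        _proto, _local, remote, _state = line.split()
        host, port = remote.rsplit(":", 1)
        if int(port) not in TLS_PORTS:
            flagged.append((host, int(port)))
    return flagged

print(plaintext_peers(NETSTAT))
# [('10.240.0.1', 7080), ('10.240.0.3', 10250)]
```

Here the read-only server (7080) and the kubelet (10250) are flagged, matching the two gaps the audit calls out.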

@erictune
Member

1. Fixing port 7080 and kubernetes-ro
    1. Fix kube-proxy to use 6443 instead of 7080: "Move kube-proxy to use authed-port instead of readonly-port" (#5917)
    2. Fix things that use the kubernetes-ro service and KUBERNETES_RO env vars, including kube2sky, elasticsearch, and several examples: "Securing kubernetes-ro use cases" (#5921)
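Both items above amount to the same client-side change: prefer the authenticated HTTPS port (6443) over the read-only HTTP port (7080). A hedged sketch of that endpoint selection -- `KUBERNETES_RO` mirrors the env var named above, `KUBERNETES_SERVICE_HOST` and the helper itself are illustrative:

```python
import os

def apiserver_endpoint(env=os.environ):
    """Prefer the authenticated HTTPS endpoint; fall back to the
    legacy read-only one only if nothing else is configured."""
    secure = env.get("KUBERNETES_SERVICE_HOST")
    if secure:
        return "https://%s:6443" % secure       # authed read-write port
    legacy = env.get("KUBERNETES_RO")
    if legacy:
        return "http://%s:7080" % legacy        # plain-HTTP read-only port
    raise RuntimeError("no apiserver endpoint configured")

print(apiserver_endpoint({"KUBERNETES_SERVICE_HOST": "10.240.0.1"}))
# https://10.240.0.1:6443
```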

@roberthbailey
Contributor

I've split the remaining work out into smaller issues, so I'm going to close this overarching issue.

7 participants