Secure intra-cluster communication with TLS #129
Kubelets currently talk to the apiserver over an insecure channel on GCE, so that they can generate events.
This depends on #3168 to get the certs in place.
I think this will be entirely (?) done once #3168 is in place -- kubelets only talk to the apiserver and will be secure. Ditto kube-proxy. Nodes won't use clustered salt for setup, but rather per-node salt (for all providers?). I propose that if there are remaining work items, we fork those into specific issues tracking them.
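For illustration only, here is a minimal sketch of a client that talks to the apiserver over HTTPS with a node client certificate, assuming the certs from #3168 have been placed on the node. The file paths and the apiserver address are hypothetical, not the project's actual configuration:

```go
// Minimal sketch (not the actual kubelet code): an HTTPS client that presents
// a client certificate to the apiserver, assuming cert/key/CA files have been
// distributed to the node (e.g. by the mechanism discussed in #3168).
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Hypothetical locations for the node's client cert, key, and cluster CA.
	cert, err := tls.LoadX509KeyPair("/var/lib/kubelet/client.crt", "/var/lib/kubelet/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/var/lib/kubelet/ca.crt")
	if err != nil {
		panic(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert}, // authenticate the node to the apiserver
				RootCAs:      caPool,                  // verify the apiserver's serving certificate
			},
		},
	}

	// Hypothetical read-write apiserver endpoint (port 6443, as noted in the audit below).
	resp, err := client.Get("https://10.240.0.2:6443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("apiserver replied:", resp.Status)
}
```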
I created a new cluster on GCE using default settings to do an audit of our current state of intra-cluster communication security. I started by looking through the open ports (using …). The master is listening on a variety of TCP ports, but many of these (7001, 10250, 10251, 10252, 2380, 8080) are only bound to the loopback address and thus wouldn't result in any communication traversing the network. Two ports (7080 and 6443) were open to the internal GCE network (10.240.x.x), and a handful of ports (22, 443, 4001, 4194) were open on all interfaces. As far as securing communications goes, ssh (22), https (443), and the read/write server on the master (6443) are all ok. There is no need for etcd (4001) or cAdvisor (4194) on the master to be reachable from elsewhere in the cluster, and once we have secured master/node communication (#3168) we should be able to upgrade the read-only server on the master (7080) from http to https.

Each node is listening for TCP connections on ports 22 (ssh), 4194 (cAdvisor), 10250 (kubelet), and 10249 (kube-proxy healthz port), and all of these endpoints are open on all interfaces. The kubelet is currently contacted over http but will be upgraded to https as part of #3168. cAdvisor and kube-proxy are both serving http and should either be upgraded to https or only bind to localhost (if they don't need to be contacted remotely).

In addition to the open ports, I looked at the active network connections on the various VMs. The nodes had established connections to the master on port 6443 (read-write) and 7080 (read-only) -- these communications should both be secured. All machines had connections to 169.254.169.254 on port 80, which is the local metadata server and doesn't leave the local machine (and is thus secure) -- this also isn't applicable for non-GCE deployments. The master had established connections to each node VM on port 10250 (kubelet), but I didn't see any connections to cAdvisor or the kube-proxy. I also saw connections to two container IP addresses -- 10.244.2.2:8086 (influx grafana controller) and 10.244.3.3:9200 (elasticsearch logging controller) -- both of which are serving http connections. We are also sending UDP packets to the cluster DNS service on port 53.
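As a rough sketch of how such a port audit could be reproduced from another machine on the network (the target IP and port list below are illustrative, taken from the ports mentioned above; on the VM itself, netstat-style tooling would give the authoritative picture):

```go
// Dial each TCP port of interest on a target VM and report which accept
// connections. Target address is hypothetical; run against the master and
// each node to approximate the audit described above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	target := "10.240.0.2" // hypothetical internal IP of the VM being audited
	ports := []int{22, 443, 4001, 4194, 6443, 7080, 10249, 10250}

	for _, p := range ports {
		addr := fmt.Sprintf("%s:%d", target, p)
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%-6d closed/filtered (%v)\n", p, err)
			continue
		}
		conn.Close()
		fmt.Printf("%-6d open\n", p)
	}
}
```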
I've split the remaining work out into smaller issues, so I'm going to close this overarching issue.
Right now components within a cluster (etcd, for example) are accessed over non-encrypted channels. Ideally this would be secured.
This is an offshoot of #128.
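To make the goal concrete, here is a minimal sketch of what securing one such endpoint could look like: serving it over TLS and requiring client certificates signed by the cluster CA, instead of plain HTTP. The certificate paths and listen address are hypothetical; this is illustrative, not the project's actual implementation:

```go
// Sketch: serve a component endpoint over mutual TLS instead of plain HTTP.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"net/http"
	"os"
)

func main() {
	// Require clients to present certificates signed by the cluster CA.
	caPEM, err := os.ReadFile("/srv/kubernetes/ca.crt") // hypothetical path
	if err != nil {
		panic(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":7080", // e.g. a read-only port upgraded from http to https
		TLSConfig: &tls.Config{
			ClientCAs:  caPool,
			ClientAuth: tls.RequireAndVerifyClientCert, // mutual TLS
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("ok\n"))
		}),
	}

	// Serving cert/key for this component (hypothetical paths).
	panic(server.ListenAndServeTLS("/srv/kubernetes/server.crt", "/srv/kubernetes/server.key"))
}
```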