Securing kubernetes-ro use cases #5921
Comments
@deads2k - I thought we were trying to get rid of kubernetes_auth files ;-)
@LiGgit - FYI
.kubeconfig files can be made portable, so they could serve the same purpose. Using .kubeconfig files means that they can be manipulated and inspected using the command line tooling we have already developed. They also make it easy for users to be sure that the credentials they are going to provide to pods have the permissions they expect: the user simply creates the .kubeconfig file and runs the normal … I know there's been some difficulty understanding the merge order and the like, but a single self-contained .kubeconfig file that holds all the secrets with a single set of entries isn't too bad to reason about.
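For concreteness, here is a minimal sketch of such a self-contained .kubeconfig, written out from a shell heredoc. Every name, server address, path, and token below is a placeholder, not a value from this thread:

```bash
# Hypothetical sketch of a self-contained .kubeconfig holding a bearer token.
# All names and addresses are placeholders.
cat <<'EOF' > .kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://kubernetes-master:6443
    certificate-authority: /etc/kubernetes/ca.crt
users:
- name: pod-user
  user:
    token: REPLACE_WITH_TOKEN
contexts:
- name: default
  context:
    cluster: local
    user: pod-user
current-context: default
EOF
```

Because the file names the cluster, the user, and the context in one place, the existing kubectl config tooling can inspect and edit it directly.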
Do we really need breakage? Can't we just define a different port or …
Another alternative: Instead of creating a proxy on each node, create a proxy that we build into a docker image that can (optionally) be deployed in a pod, along with a service that exposes a read-only http endpoint. This gives folks an easy transition path from http -> https but makes the default cluster installation only use secure communications to the master.
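A rough sketch of that alternative in manifest form (the proxy image is hypothetical, and the v1 API syntax here postdates this issue):

```bash
# Hypothetical sketch: run an http->https proxy as a pod, exposed by a service.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: ro-proxy
  labels:
    app: ro-proxy
spec:
  containers:
  - name: proxy
    # Hypothetical image that accepts plain http and forwards to the master over https.
    image: example/apiserver-ro-proxy
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-ro
spec:
  selector:
    app: ro-proxy
  ports:
  - port: 80
    targetPort: 8080
EOF
```

Clusters that want the transition path deploy this; default installations simply omit it and talk https to the master.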
@liggitt FYI (take 2, hopefully spelled correctly this time) |
@cjcullen and anyone else following along: Make secrets at cluster startup. #5470 is in. It also makes secrets at cluster startup in the default namespace with names: …

You can see these if you start a cluster and then … So, dns needs to use the … Then, do the same for … Then, decide if it is worthwhile to do: …

If we don't change those, the demos won't work on a system without port 7080. We may not need to use a different token for every component. We could just define a "default pod user", make a token for that user, and use that token in all of the above examples.
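The secret names are elided above; after starting a cluster they should be visible with something like:

```bash
# List the secrets created at cluster startup in the default namespace.
kubectl get secrets --namespace=default
```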
#7154 updated skydns/kube2sky to use port 6443 w/ the token-system-dns. I'm taking a look at fluentd/elasticsearch now.
It looks like fluentd/elasticsearch doesn't actually use kubernetes-ro. Am I missing something?
I can't find fluentd/elasticsearch either now.
@vishh changed heapster to support kubernetes_auth files. https://github.com/GoogleCloudPlatform/heapster/pull/232/files
For the examples that use KUBERNETES_RO, would it make sense to create a readonly user/token pair? We'd need to get the policy right to block that user from reading other secrets... #7101 might make this easier: if we had service accounts, we could just have demos create a "my-demo-sa" service account, and then create a pod that used that service account.
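A sketch of what such a demo could look like (using today's API syntax; serviceAccountName is the current field name and postdates this comment, and the image is a placeholder):

```bash
# Sketch: create a demo service account and run a pod as that account.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-demo-sa
---
apiVersion: v1
kind: Pod
metadata:
  name: my-demo
spec:
  serviceAccountName: my-demo-sa   # the pod talks to the apiserver with this account's token
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF
```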
@vishh since you made that change, we moved the bar, so another PR is needed.
How does this relate to #4567?
"Readonly" isn't actually a particularly useful protection since you can read a secret that contains a credential for accessing in a readwrite fashion. So, I suggest that we deprecate the readonly port, and just use the per-namespace default service account (#7101) as the way to access the apiserver in all examples and contribs. We can leave it unspecified what permissions this account has (currently read-write, but evolving once the Policy feature is merged). |
I'm gonna say that #9233 will fix this; there may be more, but this is the last piece I'm aware of.
I think this is fixed. |
We'd like all communications to the apiserver to be over https and with authorization. Kubernetes-ro does not meet these standards.
This bug is about things that use the kubernetes-ro service or the KUBERNETES_RO_* env vars. kube-proxy is discussed separately in #5917, since it explicitly uses port 7080 and has a straightforward fix.
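For context, the pattern being deprecated is plain-http access through the injected service environment variables, roughly as follows (the variable names follow the standard service env convention, and /api/v1beta1 is the API path of that era):

```bash
# The insecure pattern being removed: unauthenticated http to kubernetes-ro.
curl "http://${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api/v1beta1/pods"
```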
Existing Uses
Use cases for kubernetes-ro
Backwards Compatible or Breakage?
In order to make this be https, we have to break some use cases, because some existing use cases assume it is http. So, we could either have a breakage day, or have it be configurable at cluster setup time and just move GCE/GKE to the https option. The latter seems expedient, but it would force some code, like kube2sky and elasticsearch, to handle both options. Better to just break it.
Decision: breakage.
Credential distribution
We need to distribute credentials to clients. The use cases are in pods, so we need a way to distribute credentials to all pods that need them. Credentials would probably be in the form of a kubernetes_auth file.
We could use secrets to distribute the kubernetes_auth files. This does make some of the pod descriptions quite a bit more verbose, but we can address that later using service accounts or config templates/expansions. A sketch of the distribution mechanism follows.
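This sketch uses today's kubectl syntax, which postdates this issue; the secret name, file path, and pod are all hypothetical:

```bash
# Hypothetical sketch: store a kubernetes_auth file in a secret and mount it into a pod.
# Assumes a local ./kubernetes_auth file; "kube-auth" is a made-up name.
kubectl create secret generic kube-auth --from-file=kubernetes_auth=./kubernetes_auth

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: auth-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: kube-auth
      mountPath: /etc/kubernetes_auth_dir   # file appears as /etc/kubernetes_auth_dir/kubernetes_auth
      readOnly: true
  volumes:
  - name: kube-auth
    secret:
      secretName: kube-auth
EOF
```

The volume block is what makes the pod description more verbose, which is the cost the paragraph above refers to.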
Credential creation
The credential is stored in a secret object, created at cluster setup by cluster/saltbase/salt/kube-addons/kube-addons.sh.
Credential use
Pods can point at the file with the --kubernetes_auth flag; pods that use curl need to extract the auth token from the kubernetes_auth file.
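A rough sketch of the curl path, assuming the kubernetes_auth file is JSON and that the token lives in a BearerToken field (the field name, file path, and master address are assumptions, not confirmed in this issue):

```bash
# Extract the bearer token from a kubernetes_auth file and call the apiserver.
TOKEN=$(sed -n 's/.*"BearerToken": *"\([^"]*\)".*/\1/p' /etc/kubernetes_auth)

# -k skips CA verification for brevity; a real client should pass --cacert instead.
curl -k -H "Authorization: Bearer $TOKEN" \
     https://kubernetes-master:6443/api/v1beta1/pods
```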
Cleanup
As an initial step, every cluster should be built with a secret with user readonly-user.
Alternative
Run a proxy on each node which listens on each netns's localhost address, or on some other address visible to all netnses. Listen for http. Proxy via HTTPS to the master. See #2209
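A minimal sketch of such a node proxy using socat (assuming socat is available on the node; the master address, port, and CA path are placeholders):

```bash
# Listen for plain http on localhost:7080 and relay each connection to the
# master over TLS. Per-netns listeners would need one instance per address.
socat TCP-LISTEN:7080,bind=127.0.0.1,fork,reuseaddr \
      OPENSSL:kubernetes-master:6443,cafile=/etc/kubernetes/ca.crt
```

Clients keep speaking http to localhost, while every hop that leaves the node is https.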
Advantages:
Disadvantages: