Accessing the apiserver from a container #7777
Comments
I am feeling very lucky.
We need this so that kube-scheduler, kube-controller, etc. running in pods can eventually talk to the apiserver securely. Right now they run on the same host without any auth information.
I'm in favor of by default mounting a token giving RO access to pods. Ideally, it'd be in a known place, such that if you package kubectl in your pod, it'll "just work", and if you use our go client library, there's a function you can call that also "just works".
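The "just works" behavior described above could be sketched roughly as follows. This is an illustration only: the token mount path used here is a hypothetical placeholder, not a settled API.

```python
# Sketch: consume a token mounted into the pod at a well-known path and
# turn it into the HTTP header an apiserver client would send.
# TOKEN_PATH is a made-up placeholder, not a decided location.
TOKEN_PATH = "/var/run/secrets/token"

def bearer_header(token_path=TOKEN_PATH):
    """Read a mounted token file and return the Authorization header."""
    with open(token_path) as f:
        token = f.read().strip()
    return {"Authorization": "Bearer " + token}
```

A client library's "just works" helper would then amount to calling something like this and attaching the resulting header to every apiserver request.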
Any new mechanism would need to be considered in the broader context of what we're doing with auth. See #7101, for instance. @satnam6502 You have one container that needs to access N namespaces, or N pods, each in a different namespace?
@bgrant0607 : N pods in one namespace, e.g. N pods running Elasticsearch in the namespace. I want to have a replication controller controlling N Elasticsearch pods in one namespace, of course. These pods have a container that at startup does a LIST to discover all the other pods launched by this replication controller, so they can do discovery to form a cluster. Specifically, the Elasticsearch container will do a LIST on pods with some user-provided label e.g.
/cc @cjcullen
Don't forget kubelet ;)
@satnam6502 I like the
If enough people like the
In his demo on Friday, @liggitt showed off some of the results of #7101, which creates the serviceAccount object, and also auto-creates service accounts for each namespace. This wouldn't get you readonly on the authorization side, but it would be dead-simple to roll into a Pod spec. Maybe it makes sense to auto-create a read-only service account for each namespace as well? There was a reason the read-only endpoint was so popular. cc @liggitt to correct any misrememberings/misunderstandings I may be having about service accounts.
Any reason not to do this by default? I think I need something like this to finish up #4567.
The issue isn't the user (or token/secret, which #7101 will help auto-create), it's connecting it to an authorizer rule that grants access to that user. The authentication layer is (appropriately) distinct from the authorization layer. There isn't dynamic modification of authorization rules in place yet.
@liggitt : are you saying that I should not adjust our bring-up scripts to add a secret in the default namespace containing auth information, but instead use service accounts somehow? I want to update the Elasticsearch pod that is run as part of the GCE cluster bring-up process, but the new version of my pod needs to speak to the apiserver and I don't want to use the soon-to-be-deprecated read-only port.
@satnam6502 bootstrap tokens in bring-up scripts seem reasonable to me... I just couldn't see a kubectl command accomplishing that
Right, I was proposing to add one :-)
@satnam6502 Are you aware of headless services? You could create a service with PortalIP = None for your pods and then just query the endpoint API. Soon you should be able to get the addresses from DNS also (#6666).
Yes, and I see that others have used a headless service for finding the endpoints to compose with for Elasticsearch. Once #6666 is in, I hope to use it instead.
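For reference, a headless service for the Elasticsearch case might look like the following sketch. Field names follow the later v1 API, in which the `PortalIP = None` mentioned above became `clusterIP: None`; the service name and label here are made up for illustration.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery   # hypothetical name
spec:
  clusterIP: None                 # headless: no virtual IP is allocated
  selector:
    app: elasticsearch            # the user-provided label from the thread
  ports:
    - port: 9300                  # Elasticsearch transport port
```

Querying the endpoints API for this service (or, once #6666 lands, its DNS records) then yields the matching pod addresses directly.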
I think this issue is subsumed by #5921.
I want to access the apiserver from a container, and I don't want to use the soon-to-be-deprecated read-only port, so I will need to provide auth information. I only need to read state information: I don't need to PUT/POST anything, I just need to list the running pods in the same namespace as the pod, modulo label filters. So it seems like my best bet is to use secrets? I could store either the basic auth, the bearer token, or the cert information in a secret. Every time I bring up a cluster I would have to manually create a secret which contained the cluster's auth information. Furthermore, since secrets live in a namespace and can only be used in that namespace, I would have to perform this step N times for N namespaces where I needed containers that have to make apiserver calls.
This seems odd to me. If I can issue a `kubectl` command to get a list of pods, why can't I launch a pod that has a container that lists the pods running in its own namespace? I feel that access to the apiserver (even for reading information in the same namespace) ought to be easier than this. One way to do this might be to automatically create a secret in the default namespace which has the cluster auth information, and allow this to be accessed by pods running in any namespace?
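The read-only LIST described above, authenticated with a bearer token taken from a secret, could be sketched as follows. The function names are made up, and the URL shape follows the Kubernetes v1 REST API for illustration.

```python
import json
import urllib.parse
import urllib.request

# Kubernetes v1 REST path for pods in one namespace (assumed for illustration).
API_PATH = "/api/v1/namespaces/{ns}/pods"

def pods_url(apiserver, namespace, label_selector):
    """Build the LIST-pods URL for one namespace with a label filter."""
    path = API_PATH.format(ns=namespace)
    query = urllib.parse.urlencode({"labelSelector": label_selector})
    return "{}{}?{}".format(apiserver, path, query)

def list_pods(apiserver, namespace, label_selector, token):
    """Read-only LIST call, authenticated with a bearer token from a secret."""
    req = urllib.request.Request(
        pods_url(apiserver, namespace, label_selector),
        headers={"Authorization": "Bearer " + token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `pods_url("https://10.0.0.1", "default", "app=elasticsearch")` builds the LIST request for the Elasticsearch pods in the default namespace.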
@pmorie @bgrant0607 @roberthbailey @smarterclayton @thockin @erictune @lavalamp @jlowdermilk