Automatically create a secret in the default namespace that contains cluster access auth info #7979
Comments
I see that we already have:
For GCE/GKE we do (@erictune added them to cluster/gce/configure-vm.sh).
Note that these are all just bearer tokens and they don't contain the CA root cert, which we need to distribute for #7964.
#7101 will automatically get you bearer tokens for service accounts, but that's it. There was discussion about following that with adding a kubeconfig key to the secret that bundles the CA cert and the token together.
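For concreteness, a rough sketch of what bundling the CA cert and a bearer token into a single kubeconfig document could look like, written against today's client-go packages (which postdate this discussion); the server address, context names, and file paths are illustrative assumptions, not anything decided here:

```go
// Sketch: bundle the cluster CA cert and a bearer token into one kubeconfig
// blob, which could then be stored under a single key of the secret.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// Hypothetical inputs: the cluster CA bundle and a service-account token.
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	token, err := os.ReadFile("token")
	if err != nil {
		panic(err)
	}

	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["default"] = &clientcmdapi.Cluster{
		Server:                   "https://kubernetes.default.svc", // assumed apiserver address
		CertificateAuthorityData: caPEM,
	}
	cfg.AuthInfos["default"] = &clientcmdapi.AuthInfo{
		Token: string(token),
	}
	cfg.Contexts["default"] = &clientcmdapi.Context{
		Cluster:  "default",
		AuthInfo: "default",
	}
	cfg.CurrentContext = "default"

	// Serialize to YAML; this blob is what would live under a `kubeconfig`
	// key in the secret, so a pod mounts one file and gets both the CA
	// bundle and the credential.
	out, err := clientcmd.Write(*cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```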
Let's not use the admin account for pods. Secrets are for pods. So let's
I created
Let me look into this and see if I can switch to
What's the status on this? It seems that tokens are a big issue for many people, but I guess a lot of it has to do with upgrades. Do we think this issue is something we want to do? If not, let's close it out.
I can't help thinking that we will keep wanting to make system pods that need to make Kubernetes API calls, but perhaps we should wait until there is a clear demand for this scenario.
Don't service accounts handle this?
Yes. Room for improvement (including the CA or a kubeconfig with the token as well), but yes... that's how pods get API credentials.
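To make that concrete, here is a minimal sketch of a pod consuming the mounted service-account credentials directly, assuming the standard in-cluster mount path and the `KUBERNETES_SERVICE_HOST`/`KUBERNETES_SERVICE_PORT` environment variables; error handling is trimmed for brevity:

```go
// Sketch: use the mounted service-account token and CA cert to call the
// apiserver over HTTPS from inside a pod.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

	token, err := os.ReadFile(saDir + "/token")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(saDir + "/ca.crt")
	if err != nil {
		panic(err)
	}

	// Trust only the cluster CA when talking to the apiserver.
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	// Hit the apiserver's /api endpoint with the bearer token.
	host := os.Getenv("KUBERNETES_SERVICE_HOST")
	port := os.Getenv("KUBERNETES_SERVICE_PORT")
	req, _ := http.NewRequest("GET", fmt.Sprintf("https://%s:%s/api", host, port), nil)
	req.Header.Set("Authorization", "Bearer "+string(token))

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```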
When a cluster is created, automatically make a kubernetes-auth secret that contains the basic auth, bearer token auth, and certs to allow applications running inside pods to access the apiserver, etc.
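A hedged sketch of what creating such a kubernetes-auth secret might look like, written with today's client-go; the key names (basic-auth, token, ca.crt), file paths, and placeholder values are purely illustrative assumptions, not anything this issue specifies:

```go
// Sketch: an installer-side program that creates the proposed
// `kubernetes-auth` secret in the default namespace.
package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from an admin kubeconfig (hypothetical path in $KUBECONFIG).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "kubernetes-auth", Namespace: "default"},
		Type:       corev1.SecretTypeOpaque,
		Data: map[string][]byte{
			"basic-auth": []byte("admin:password"),        // placeholder basic-auth credentials
			"token":      []byte("bearer-token-goes-here"), // placeholder bearer token
			"ca.crt":     caCertPEM(),                      // cluster CA certificate
		},
	}
	if _, err := cs.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

// caCertPEM loads the cluster CA bundle from wherever the installer keeps it
// (the path below is hypothetical).
func caCertPEM() []byte {
	pem, err := os.ReadFile("/etc/kubernetes/ca.crt")
	if err != nil {
		panic(err)
	}
	return pem
}
```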