service API access from within a pod #7698
On my cluster (launched from ~head today) running on GCE, this works fine both from a node in the cluster and from inside the pod.
Mmh interesting - the same does not work from a cluster running on AWS and set up via the installer (wget -q -O - https://get.k8s.io | bash). Is there any provider-specific constraint on accessing 10.0.0.2? Curling the RO service from within a pod was working great on 0.15 (including AWS), btw.
I don't have access to an AWS cluster. Maybe @justinsb could test it out and see if he can reproduce the failure?
This was recently broken, but I should have fixed it in #7678. Do you know if the version you're running includes that patch?
Note that you have to use a bearer token now for auth (which you can get from a secrets volume whose name I can track down for you if you're not already doing this!)
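A minimal sketch of what the comment above describes - curling the API with a bearer token read from a mounted secrets volume. The mount path and the env var names below are assumptions for illustration; the thread does not name the actual volume:

```shell
# Read the bearer token from the mounted secrets volume
# (the path is an assumed mount point, not given in this thread).
TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
AUTH_HEADER="Authorization: Bearer ${TOKEN}"

# Call the API with the token; host/port env var names are assumptions.
curl -sS -H "${AUTH_HEADER}" \
  "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api"
```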
mh ok, thanks for the heads up! I'll retry over the weekend - is the secrets volume goodness described somewhere in the docs? I can hunt for it..
I found some docs here: #7979. It does sound like maybe this isn't the official way to get a bearer token!
looks purr-fect to me - giving it a try.. thanks!
Mh, I think I'm going to wait for #7101 to land instead. Sounds more like the way it should be.
You should be using the env vars KUBERNETES_RO_SERVICE_HOST and KUBERNETES_RO_SERVICE_PORT, not the IP address directly -- the IP address is not guaranteed to be 10.0.0.2 in every cluster. But we're deprecating that anyway -- you already found #7101; also see #5921. (Closing because the other issues mentioned cover this.)
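The env-var approach from the comment above could look like the following sketch; the env var names come from that comment, but the API path queried is an assumption:

```shell
# Build the API URL from the injected service env vars instead of
# hardcoding 10.0.0.2 (the path "/api" is an assumed endpoint).
URL="http://${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api"

# The RO service historically did not require auth, hence plain curl.
curl -sS "${URL}"
```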
It appears that I can't use the service API @ 10.0.0.2 (unauthorized even with proper auth + TLS on 0.15, and so far hanging on 0.16). At this point I've reverted to using the master directly from within my pods, which is quite sub-optimal. Is there a plan to support access to the service API from within the cluster, typically when implementing a PaaS where pods have the ability to create other pods?
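The PaaS use case described above - a pod creating another pod via the API - could be sketched roughly as follows. The token path, host/port env var names, and API version/path are all assumptions for illustration:

```shell
# Write a minimal pod manifest (names and API version are assumptions).
cat > /tmp/pod.json <<'EOF'
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {"name": "worker-1"},
  "spec": {
    "containers": [
      {"name": "worker", "image": "busybox", "command": ["sleep", "3600"]}
    ]
  }
}
EOF

# POST it to the API using a bearer token from an assumed secrets mount.
TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
curl -sS -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  --data @/tmp/pod.json \
  "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/namespaces/default/pods"
```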