
service API access from within a pod #7698

Closed · opaugam opened this issue May 4, 2015 · 10 comments
Labels
kind/support Categorizes issue or PR as a support question. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@opaugam

opaugam commented May 4, 2015

It appears that I can't use the service API at 10.0.0.2 (unauthorized even with proper auth + TLS on 0.15, and so far hanging on 0.16). For now I have reverted to hitting the master directly from within my pods, which is quite sub-optimal. Is there a plan to support access to the service API from within the cluster, typically when implementing a PaaS where pods can create other pods?

@roberthbailey roberthbailey added priority/backlog Higher priority than priority/awaiting-more-evidence. kind/support Categorizes issue or PR as a support question. team/cluster labels May 5, 2015
@roberthbailey
Contributor

On my cluster (launched from ~head today) running on GCE, this works fine both from a node in the cluster and from inside the fluentd-elasticsearch:1.5 container running on the node:

$ sudo docker exec -i -t 981869 bash
root@fluentd-elasticsearch-kubernetes-minion-685r:/# curl --insecure -H "Authorization: Bearer REDACTED" https://10.0.0.2/validate
[
  {
    "component": "controller-manager",
    "health": "success",
    "msg": "ok",
    "err": "nil"
  },
  {
    "component": "scheduler",
    "health": "success",
    "msg": "ok",
    "err": "nil"
  },
  {
    "component": "etcd-0",
    "health": "success",
    "msg": "{\"action\":\"get\",\"node\":{\"dir\":true,\"nodes\":[{\"key\":\"/registry\",\"dir\":true,\"modifiedIndex\":3,\"createdIndex\":3}]}}\n",
    "err": "nil"
  },
  {
    "component": "node-0",
    "health": "success",
    "msg": "ok",
    "err": "nil"
  },
  {
    "component": "node-1",
    "health": "success",
    "msg": "ok",
    "err": "nil"
  },
  {
    "component": "node-2",
    "health": "success",
    "msg": "ok",
    "err": "nil"
  },
  {
    "component": "node-3",
    "health": "success",
    "msg": "ok",
    "err": "nil"
  }
]

@opaugam
Author

opaugam commented May 5, 2015

Mmh, interesting: the same does not work from a cluster running on AWS, set up via the installer (wget -q -O - https://get.k8s.io | bash).

Is there any provider-specific constraint on accessing 10.0.0.2? Curling the RO service from within a pod was working great on 0.15 (including AWS), btw.

@roberthbailey
Contributor

I don't have access to an AWS cluster. Maybe @justinsb could test it out and see if he can reproduce the failure?

@justinsb
Member

justinsb commented May 8, 2015

This was recently broken, but I should have fixed it in #7678. Do you know if the version you're running includes that patch?

@justinsb
Member

justinsb commented May 8, 2015

Note that you now have to use a bearer token for auth (which you can get from a secrets volume, whose name I can track down for you if you're not already doing this!)
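To make the bearer-token flow concrete, here is a minimal sketch of reading a token from a mounted secrets volume and building the Authorization header from it. The mount path (/var/run/secrets/token) is a hypothetical placeholder; the real path depends on how the secret volume is declared in the pod spec.

```shell
# Build an Authorization header value from a token file mounted
# into the pod from a secrets volume.
auth_header() {
  token_file="$1"
  # Fail loudly if the secret isn't mounted where we expect it.
  [ -r "$token_file" ] || { echo "no token file at $token_file" >&2; return 1; }
  printf 'Authorization: Bearer %s' "$(cat "$token_file")"
}

# Example use against the apiserver (path is hypothetical):
#   curl --insecure -H "$(auth_header /var/run/secrets/token)" https://10.0.0.2/validate
```

This mirrors the curl invocation shown earlier in the thread, with the token read at request time instead of pasted inline.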

@opaugam
Author

opaugam commented May 8, 2015

Mh, ok, thanks for the heads up! I'll retry over the weekend. Is the secrets volume goodness described somewhere in the docs? I can hunt for it.

@justinsb
Member

justinsb commented May 9, 2015

I found some docs here: #7979

It does sound like maybe this isn't the official way to get a bearer token!

@opaugam
Author

opaugam commented May 9, 2015

Looks purr-fect to me, giving it a try... thanks!

@opaugam
Author

opaugam commented May 11, 2015

Mh, I think I'm going to wait for #7101 to land instead. That sounds more like the way it should be.

@lavalamp
Member

> service API @ 10.0.0.2

You should be using the env vars KUBERNETES_RO_SERVICE_HOST and KUBERNETES_RO_SERVICE_PORT, not the IP address directly; the IP address is not guaranteed to be the same in every cluster.
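As a minimal sketch of that advice, the endpoint can be assembled from the injected service env vars rather than a hardcoded 10.0.0.2. This assumes the read-only service speaks plain HTTP, as it did in early releases; the API path is only an illustration.

```shell
# Build the read-only API endpoint from the env vars that are
# injected into every container, instead of hardcoding the IP.
ro_endpoint() {
  # ${VAR:?msg} aborts with an error if the variable is unset or empty.
  host="${KUBERNETES_RO_SERVICE_HOST:?KUBERNETES_RO_SERVICE_HOST not set}"
  port="${KUBERNETES_RO_SERVICE_PORT:?KUBERNETES_RO_SERVICE_PORT not set}"
  printf 'http://%s:%s' "$host" "$port"
}

# Example use (API path is illustrative):
#   curl "$(ro_endpoint)/api/v1beta3/pods"
```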

But we're deprecating that anyway; you already found #7101, and also see #5921.

(closing because the other issues mentioned cover this)
