
Services backed by DaemonSet pods should have a hostname based network identifier #41977

Closed
fabiand opened this issue Feb 23, 2017 · 17 comments
Labels
area/workload-api/daemonset, lifecycle/rotten, sig/apps

Comments

@fabiand
Contributor

fabiand commented Feb 23, 2017

There should be a way to request that Pods associated with a DaemonSet carry a pod name which is equal to the hostname.

This should make it easier to address specific pods using kube-dns.

I.e. currently a pod name of a DS is something like foo-x15nb, which leads to the FQDN foo-x15nb.myservice.mycluster.svc.

But to make it predictable it would be good if the hostname were used; then it's easier to look up a specific service on a specific host: thefoohost.myservice.mycluster.svc.
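
For illustration, the setup in question is a headless Service selecting the DaemonSet's pods. A minimal sketch, reusing the placeholder names from above and assuming an app: foo label on the pod template:

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mycluster
spec:
  clusterIP: None         # headless: DNS serves per-pod records instead of a single VIP
  selector:
    app: foo              # assumed label on the DaemonSet's pod template
  ports:
    - port: 80

The request is that the per-pod records under such a Service be predictable per node (thefoohost.myservice.mycluster.svc) rather than derived from the generated pod name.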

@0xmichalis added the sig/apps and area/workload-api/daemonset labels Feb 24, 2017
@0xmichalis
Contributor

@kubernetes/sig-apps-feature-requests

@kow3ns
Member

kow3ns commented Feb 25, 2017

Are you asking for the Pod to have the exact same name as the node? If so, you could only have one DaemonSet per namespace with this property without causing a name collision.

@0xmichalis
Contributor

It would also block cases like the first point in #31693 (comment)

@0xmichalis
Contributor

It would also block cases like the first point in #31693 (comment)

OTOH, currently we assume we always run at most one replica per node so this would help in that regard.

@fabiand
Contributor Author

fabiand commented Feb 27, 2017

@kow3ns you are right, that's borked.

My main point is that there should be a way for daemonset pods to be addressed per host through DNS.

Considering that skydns is used, the scheme is (as you probably know) $host.$service.$namespace.cluster.local.
But in the case of daemonsets the $host part is generated, thus you cannot directly address a specific pod without out-of-band knowledge about its generated name.
To fix this, a mechanism is required to pin the hostname for daemonset pods.
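
For a single, hand-written Pod the existing hostname and subdomain fields already allow pinning the $host part. A minimal sketch, assuming a headless Service named myservice exists in the same namespace:

apiVersion: v1
kind: Pod
metadata:
  name: foo-on-thefoohost
spec:
  hostname: thefoohost    # plain string; becomes the $host part of the DNS name
  subdomain: myservice    # must match the name of a headless Service in this namespace
  containers:
    - name: foo
      image: "ubuntu:16.04"

This yields thefoohost.myservice.$namespace.svc.cluster.local, but in a DaemonSet these fields live in the shared pod template, so every replica would get the same hostname; hence the request for a per-node mechanism.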

@0xmichalis
Contributor

Could be dsname-hostname

@fabiand
Contributor Author

fabiand commented Feb 27, 2017

Where would you use it, @Kargakis ?

Perhaps a flag on the daemonset spec could help to signal that the real hostname should be used as the pod hostname, i.e. hostsHostname: true.
Once the pod is instantiated, the pod's hostname field would be set to the node's hostname.
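
A sketch of what that could look like; hostsHostname is the hypothetical flag proposed above and does not exist in the DaemonSet API:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: foo
spec:
  hostsHostname: true     # hypothetical: use the node's hostname as the pod's hostname
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: "ubuntu:16.04"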

@smarterclayton
Contributor

smarterclayton commented Feb 27, 2017 via email

@fabiand closed this as completed Feb 27, 2017
@fabiand reopened this Feb 27, 2017
@fabiand
Contributor Author

fabiand commented Feb 27, 2017

Apologies. Wrong button.

@smarterclayton
Contributor

smarterclayton commented Feb 27, 2017 via email

@johscheuer
Contributor

Hi,

I tried a naive approach with the following spec:

apiVersion: extensions/v1beta1
kind: DaemonSet
spec:
  template:
    spec:
      hostname:
        valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      containers:
        - name: test
          image: "ubuntu:16.04"

but when I try to use this spec I get the following error (even though the fieldPath returns a string, see https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L1275):

error: error validating "test.yaml": error validating data: expected type string, for field spec.template.spec.hostname, got map; if you choose to ignore these errors, turn validation off with --validate=false

Is there a way to tell the downward API to return a string/single value?
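
As far as I can tell, fieldRef substitution is only supported for environment variables and downward API volumes, while spec.hostname accepts only a literal string, which is why the validator rejects the map above. A sketch of the nearest working pattern, which exposes the node name inside the container without changing the pod's hostname or DNS name:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: test
spec:
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: "ubuntu:16.04"
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName   # resolved to a plain string when the pod is created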

@fabiand changed the title from "DaemonSet pods should have pod name == hostname" to "Services backed by DaemonSet pods should have a hostname based network identifier" Mar 31, 2017
@fabiand
Contributor Author

fabiand commented Mar 31, 2017

Let me rephrase the issue, as it might not be clear from my initial statement.

Whenever a Pod is exposing a service, the service exposed by that specific pod can be accessed using the FQDN specific-pod-name.my-service.my-namespace.svc.cluster.local.
For regular pods this is okay, as the name specific-pod-name is known in advance and can be used to reference the specific pod.
However, in the case of daemon sets the name of a pod is not known in advance, and thus we cannot reference a specific pod.
For daemon sets in particular, the host plays an important role, as daemon sets can be considered to be node-local services. Any other pod on a node might want to access a specific service on its own node (for several reasons; one is to avoid the network latency of accessing a remote pod providing the same service).

Thus, to allow a pod to access a specific service on its own node, it would be helpful if a daemonset pod could be directly addressed using the service, the namespace, and - because it is especially interesting in the daemonset case - the hostname.
I.e.: <hostname>.<service>.<namespace>.svc.cluster.local.
With such an address it would be possible for pods to directly access a specific service on a given node.

There could be collisions if a service is bound to more than one pod on a node. I wonder if this could be somehow avoided for the daemonset case.
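
If such per-node records existed, a consuming pod could build the address of its node-local peer from the downward API. A sketch under that assumption, reusing the placeholder names my-service and my-namespace from above:

apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
    - name: client
      image: "ubuntu:16.04"
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: NODE_LOCAL_PEER
          # hypothetical address: only resolvable if the requested per-node records exist
          value: "$(NODE_NAME).my-service.my-namespace.svc.cluster.local"

The $(NODE_NAME) reference is expanded from the environment variable defined just above it.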

@0xmichalis
Contributor

@fabiand have you looked at the proposal for node-local services?
#28637

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Dec 22, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 21, 2018
@fabiand
Contributor Author

fabiand commented Jan 22, 2018

Would still be important to me.

Might be addressed by #41442

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
