
Add well-known label for custom endpoint controller as well as "service.kubernetes.io/service-proxy-name" #87412

Closed
s1061123 opened this issue Jan 21, 2020 · 17 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/network Categorizes an issue or PR as relevant to SIG Network.

Comments

@s1061123

What would you like to be added:

Add a well-known label to the endpoint/endpointslice controller so that endpoint/endpointslice handling can be delegated to custom endpoint/endpointslice controllers.

Why is this needed:

kube-proxy has a well-known label for custom proxies, "service.kubernetes.io/service-proxy-name", which delegates handling of a service to a custom proxy. When the label is set, kube-proxy does not generate iptables rules for the service. However, the endpoint/endpointslice controller has no similar label, hence this issue addresses that gap.
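By analogy with the existing proxy label, a Service opting out of the built-in endpoint controller might look like the sketch below. The "endpoint-controller-name" label key is hypothetical and shown only to illustrate the proposal; it is not an existing Kubernetes API.

```yaml
# Sketch of the proposal. The endpoint-controller-name key is
# hypothetical; it mirrors the real service-proxy-name convention.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    # existing, honored by kube-proxy:
    service.kubernetes.io/service-proxy-name: my-proxy
    # proposed, would be honored by the endpoint/endpointslice controller:
    service.kubernetes.io/endpoint-controller-name: my-ep-controller
spec:
  selector:
    app: my-app
  ports:
  - port: 80
```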

@s1061123 s1061123 added the kind/feature Categorizes issue or PR as related to a new feature. label Jan 21, 2020
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jan 21, 2020
@s1061123
Author

/sig network

@k8s-ci-robot k8s-ci-robot added sig/network Categorizes an issue or PR as relevant to SIG Network. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jan 21, 2020
@athenabot

/triage unresolved

Comment /remove-triage unresolved when the issue is assessed and confirmed.

🤖 I am a bot run by vllry. 👩‍🔬

@k8s-ci-robot k8s-ci-robot added the triage/unresolved Indicates an issue that can not or will not be resolved. label Jan 21, 2020
@robscott
Member

/remove-triage unresolved
/assign

@k8s-ci-robot k8s-ci-robot removed the triage/unresolved Indicates an issue that can not or will not be resolved. label Jan 24, 2020
@uablrek
Contributor

uablrek commented Feb 21, 2020

At the sig-network meeting 2020-02-20 a work-around using a webhook was mentioned. I have a simpler work-around that may be tested: define the service without a selector, so no endpoints will be assigned by Kubernetes. Then add the selector as an annotation so your custom endpoint controller can set the endpoints in its own way. Example:

apiVersion: v1
kind: Service
metadata:
  name: mconnect
  labels:
    service.kubernetes.io/service-proxy-name: my-lb-setup
  annotations:
    example.com/selector: mconnect
spec:
  ipFamily: IPv4
  ports:
  - port: 5001
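Since the Service above has no selector, Kubernetes creates no Endpoints object for it, so the custom controller has to create one itself, with the same name as the Service (this is the standard mechanism for services without selectors). A minimal sketch, with a made-up pod IP:

```yaml
# Endpoints object a custom controller might create for the
# selector-less "mconnect" Service above; 10.0.0.5 is a placeholder.
apiVersion: v1
kind: Endpoints
metadata:
  name: mconnect   # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.5
  ports:
  - port: 5001
```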

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 21, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 20, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@uablrek
Contributor

uablrek commented Oct 7, 2021

@s1061123 As discussed on the network-plumbing meeting:

If the proposal in this issue were implemented, a Multus service would look something like:

apiVersion: v1
kind: Service
metadata:
  name: multus-service
  labels:
    service.kubernetes.io/service-proxy-name: multus-proxy
    service.kubernetes.io/endpoint-controller-name: multus-endpoint-controller
  annotations:
    multus.kubernetes.io/network: macvlan         # (forgot the name of this in the presentation)
spec:
  selector:
    app: multus-app
  ports:
  - port: 6000

I.e., you would already have 2 labels and 1 annotation, making this a special (but clearly understandable) service.

What I propose is that you skip the selector altogether and replace it with an annotation, like:

apiVersion: v1
kind: Service
metadata:
  name: multus-service
  labels:
    service.kubernetes.io/service-proxy-name: multus-proxy
  annotations:
    multus.kubernetes.io/network: macvlan         # (forgot the name of this in the presentation)
    multus.kubernetes.io/selector: "app: multus-app"
spec:
  ports:
  - port: 6000

The multus-endpoint-controller would watch only services with label "service.kubernetes.io/service-proxy-name: multus-proxy" and add endpoints for the selector in "multus.kubernetes.io/selector" instead of the normal selector.

Now instead of having 2 labels and 1 annotation, you have 1 label and 2 annotations. Not a big difference IMHO.

A problem may be if someone wants multiple selectors, but I think that is a rare case.
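In either variant, the custom controller publishes the endpoints itself. Using the EndpointSlice API, the object it creates for the multus-service above might look like the following sketch; the slice name and addresses are placeholders:

```yaml
# Hypothetical EndpointSlice created by the custom controller.
# The kubernetes.io/service-name label ties it to the Service;
# name and addresses below are placeholders.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: multus-service-abc12
  labels:
    kubernetes.io/service-name: multus-service
addressType: IPv4
ports:
- port: 6000
  protocol: TCP
endpoints:
- addresses:
  - "10.1.1.10"   # e.g. the pod's secondary (macvlan) IP
```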

@s1061123
Author

s1061123 commented Oct 7, 2021

In this case, we need to serialize the selector into one line, which is not intuitive for users. And for headless services, Kubernetes provides additional features, as described at https://kubernetes.io/docs/concepts/services-networking/service/#headless-services, so we also need to take those into account. It could take more time than I expect...

@uablrek
Contributor

uablrek commented Oct 7, 2021

I have not seen any use of multiple selectors. A service request will then end up in a random POD. The only use-case I can imagine is some canary test.

A records for any Endpoints that share a name with the Service, for all other types.

For headless services, A records will be added for the endpoints you create if they share a name with the service. Nothing more is needed from the multus-endpoint-controller.

@s1061123
Author

s1061123 commented Oct 7, 2021

I don't have any clear solution for that, i.e. covering all headless services with multus interfaces in a feasible way, so let's discuss it later. For now, headless services are in an unsupported state; we can discuss them at the npwg meeting: https://github.com/k8snetworkplumbingwg/community#meetings

@uablrek
Contributor

uablrek commented Oct 7, 2021

Ok. Thanks for a good presentation anyway 👍

@uablrek
Contributor

uablrek commented Oct 7, 2021

BTW, I have a blog post about "kpng" which I hope will make life easier for proxy-builders; please check a preview at: https://github.com/Nordix/website/blob/kpng-1/content/en/blog/_posts/2021-00-00-kpng-specialized-proxiers.md

@uablrek
Contributor

uablrek commented Oct 7, 2021

My idea was not new and is already used in danm: https://github.com/nokia/danm/blob/master/schema/DanmService.yaml#L20

@s1061123
Author

s1061123 commented Oct 7, 2021

Hmm, I suppose your design does not conflict with my controller, so how about implementing it as a different controller?

@uablrek
Contributor

uablrek commented Oct 8, 2021

I was merely suggesting a way of avoiding duplicated Endpoints/EndpointSlices.

IMHO your proposal in this issue is sound and simple and no more "hackish" than the existing "service.kubernetes.io/service-proxy-name", in fact exactly the same. I am unsure why it was rejected, but I suppose nobody wants to touch anything while waiting for some future "class" field.

My focus is on load-balancing external traffic, but I found your approach in multus-service to make the load-balancing inside the PODs truly innovative 👍
