
Route53 entries for AWS ELBs #21397

Closed
chbatey opened this issue Feb 17, 2016 · 26 comments
Assignees
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/network Categorizes an issue or PR as relevant to SIG Network.

Comments

@chbatey
Contributor

chbatey commented Feb 17, 2016

We have a working version of this that is implemented for the AWS cloud provider.

If the kubernetes.io/aws-lb-cname-zone label is set, we create a CNAME in Route53 with the format [service-name]-[namespace]-[hosted-zone].[zone].
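A rough sketch of how the proposed usage might look (the label name and record format come from the proposal above; the Service name, namespace, and zone are invented, and per the later comments the accompanying PR was never merged as-is):

```yaml
# Sketch only: label name taken from the proposal above; other values invented.
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: prod
  labels:
    kubernetes.io/aws-lb-cname-zone: example.com   # hosted zone to create the CNAME in
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
  - port: 443
    targetPort: 8443
# Expected result: a Route53 CNAME named per the
# [service-name]-[namespace]-[hosted-zone].[zone] format above, pointing at the
# hostname AWS assigns to the provisioned ELB.
```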

We'll be creating a PR for this soon and wanted to get people's opinions (@justinsb).

@therc
Member

therc commented Feb 18, 2016

Shouldn't this be part of an Ingress? I'm working on implementing one, as suggested by @bprashanth. My first target is SSL-serving load balancers, for services that are not of type LoadBalancer (otherwise either you end up with two ELBs for each service or the Ingress controller will fight with the controller manager to tweak the single ELB's listeners). Next, Route53 support. Then, managing a pool of nginx instances to implement L7 path routing, like GCE's Ingress controller does.

@therc
Member

therc commented Feb 18, 2016

What I also meant to say is that even if this is merged, since some users might not care about setting up an Ingress and the controller that it requires, we should make sure that there are no surprising behaviours in any of the four possible permutations (with/without label x with/without Ingress). Also, should this be a label or an annotation?

@bprashanth
Contributor

DNS feels like it belongs in Ingress, because you need the hostname in the cert, and to use for things like SNI. Whether we're going to fold L4 into Ingress and deprecate Type=LoadBalancer or not is something we've tabled thus far, but will probably get around to discussing for the next release (there are valid reasons to keep Services as simple backends and express higher-level lb/dos/gslb/ingress-y concepts in the Ingress -- services are ill suited to express fanout, which is typically what one wants with DNS+url paths).

That said, I'd recommend doing whatever's easier at this point (maybe a shared lib that we can re-use in the AWS ingress controller?). IMO this should be an annotation.
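To make the fanout point concrete: this is the kind of thing an Ingress can express and a single Service of type LoadBalancer cannot, one hostname fanned out to several backends by URL path. A minimal sketch only; the names are invented and extensions/v1beta1 was the Ingress API group of that era.

```yaml
# Sketch of host + path fanout via Ingress; service names are invented.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-example
spec:
  rules:
  - host: app.example.com       # one DNS name for the whole application
    http:
      paths:
      - path: /api              # /api traffic goes to the api Service
        backend:
          serviceName: api
          servicePort: 80
      - path: /web              # /web traffic goes to the web Service
        backend:
          serviceName: web
          servicePort: 80
```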

@jmastr

jmastr commented Jun 10, 2016

@therc +1 Are there any updates about Route53 support? Thanks for the SSL ELB support btw.

@manojlds

manojlds commented Oct 7, 2016

What's the best approach to set up private/public ELBs with Route53? Currently I have to handle them via Terraform, which means that I cannot have an arbitrary service in k8s served via ELBs and DNS; the services have to be predefined externally.

@timbunce

Wishlist: could the DNS name be templated? Rather than mandating one particular format, like [service-name]-[namespace]-[hosted-zone].[zone], people could then use a template that suits their needs, for example {{.service}}-{{.namespace}}-{{.clustername}}.
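As a purely hypothetical illustration of this wishlist (the annotation name below is invented; only the template syntax comes from the suggestion above), the template might be supplied on the Service and rendered per cluster:

```yaml
# Hypothetical only: no such annotation or templating exists; the template
# syntax follows the suggestion above.
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: prod
  annotations:
    kubernetes.io/aws-lb-dns-template: "{{.service}}-{{.namespace}}-{{.clustername}}"
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
  - port: 80
# With clustername "mycluster", this would render as my-api-prod-mycluster
# in whatever hosted zone the cluster is configured to manage.
```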

@danielwhatmuff

Has this been implemented? Are there any docs for the label mentioned above, kubernetes.io/aws-lb-cname-zone?

justinsb self-assigned this on Nov 15, 2016
@sachincab

Will this be part of 1.5 release?

@deitch
Contributor

deitch commented Dec 19, 2016

Been looking for this for a while. It is great that k8s automatically sets up the ELB, but you need to take a manual step to retrieve the assigned AWS hostname and then create a CNAME.

Any update?

@ghost

ghost commented Dec 23, 2016

+1

@jsravn
Contributor

jsravn commented Jan 9, 2017

I'm one of the original authors of the PR for this. If you really want to, you can try cherry-picking the last commit in #23576.

But we've long ago moved to using ingress. As originally discussed, that's the "right way" in kubernetes to handle external DNS. Our WIP is at https://github.com/sky-uk/feed - to use it, create an ELB, then run the ingress controller and DNS controller to manage a hosted zone. External DNS entries are then managed via ingress resources. There is a similar effort going on in kops and k8s (https://github.com/kubernetes/ingress/tree/master/controllers/nginx).
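Concretely, the flow described above looks roughly like this: each application publishes an Ingress with a hostname, the ingress controller routes traffic for that hostname, and the DNS controller creates the Route53 record for it in the hosted zone. A minimal sketch with invented names (extensions/v1beta1 was the Ingress API group of that era):

```yaml
# Sketch of the ingress-driven approach described above; names are invented.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service
  namespace: prod
spec:
  rules:
  - host: my-service.example.com    # the DNS controller creates this record in
    http:                           # the hosted zone, pointing at the ELB
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
```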

I still see the utility in this for small clusters (as you hit the ELB limits very quickly otherwise), and it shouldn't be too much work for someone to pick up the work done so far.

@deitch
Contributor

deitch commented Jan 9, 2017

But we've long ago moved to using ingress. As originally discussed, that's the "right way" in kubernetes to handle external DNS.

@jsravn so how do you have it configured? Assuming you start a Service that can have anywhere from 1-1000s (or more) of pods in its deployment, what actually routes the traffic in from the external world, and how are you configuring it? Is the ingress controller just managing ELB, or do you have actual proxies (e.g. nginx) set up, and if so, what is connecting to them?

create an ELB, then run the ingress controller and dns controller to manage a hosted zone. Then external DNS entries are managed via ingress resources

... which sounds like you create a single ELB separately (manually, or via CFN or Terraform), but what does it connect to and how is it configured?

@pawelprazak

Is there an ETA on this?

BTW, as a workaround I implemented a simple AWS Lambda with boto3, triggered by CloudWatch Events on the AddTag event.

@jsravn
Contributor

jsravn commented Jan 13, 2017

@deitch controller manages an nginx instance which does virtual host routing. Can be scaled horizontally by attaching more to the single ELB.

@deitch
Contributor

deitch commented Jan 13, 2017

@jsravn

controller manages an nginx instance which does virtual host routing. Can be scaled horizontally by attaching more to the single ELB.

ingress controller creates and manages one or more nginx instances that act as app load balancers in front of the service? But those, in turn, need routable IPs. Are those fronted by ELB?

@jsravn
Contributor

jsravn commented Jan 13, 2017

@deitch yep ELB is the front-end.

@deitch
Contributor

deitch commented Jan 13, 2017

yep ELB is the front-end

So traffic flows Internet --> ELB --> nginx --> pods?

  1. What do you gain by having nginx in here as opposed to ELB --> pods ? Is it just that k8s does a better job controlling nginx with an ingress controller, as opposed to trying to manage ELB directly?
  2. How do you have ELB configured to talk to nginx?
  3. Do you gain any benefits over NodePort if it is talking to nginx anyways?

Put it another way: when you spin up a new service, you need to spin up (or configure existing) nginx to handle the traffic; how does the ELB then get configured to route traffic correctly to nginx?

This thread started because the LoadBalancer Service type required a manual step to get the ELB's dynamic URL and add it to DNS so clients would know where to route. I think you've eliminated some of that here, but (perhaps being dense) I'm still missing some of it.

@anguslees
Member

Re @bprashanth's comment above: I was expecting to see at least one outraged comment about how the Internet, Kubernetes, and DNS entries are not just for HTTP(S), and so DNS is not something that should be configured (only) in Ingress annotations, plus something about the music choices of this younger generation.

Did I miss something? Oh look, now there is such a comment 😛

@deitch
Contributor

deitch commented Feb 20, 2017

@jsravn my thinking actually has come around almost a complete 180. I no longer do automatic ELB (or anything else), unless I am doing non-http/https services.

I now have everything defined by Ingress, and use a Traefik or nginx ingress controller cluster-wide. I front it with a single ELB/ALB set to forward all subdomains of *.somesub.mydomain.com to the ingress controller's port(s). Since the cloud load balancer just sees the wildcard, all the configuration that matters ends up being handled by the ingress controller, as configured by the Ingress resources.
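For reference, a sketch of the cluster-side half of this setup under those assumptions (names and ports invented): the ingress controller is exposed via a NodePort Service, the single ELB/ALB targets that node port on the nodes, and a wildcard *.somesub.mydomain.com record points at the ELB.

```yaml
# Sketch only: the Traefik/nginx ingress controller is exposed on a NodePort,
# the ELB/ALB listener forwards to that port on the nodes, and a wildcard
# DNS record for *.somesub.mydomain.com points at the ELB. Values invented.
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: ingress-controller
  ports:
  - name: http
    port: 80
    nodePort: 30080             # the ELB listener targets this node port
```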

The real reason I came around was portability. I wanted configs that could be dropped on minikube, uat, prod, AWS, GCE, etc. as easily as anywhere else. type=LoadBalancer made that difficult, and adding Route53 even more so.

With Ingress, I now have an almost-unified config for everywhere, making microservice development easy. I tell the devs: if your service is public-facing, add an Ingress; if not, don't. Their Ingress works wherever we send it.

@deitch
Contributor

deitch commented Feb 20, 2017

something about the music choices of this younger generation.

LOL! I cannot complain, my kids all like classic rock in addition to more modern stuff. Warms my heart to hear my kid playing Eagles on the guitar or singing Beatles, Motown, etc.

If only I could get them to appreciate Mozart, Beethoven and Bach too....

@deitch
Contributor

deitch commented Feb 26, 2017

With Ingress, I now have an almost-unified config for everywhere, making microservice development easy.

I spoke too soon. I got nabbed by the lack-of-wildcard limitation. The latest ingress supports *.mydomain.com as the Host: part of the ingress, but not subdomain.*. That makes it closely tied to the actual final domain, and therefore the config is non-portable between environments. Drat.
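For the record, the limitation looks like this in an Ingress rule (sketch only; names invented, extensions/v1beta1 API of that era):

```yaml
# A leading wildcard host is accepted, per the comment above, but the domain
# part cannot be wildcarded, so the final domain ends up baked into the config.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: "*.mydomain.com"      # supported
  # - host: "subdomain.*"       # not supported, hence the portability problem
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
```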

@dlouzan

dlouzan commented Apr 28, 2017

How do you handle this case for internal ELBs? I have a service running in k8s that I would like to make accessible to other internal non-k8s services, so I can't use the service names (those are only resolvable on the k8s network). For this I am creating a k8s LoadBalancer Service using the beta annotation for AWS internal ELBs. The issue is that afterwards I have to manually create an internal Route53 entry pointing to the internal ELB, so that I can give a meaningful and stable DNS address to the clients of this service.
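For reference, a minimal sketch of the Service in question (names invented; the annotation is the beta internal-ELB annotation referred to above, and the exact value it expects has varied across Kubernetes versions):

```yaml
# Sketch of an internal-ELB Service; names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: internal-api
  namespace: prod
  annotations:
    # beta annotation referred to above; historically any non-empty value
    # (commonly "0.0.0.0/0", later "true") marked the ELB as internal
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer
  selector:
    app: internal-api
  ports:
  - port: 443
    targetPort: 8443
# The manual step described above is then to create a record in the private
# hosted zone pointing a stable name at this internal ELB's generated hostname.
```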

Any ideas? Can this be done automatically by Ingress?

@deitch
Contributor

deitch commented Apr 28, 2017

@dlouzan FWIW, I avoid ELB auto-management entirely. I have ELBs managed separately (mostly Terraform), and kube components are only responsible for kube.

I have a Terraform Kubernetes module I wrote that takes the input ports it needs as arguments and sets up ELBs, security groups, and NACLs to support them. Not ideal, but I don't like mixing Kubernetes with AWS ELB management.

thockin added the sig/network label (Categorizes an issue or PR as relevant to SIG Network.) on May 19, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Dec 24, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 23, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
