
Service status should include info about cloud load-balancer #18451

Closed
thockin opened this issue Dec 9, 2015 · 7 comments
Labels: lifecycle/stale, sig/network

Comments

thockin (Member) commented Dec 9, 2015

We should find a way to jam info about the cloud load-balancer that is backing a service into the k8s API. Ideally the LB name and maybe a URL. This info is obviously going to vary by cloud provider. Maybe something simple like a string-string map in status.ingress[*] ?

@arob for ideas
@justinsb I think we discussed this but did not implement it before 1.0
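
A minimal sketch of the idea (the providerInfo field and its keys below are hypothetical, not part of the API; any real keys would vary by cloud provider):

status:
  loadBalancer:
    ingress:
    - hostname: my-lb-1234567890.us-west-2.elb.amazonaws.com
      # hypothetical string-string map carrying provider-specific LB details
      providerInfo:
        loadBalancerName: my-lb
        url: https://example.cloud/console/load-balancers/my-lb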

ArtfulCoder (Contributor) commented Dec 9, 2015
We currently have coupled the creation of an LB with the creation of a k8s Service.
I feel we should have kept them as separate concepts.
Your suggestion would tighten the coupling between LBs and k8s Services.

An alternative would be to go down a path of creating a separate object for LBs (Ingress or something else) that can map to a service, instead of tightening this coupling.
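
For context, a minimal sketch of that decoupled shape, written against the later networking.k8s.io/v1 Ingress API rather than anything that existed at the time; the guestbook Service name and port are taken from the AWS example further down the thread:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: guestbook
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: guestbook   # the Service this separate LB object maps to
            port:
              number: 3000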

thockin (Member, Author) commented Dec 9, 2015

We might want to do that (I like the idea, but it's Yet Another Step), but no matter what, we HAVE this coupling right now and it is too opaque.


k8s-github-robot added the needs-sig label on May 31, 2017
0xmichalis (Contributor) commented
@kubernetes/sig-network-misc

k8s-ci-robot added the sig/network label on Jun 11, 2017
k8s-github-robot removed the needs-sig label on Jun 11, 2017
justinsb (Member) commented
I'm not sure what more we would want to do. We could include a "provider id" field for the load balancer, but in practice at least on AWS it doesn't gain us a lot - the DNS name itself is a better identifier.

Here is where we are today on AWS:

> k get services -owide
NAME           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)          AGE       SELECTOR
guestbook      100.64.32.128    ad55bee634ee011e7b89102561e08ab5-168463352.us-west-2.elb.amazonaws.com   3000:32664/TCP   4m        app=guestbook
...
> k get service guestbook -oyaml
...
status:
  loadBalancer:
    ingress:
    - hostname: ad55bee634ee011e7b89102561e08ab5-168463352.us-west-2.elb.amazonaws.com

ad55bee634ee011e7b89102561e08ab5 is the name of the AWS load balancer.

> aws elb describe-load-balancers
...
             "DNSName": "abe7f60d14e5611e797370a9aefe8be7-637046823.us-east-1.elb.amazonaws.com",
            "SecurityGroups": [
                "sg-6e76971f"
            ],
            "Policies": {
                "LBCookieStickinessPolicies": [],
                "AppCookieStickinessPolicies": [],
                "OtherPolicies": [
                    "k8s-proxyprotocol-enabled"
                ]
            },
            "LoadBalancerName": "abe7f60d14e5611e797370a9aefe8be7",

thockin (Member, Author) commented Jun 11, 2017 via email

fejta-bot commented
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Dec 27, 2017
thockin (Member, Author) commented Dec 27, 2017

Dup of #52670, which has more details.

thockin closed this as completed on Dec 27, 2017