
"get ingress" still shows port 80 even though it is HTTPS only. #500

Closed
ahmetb opened this issue Jun 6, 2018 · 40 comments
Labels
area/kubectl, kind/bug, lifecycle/rotten, priority/P2, sig/cli

Comments

@ahmetb
Member

ahmetb commented Jun 6, 2018

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Kubernetes version (use kubectl version): v1.10.2
Environment:

  • Cloud provider or hardware configuration: GKE

I have an HTTPS-only ingress that shows port 80 in kubectl get (kubectl v1.10.2):

# kubectl get ing
NAME       HOSTS     ADDRESS          PORTS     AGE
helloweb   *         35.227.248.205   80, 443   4m

I don't think port 80 is actually open:

[screenshot omitted]

describe output:

Name:             helloweb
Namespace:        default
Address:          35.227.248.205
Default backend:  helloweb-backend:443 (10.32.4.7:8443)
TLS:
  yourdomain-tls terminates
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     helloweb-backend:443 (10.32.4.7:8443)
Annotations:
  ingress.kubernetes.io/backends:                    {"k8s-be-30921--906590fce6179ef9":"Unknown"}
  ingress.kubernetes.io/https-forwarding-rule:       k8s-fws-default-helloweb--906590fce6179ef9
  ingress.kubernetes.io/https-target-proxy:          k8s-tps-default-helloweb--906590fce6179ef9
  ingress.kubernetes.io/ssl-cert:                    k8s-ssl-09cce9ee44983641-6d1404f893135fac--906590fce6179ef9
  ingress.kubernetes.io/url-map:                     k8s-um-default-helloweb--906590fce6179ef9
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.allow-http":"false"},"labels":{"app":"hello"},"name":"helloweb","namespace":"default"},"spec":{"backend":{"serviceName":"helloweb-backend","servicePort":443},"tls":[{"secretName":"yourdomain-tls"}]}}

  kubernetes.io/ingress.allow-http:  false
Events:
  Type    Reason   Age              From                     Message
  ----    ------   ----             ----                     -------
  Normal  ADD      3m               loadbalancer-controller  default/helloweb
  Normal  CREATE   1m               loadbalancer-controller  ip: 35.227.248.205
  Normal  Service  1m (x3 over 1m)  loadbalancer-controller  default backend set to helloweb-backend:30921

My YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloweb
  labels:
    app: hello
  annotations:
    kubernetes.io/ingress.allow-http: "false" # disable HTTP
spec:
  tls:
    - secretName: yourdomain-tls
  backend:
    serviceName: helloweb-backend
    servicePort: 443

This looks like a kubectl bug, in that it just assumes port 80 is open for all ingresses?
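
For completeness, here's a quick way to double-check whether the load balancer actually answers on port 80 (a minimal sketch; the IP is the one from the kubectl get ing output above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try both frontend ports on the ingress IP reported above.
	for _, port := range []string{"80", "443"} {
		conn, err := net.DialTimeout("tcp", "35.227.248.205:"+port, 3*time.Second)
		if err != nil {
			fmt.Printf("port %s: closed or filtered (%v)\n", port, err)
			continue
		}
		conn.Close()
		fmt.Printf("port %s: open\n", port)
	}
}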

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 4, 2018
@ahmetb
Member Author

ahmetb commented Sep 5, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 5, 2018
@seans3
Contributor

seans3 commented Sep 25, 2018

@ahmetb Can you do a describe on that ingress to get more detail?

/kind bug
/sig cli
/area kubectl
/priority P2

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. sig/cli Categorizes an issue or PR as relevant to SIG CLI. area/kubectl priority/P2 labels Sep 25, 2018
@ahmetb
Member Author

ahmetb commented Sep 25, 2018

@seans3 describe output is already in the original issue posting.

@code-sleuth

From the specifications laid out here, this is working as expected, since it has a TLS section in the Ingress YAML.
This should be closed as not a bug.
cc: @juanvallejo @soltysh @seans3

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 21, 2019
@seans3
Contributor

seans3 commented Jan 23, 2019

/remove-lifecycle stale

According to @code-sleuth, this is not a bug. More triage needed.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 23, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2019
@seans3
Contributor

seans3 commented Apr 29, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 28, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 27, 2019
@seans3
Contributor

seans3 commented Aug 28, 2019

/remove-lifecycle rotten

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ahmetb
Member Author

ahmetb commented Apr 29, 2020

No, it's not fixed in v1.18.2, so the speculation must be incorrect. :) Port 80 is still there with the provided repro.

kubectl get ingress
NAME       HOSTS   ADDRESS         PORTS     AGE
helloweb   *       35.241.55.240   80, 443   3m23s

Kubectl isn't deciding what data to display, it is just showing what comes back from the API. If you do a kubectl get ingress -v9 you can see what comes back from the API.

Incorrect. Here's my response body, no port 80

{"kind":"Ingress","apiVersion":"extensions/v1beta1","metadata":{"name":"helloweb","namespace":"default","selfLink":"/apis/extensions/v1beta1/namespaces/default/ingresses/helloweb","uid":"57126f07-2d9f-4d8a-9bd4-a2a9679fa285","resourceVersion":"101690245","generation":4,"creationTimestamp":"2020-04-29T16:56:32Z","labels":{"app":"hello"},"annotations":{"ingress.kubernetes.io/backends":"{\"k8s-be-31964--83021845813e7945\":\"Unknown\"}","ingress.kubernetes.io/https-forwarding-rule":"k8s-fws-default-helloweb--83021845813e7945","ingress.kubernetes.io/https-target-proxy":"k8s-tps-default-helloweb--83021845813e7945","ingress.kubernetes.io/ssl-cert":"k8s-ssl-ad62a64410ec665a-06d6c289c9f90a6e--83021845813e7945","ingress.kubernetes.io/url-map":"k8s-um-default-helloweb--83021845813e7945","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{\"kubernetes.io/ingress.allow-http\":\"false\"},\"labels\":{\"app\":\"hello\"},\"name\":\"helloweb\",\"namespace\":\"default\"},\"spec\":{\"backend\":{\"serviceName\":\"helloweb-backend\",\"servicePort\":443},\"tls\":[{\"secretName\":\"yourdomain-tls\"}]}}\n","kubernetes.io/ingress.allow-http":"false"},"finalizers":["networking.gke.io/ingress-finalizer"]},"spec":{"backend":{"serviceName":"hello","servicePort":443},"tls":[{"secretName":"yourdomain-tls"}]},"status":{"loadBalancer":{"ingress":[{"ip":"35.241.55.240"}]}}}

@brianpursley
Member

brianpursley commented Apr 29, 2020

Thanks @ahmetb
I took another look and think I found where it is happening:
https://github.com/kubernetes/kubernetes/blob/a26c34e47007dfa26378a7fac5296763df476d11/pkg/printers/internalversion/printers.go#L1157-L1162

It assumes that if you have TLS, the ports are 80 and 443; otherwise, just 80. Neither is a particularly safe assumption.

I'll see if I can change the printer to pull the ports from the JSON...

/assign

EDIT: It looks like the ports shown in kubectl get ingress are actually the frontend ports, so it would not make sense to pull the ports out of the JSON.
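
To make the hard-coded behavior concrete, here is a minimal, self-contained approximation of what that printer code does (names simplified; not the exact printers.go source):

package main

import (
	"fmt"
	"strings"
)

// ingressTLS stands in for networking.IngressTLS in this sketch.
type ingressTLS struct {
	Hosts      []string
	SecretName string
}

// formatPorts mirrors the hard-coded logic: any TLS entry means "80, 443",
// otherwise "80". The spec and annotations are never consulted.
func formatPorts(tls []ingressTLS) string {
	if len(tls) != 0 {
		return "80, 443"
	}
	return "80"
}

// ingressRow approximates how the PORTS cell lands in the printed table.
func ingressRow(name string, hosts []string, address string, tls []ingressTLS) []string {
	return []string{name, strings.Join(hosts, ","), address, formatPorts(tls)}
}

func main() {
	// An HTTPS-only ingress (allow-http=false) still prints "80, 443".
	row := ingressRow("helloweb", []string{"*"}, "35.227.248.205",
		[]ingressTLS{{SecretName: "yourdomain-tls"}})
	fmt.Println(strings.Join(row, "   "))
}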

@brianpursley
Member

brianpursley commented Apr 29, 2020

I'm thinking the problem is that the ingress doesn't know which ports the load balancer is listening on.

When I create an ingress in GKE and select HTTPS, it gives me both 80 and 443 (see below):
[screenshot: GKE load balancer frontends on ports 80 and 443]

But then I can go make changes to the load balancer to remove HTTP if I want, and ingress doesn't know about that.

Maybe ports should not even be shown in the kubectl get ingress output. What are you expecting to see as the value for ports?

Maybe it should just be TLS Y/N instead, something like this:

# kubectl get ing
NAME       HOSTS     ADDRESS          TLS     AGE
helloweb   *         35.227.248.205   Y       4m

Or

# kubectl get ing
NAME       HOSTS     ADDRESS          TLS                  AGE
helloweb   *         35.227.248.205   yourdomain-tls       4m

@ahmetb
Member Author

ahmetb commented Apr 30, 2020

@brianpursley please refer to my initial comment.

annotations:
  kubernetes.io/ingress.allow-http: "false" # disable HTTP

Without this annotation you see both 80 & 443. With this annotation, the apiserver no longer returns 80. Why kubectl shows “80, 443” regardless was my original issue report.

@brianpursley
Member

OK, I think I see what you mean. Admittedly I'm not too familiar with ingress, but I'm trying to help close out some of these bugs. I'll see what I can learn about that annotation, and whether it can be used to fix the issue.

@brianpursley
Member

brianpursley commented Apr 30, 2020

Kubectl isn't deciding what data to display, it is just showing what comes back from the API. If you do a kubectl get ingress -v9 you can see what comes back from the API.

Incorrect. Here's my response body, no port 80

{"kind":"Ingress","apiVersion":"extensions/v1beta1","metadata":{"name":"helloweb","namespace":"default","selfLink":"/apis/extensions/v1beta1/namespaces/default/ingresses/helloweb","uid":"57126f07-2d9f-4d8a-9bd4-a2a9679fa285","resourceVersion":"101690245","generation":4,"creationTimestamp":"2020-04-29T16:56:32Z","labels":{"app":"hello"},"annotations":{"ingress.kubernetes.io/backends":"{\"k8s-be-31964--83021845813e7945\":\"Unknown\"}","ingress.kubernetes.io/https-forwarding-rule":"k8s-fws-default-helloweb--83021845813e7945","ingress.kubernetes.io/https-target-proxy":"k8s-tps-default-helloweb--83021845813e7945","ingress.kubernetes.io/ssl-cert":"k8s-ssl-ad62a64410ec665a-06d6c289c9f90a6e--83021845813e7945","ingress.kubernetes.io/url-map":"k8s-um-default-helloweb--83021845813e7945","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{\"kubernetes.io/ingress.allow-http\":\"false\"},\"labels\":{\"app\":\"hello\"},\"name\":\"helloweb\",\"namespace\":\"default\"},\"spec\":{\"backend\":{\"serviceName\":\"helloweb-backend\",\"servicePort\":443},\"tls\":[{\"secretName\":\"yourdomain-tls\"}]}}\n","kubernetes.io/ingress.allow-http":"false"},"finalizers":["networking.gke.io/ingress-finalizer"]},"spec":{"backend":{"serviceName":"hello","servicePort":443},"tls":[{"secretName":"yourdomain-tls"}]},"status":{"loadBalancer":{"ingress":[{"ip":"35.241.55.240"}]}}}

Actually, I am correct about this (at least for recent versions of kubectl), read ahead...

When I do kubectl get ingress test-ingress2 -v9, the response body that comes back from the server is a Table, not an Ingress, and the table contains the ports 80, 443:

I0430 16:52:26.158272   30318 request.go:1068] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"selfLink":"/apis/extensions/v1beta1/namespaces/default/ingresses/test-ingress2","resourceVersion":"4801"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Hosts","type":"string","format":"","description":"Hosts that incoming requests are matched against before the ingress rule","priority":0},{"name":"Address","type":"string","format":"","description":"Address is a list containing ingress points for the load-balancer","priority":0},{"name":"Ports","type":"string","format":"","description":"Ports of TLS configurations that open","priority":0},{"name":"Age","type":"string","format":"","description":"CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata","priority":0}],"rows":[{"cells":["test-ingress2","*","","80, 443","7m50s"],"object":{"kind":"PartialObjectMetadata","apiVersion":"meta.k8s.io/v1","metadata":{"name":"test-ingress2","namespace":"default","selfLink":"/apis/extensions/v1beta1/namespaces/default/ingresses/test-ingress2","uid":"b7a50544-7588-465c-96ed-95a2b8c59ea7","resourceVersion":"4801","generation":1,"creationTimestamp":"2020-04-30T20:44:36Z","annotations":{"kubernetes.io/ingress.allow-http":"false"},"finalizers":["networking.gke.io/ingress-finalizer"]}}}]}

I suspect the reason you saw something different is that you're using an older version of kubectl. I pulled down kubectl v1.10.2 (what you said you were using in the original issue description) and saw a response body similar to what you posted: an Ingress JSON, not a Table.

I will go ahead and fix this via a PR, but it looks like it is going to be a server-side fix, not something that using a new kubectl will solve.

@ahmetb
Member Author

ahmetb commented Apr 30, 2020

I actually posted that response with v1.18.2, so I don't think anything's wrong on my side. :) That's exactly the JSON I got in the response body (plus, why would the server return a different result based on kubectl version?). I think this is still a client-side thing.

@brianpursley
Member

I actually posted that response with v1.18.2, so I don't think anything's wrong on my side. :) That's exactly the JSON I got in the response body (plus, why would the server return a different result based on kubectl version?). I think this is still a client-side thing.

The server doesn't handle specific kubectl versions differently, but kubectl tells the server how it wants the response. For example...

Kubectl 1.10.2's request, with Accept: application/json returns Ingress JSON

I0430 18:19:18.452490   32374 round_trippers.go:386] curl -k -v -XGET  -H "Accept: application/json" -H "User-Agent: kubectl/v1.10.2 (linux/amd64) kubernetes/81753b1" https://104.198.191.92/apis/extensions/v1beta1/namespaces/default/ingresses/test-ingress2

kubectl 1.18.0's request, with Accept: application/json;as=Table;... returns Table JSON

I0430 18:20:40.808104   32409 round_trippers.go:423] curl -k -v -XGET  -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" -H "User-Agent: kubectl/v1.18.0 (linux/amd64) kubernetes/9e99141" 'https://104.198.191.92/apis/extensions/v1beta1/namespaces/default/ingresses/test-ingress2'

If you're talking to an older Kubernetes server with a newer kubectl, I'm guessing it will just ignore the as=Table... part of the Accept header because it doesn't know about it yet. So that's my guess as to why it returns Ingress JSON to you and Table JSON to me, even with the same kubectl version.

So depending on your server version, it is either a server-side or a client-side thing, but I guess the good news is that a PR for this WILL probably solve the issue in your case after all, because kubectl will be able to format the table client-side.
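
For reference, the negotiation is easy to reproduce outside kubectl. A minimal sketch with plain net/http (the server URL and token are placeholders; TLS verification is skipped the same way curl -k does):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholders: in practice these come from your kubeconfig.
	const server = "https://104.198.191.92"
	const token = "<bearer-token>"

	req, err := http.NewRequest("GET",
		server+"/apis/extensions/v1beta1/namespaces/default/ingresses/helloweb", nil)
	if err != nil {
		panic(err)
	}
	// The same Accept header kubectl 1.18 sends. Drop the "as=Table" parts
	// and a server that supports Table rendering returns raw Ingress JSON instead.
	req.Header.Set("Accept",
		"application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json")
	req.Header.Set("Authorization", "Bearer "+token)

	// Equivalent of curl -k; demo only.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // {"kind":"Table",...} on servers new enough to render tables
}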

@brianpursley
Member

brianpursley commented Jul 12, 2020

I ended up closing my PR (kubernetes/kubernetes#90658) because it seems that a more comprehensive approach should be taken to display the ingress ports.

Here is a recap of what I learned while working on it:

The problem is that formatPorts() assumes either 80 or 80, 443 and nothing else. If you have HTTPS only, it still says 80, 443 because the table printer has that hard-coded as the output whenever TLS is configured.

func formatPorts(tls []networking.IngressTLS) string {
	if len(tls) != 0 {
		return "80, 443"
	}
	return "80"
}

However there are many different ingress controllers, and they each can specify ports in their own way.

For example, GKE ingress uses the kubernetes.io/ingress.allow-http annotation to disable HTTP, and only uses ports 80 and/or 443.

But the AWS ALB ingress controller uses the alb.ingress.kubernetes.io/listen-ports annotation to specify any number of ports for HTTP and HTTPS.

I think it is safe to say the table printer should not be assuming which ports an ingress uses, but should instead obtain those ports somehow. I'm not sure off-hand from where, so I will leave it as TBD how the ports used by an ingress can be obtained.
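
Just to make one of the partial options concrete (and to show why it isn't a general fix), here is a sketch of a formatPorts variant that honors only the GKE kubernetes.io/ingress.allow-http annotation; controllers like the ALB one, with their own annotations, would still be shown incorrectly:

// Sketch only: respects the GKE-specific kubernetes.io/ingress.allow-http
// annotation, but still guesses for every other ingress controller.
func formatPorts(annotations map[string]string, tls []networking.IngressTLS) string {
	httpDisabled := annotations["kubernetes.io/ingress.allow-http"] == "false"
	switch {
	case len(tls) != 0 && httpDisabled:
		return "443"
	case len(tls) != 0:
		return "80, 443"
	default:
		return "80"
	}
}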

I'm going to unassign myself from this issue (at least for now) so it doesn't discourage someone else from picking this up and working on it if they want to give it a try.

/unassign

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 10, 2020
@brianpursley
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 11, 2020
@prakash-26790

The same issue exists in v1.19.0 as well. I'm using AWS ALB.

kubectl get ing
NAME   HOSTS   ADDRESS                 PORTS   AGE
app    *       abc.elb.amazonaws.com   80      15h

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 2, 2021
@pacoxu
Member

pacoxu commented Mar 2, 2021

The Ports column here only reflects whether there are TLS configurations:

  • if it has TLS entries, it returns 80, 443
  • if it doesn't have TLS entries, it returns 80

Source code in https://github.com/kubernetes/kubernetes/blob/release-1.20/pkg/printers/internalversion/printers.go#L1193-L1210

{Name: "Ports", Type: "string", Description: "Ports of TLS configurations that open"},

func formatPorts(tls []networking.IngressTLS) string {
	if len(tls) != 0 {
		return "80, 443"
	}
	return "80"
}

Most likely it's port 80 for HTTP and port 443 for TLS. I'd prefer something like:

// port 80 for HTTP, port 443 for TLS
func formatPorts(tls []networking.IngressTLS) string {
	if len(tls) != 0 {
		return "443/TLS"
	}
	return "80/HTTP"
}

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 1, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
