"get ingress" still shows port 80 even though it is HTTPS only. #500
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
@ahmetb Can you do a describe on that ingress to get more detail? /kind bug
@seans3 describe output is already in the original issue posting.
From the specifications laid out here, this is working as expected since it has a TLS section in the Ingress YAML.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
According to @code-sleuth, this is not a bug. More triage needed.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
No, it's not fixed in v1.18.2, so the speculations must be incorrect. :) Port 80 is still there with the provided repro:

kubectl get ingress
NAME       HOSTS   ADDRESS         PORTS     AGE
helloweb   *       35.241.55.240   80, 443   3m23s
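For what it's worth, a quick way to double-check which of those ports the load balancer actually answers on is a plain TCP dial. This is just my own sketch for reference (the IP is the one from the output above), not something that was posted in the thread:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the `kubectl get ingress` output above.
	addr := "35.241.55.240"
	for _, port := range []string{"80", "443"} {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(addr, port), 3*time.Second)
		if err != nil {
			// With an HTTPS-only ingress, port 80 is expected to fail here
			// even though `kubectl get ingress` reports it as "80, 443".
			fmt.Printf("port %s: not reachable (%v)\n", port, err)
			continue
		}
		conn.Close()
		fmt.Printf("port %s: open\n", port)
	}
}
```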
Incorrect. Here's my response body, with no port 80:
Thanks @ahmetb
/assign

EDIT: It looks like the ports shown in …
@brianpursley please refer to my initial comment.
Without this annotation you see both 80 & 443. With this annotation the apiserver no longer returns 80. So my original issue report was: why show "80, 443" regardless?
Ok, I think I see what you mean. I'm admittedly not too familiar with ingress, but I'm trying to help close out some of these bugs. I'll see what I can learn about that annotation, and whether it can be used to fix the issue.
Actually, I am correct about this (at least for recent versions of kubectl); read on... When I do
I suspect the reason you saw something different is that you're using an older version of kubectl. I pulled down kubectl v1.10.2 (what you said you were using in the original issue description) and saw a response body similar to what you posted: an Ingress JSON object, not a Table. I will go ahead and fix this via a PR, but it looks like it is going to be a server-side fix, not something that using a new kubectl will solve.
I actually posted that response with v1.18.2, so I don't think anything's wrong on my side. :) That's exactly the JSON I got in the response body (plus, why would the server return a different result based on kubectl version?). I think this is still a client-side thing.
The server doesn't handle specific kubectl versions differently, but kubectl tells the server how it wants the response. For example... Kubectl 1.10.2's request, with
kubectl 1.18.0's request, with
If you're making requests to an older Kubernetes server with a newer kubectl, I'm guessing it will just ignore the … So depending on your server version it is either a server-side or client-side thing, but I guess the good news is that a PR for this WILL probably actually solve the issue in your case after all, because kubectl will be able to format the table client-side.
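To make the negotiation a bit more concrete, here is a minimal client-go sketch (my own illustration, not code from this thread) of asking the API server for a server-rendered Table of ingresses. The exact Accept media type is what I believe newer kubectl sends, and the kubeconfig handling is simplified, so treat both as assumptions:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; assumes a reachable cluster that serves
	// networking.k8s.io/v1 (roughly v1.19+).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Older kubectl (e.g. v1.10) effectively asked for plain application/json
	// and printed client-side; newer kubectl adds the Table media type so the
	// server renders the columns (including PORTS) for it.
	raw, err := clientset.NetworkingV1().RESTClient().
		Get().
		Namespace("default").
		Resource("ingresses").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io, application/json").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}
```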
I ended up closing my PR (kubernetes/kubernetes#90658) because it seems that a more comprehensive approach should be taken to display the ingress ports. Here is a recap of what I learned while working on it: The problem is that
However, there are many different ingress controllers, and they can each specify ports in their own way. For example, GKE ingress uses the …

I think it is safe to say the table printer should not be assuming which ports an ingress uses, but should instead get those ports from somewhere. I'm not sure off-hand where that would be, so I will leave it as TBD how the ports used by an ingress can be obtained.

I'm going to unassign myself from this issue (at least for now) so it doesn't discourage someone else from picking this up and working on it if they want to give it a try.

/unassign
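To make the recap concrete, here is a rough Go sketch of the kind of ports formatter being discussed. It is not the actual printers.go code and not the closed PR; the kubernetes.io/ingress.allow-http annotation is used purely as an assumed example of a controller-specific hint (here, a GKE-style one) that a generic printer would otherwise have no way to know about:

```go
package main

import (
	"fmt"
	"strings"

	networkingv1 "k8s.io/api/networking/v1"
)

// formatIngressPortsSketch only reports 80 when plain HTTP appears to be
// enabled and 443 when a TLS section is present, instead of hard-coding 80.
func formatIngressPortsSketch(ing *networkingv1.Ingress) string {
	var ports []string
	// Assumed GKE-style hint; other controllers signal HTTP/HTTPS behaviour
	// differently, which is exactly why the generic table printer struggles.
	if ing.Annotations["kubernetes.io/ingress.allow-http"] != "false" {
		ports = append(ports, "80")
	}
	if len(ing.Spec.TLS) > 0 {
		ports = append(ports, "443")
	}
	if len(ports) == 0 {
		return "<none>"
	}
	return strings.Join(ports, ", ")
}

func main() {
	ing := &networkingv1.Ingress{}
	ing.Annotations = map[string]string{"kubernetes.io/ingress.allow-http": "false"}
	ing.Spec.TLS = []networkingv1.IngressTLS{{SecretName: "helloweb-tls"}} // hypothetical secret name
	fmt.Println(formatIngressPortsSketch(ing)) // prints "443"
}
```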
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
The same issue exists in v1.19.0 as well. I'm using AWS ALB.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Source code in https://github.com/kubernetes/kubernetes/blob/release-1.20/pkg/printers/internalversion/printers.go#L1193-L1210
Most likely it's port 80 for HTTP and port 443 for TLS. I might prefer something like:
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Kubernetes version (use kubectl version): v1.10.2
Environment:
I have an HTTPS-only ingress that shows port 80 in kubectl get (kubectl v1.10.2):
I think port 80 actually isn't open:
describe output:
My YAML:
This looks like a kubectl bug, in that it just assumes port 80 is open for all ingresses?