immediately close the other side of the connection when proxying #67288
Conversation
/test pull-kubernetes-integration
/assign @deads2k
/LGTM
FYI @mikebrow @Random-Liu Not sure if there is a better node-based fix from CRI pushing events to close upwards. Thanks for talking through this with me.
/kind bug
Nice!
nit: since we have a log for creating the connection, maybe we could also log closing the connection in the defer?
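A minimal sketch of that suggestion, using the standard library logger and a hypothetical function name rather than the actual proxy code:

```go
package main

import "log"

// connectBackend is a hypothetical stand-in for the proxy code path:
// the deferred line guarantees every "connecting" log entry gets a
// matching "closing" entry, even on error paths.
func connectBackend(url string) {
	log.Printf("Connecting to backend proxy %s", url)
	defer log.Printf("Closing backend proxy connection %s", url)

	// ... stream traffic until one side of the connection closes ...
}

func main() {
	connectBackend("wss://node:10250/exec")
}
```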
/assign @caesarxuchao
@mikebrow before and after the wait? One side or the other specifically?
Well yes, but I was specifically looking at these "Connecting to backend proxy" log entries: https://github.com/kubernetes/kubernetes/pull/67288/files?utf8=✓&diff=unified#diff-16a26bb0d342318c6eb3d03e6c5756e9R257 and thinking we should have a reciprocal closing message. So I suppose a message after they are closed, balancing the "before it's opened" message, would be the minimum. And yeah, it would also be nice to have a before-wait and after-wait message, but that might be overkill.
/assign @lavalamp
I have a stupid question, from #57992 (comment),
Why is that? Shouldn't both the reader and writer close on both apiserver and kubelet?
@caesarxuchao I agree, but what mechanism would do that? Nothing seems to be broken, and my scenario is fixed during my testing.
- Don't wait for both sides of the connection to close on their own. If one closes, there doesn't seem to be any point in continuing the other half.
- For port-forward, this prevents the apiserver and kubelet from both waiting on the other with no result. Normally the connection eventually times out; this cleans up the connection appropriately.
- Add a log message on closing the connection that corresponds to opening the connection.
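A minimal sketch of that pattern, assuming plain net.Conn endpoints; the real code proxies SPDY/WebSocket streams, so all names here are illustrative:

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// proxy copies bytes in both directions but tears everything down as
// soon as either direction finishes, instead of waiting for both
// halves of the connection to close on their own.
func proxy(client, backend net.Conn) {
	done := make(chan struct{}, 2) // buffered so neither goroutine blocks

	go func() {
		io.Copy(backend, client) // returns on EOF or error
		done <- struct{}{}
	}()
	go func() {
		io.Copy(client, backend)
		done <- struct{}{}
	}()

	<-done // one side finished; no point continuing the other half
	client.Close()
	backend.Close()
	fmt.Println("closing backend proxy connection") // mirrors the "connecting" log line
}

func main() {
	clientSide, clientApp := net.Pipe()
	backendSide, _ := net.Pipe()

	go clientApp.Close()           // the client hangs up...
	proxy(clientSide, backendSide) // ...and proxy returns promptly instead of waiting on the backend
}
```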
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: caesarxuchao, duglin, MHBauer. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/test all [submit-queue is verifying that this PR is safe to merge]
/test pull-kubernetes-e2e-gce-100-performance
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.
So ... Let's say I have this exchange:
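A sketch of the kind of exchange in question, assuming a made-up host: an HTTP/1.0-style client sends its request, half-closes its write side, and still expects the reply on the read side.

```go
package main

import (
	"fmt"
	"io"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		panic(err)
	}

	// Send a complete HTTP/1.0 request, then close only our write side.
	fmt.Fprint(conn, "GET /slow HTTP/1.0\r\nHost: example.com\r\n\r\n")
	conn.(*net.TCPConn).CloseWrite() // half-close: the server sees EOF on its read side

	// Our read side is still open; the question is whether the proxy
	// keeps it open long enough for the reply to arrive.
	reply, _ := io.ReadAll(conn)
	fmt.Printf("%s", reply)
}
```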
Will the connection still be open for the reply? I haven't built it and tried yet, but I gather from the description and the patch that this will kill both directions of the connection when either closes, breaking the above use of not-HTTP-keepalive. Is that correct? Fwiw, in an ssh /
@MHBauer I thought the io.Copy would only return after the request connection or the backend connection is closed, in which case the PR was right. But io.Copy returns at EOF, so @anguslees's comment makes sense.
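For reference, a tiny demonstration that io.Copy returns (with a nil error) as soon as its source hits EOF, not when the whole connection is torn down:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	var dst strings.Builder
	// io.Copy stops at the reader's EOF and reports err == nil,
	// so a half-closed connection ends the copy loop immediately.
	n, err := io.Copy(&dst, strings.NewReader("request body"))
	fmt.Println(n, err) // prints: 12 <nil>
}
```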
@anguslees a sort pipe like this? Something else?
Just re-read and saw the exec. That also works with the busybox pod running with an open sh in another terminal.
Do you have a test case? I'm not sure I fully understand.
Sorry for the delay, it took me a while to find a good test client that actually made a TLS HTTP/1.0 request with half-close (it appears curl can't be made to do this). I constructed a full test scenario that generated an HTTP/1.0 TLS request with an early close to a `/api/.../proxy` URL for a job that introduced a delay before responding ... and found that my nearby k8s 1.8 (old!) test cluster already "failed" my test. Then I went back and re-read the HTTP/1.0 spec and found that this was acceptable behaviour ("the closing of the connection by either or both parties always terminates the current request, regardless of its status."). So: my concern about doing this at the HTTP level was invalid, and your testing above confirms that there's no problem at the tunneled-over-websocket level. TL;DR: lgtm! And sorry for the distraction.
@MHBauer I added a release note to this PR because we need one for the cherry-pick. Please see if that looks good.
…-#67288-upstream-release-1.9 Automated cherry pick of #67288: Immediately close the other side of the connection when proxying
@anguslees thanks for the follow-up. @mbohlool release note looks fine. Sorry for the holdup in responding.
Reading the cherry-pick discussions, this isn't a user-facing change. There should be nothing different. It changes how server components interact with each other.
…-#67288-upstream-release-1.11 Automated cherry pick of #67288: Immediately close the other half of the connection when
…-#67288-upstream-release-1.10 Automated cherry pick of #67288: Immediately close the other half of the connection when
What this PR does / why we need it:
Close the other side of the connection by exiting once one side closes. This code is used in both the apiserver and the kubelet.
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #57992
Special notes for your reviewer:
Unit tests passed locally. Something similar is done in client-go/tools/portforward. Doesn't seem to break the tests.
Release note: