iptables kube-proxy could handle UDP backend changes better #19029
With a UDP "connection" you can be sending packets to the old IP to your […] Take the proxy out of the picture: If you net.DialUDP to a non-existent […] I guess we could use […]
Yeah, this is basically my suggestion here. With the userspace kube-proxy in this scenario, as long as the proxy itself stays up, the endpoints can rotate out without affecting reachability. With iptables, when an endpoint rotates out, the socket will continue connecting to a dead endpoint. So that's a concrete downside of iptables mode that is pretty hard to figure out without a fairly involved debug session like the one I did for the OP, thus I think kube-proxy should try to do something about it.
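To make the failure mode concrete, here is a minimal Go sketch (my own illustration; the `10.0.0.10:53` Service VIP is made up) of the pattern being discussed: a connected UDP socket never learns that the backend behind the Service has gone away, because the kernel's conntrack entry keeps DNATing its packets to the pod IP chosen on the first write.

```go
// Hypothetical illustration of the failure mode discussed above.
// 10.0.0.10:53 stands in for a ClusterIP Service backed by UDP pods.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// DialUDP "connects" the socket: the destination (the Service VIP) is
	// fixed once, and every subsequent Write reuses it.
	raddr, err := net.ResolveUDPAddr("udp", "10.0.0.10:53")
	if err != nil {
		panic(err)
	}
	conn, err := net.DialUDP("udp", nil, raddr)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	buf := make([]byte, 512)
	for {
		// With iptables-mode kube-proxy, the first packet creates a conntrack
		// entry that DNATs the VIP to some pod IP. If that pod is later
		// deleted, the stale entry keeps steering these writes to the dead
		// IP, and Write still returns nil because UDP makes no delivery
		// guarantee.
		if _, err := conn.Write([]byte("ping")); err != nil {
			fmt.Println("write error:", err)
		}
		conn.SetReadDeadline(time.Now().Add(2 * time.Second))
		if _, err := conn.Read(buf); err != nil {
			// Reads just time out; nothing tells the client to re-dial.
			fmt.Println("read error:", err)
		}
		time.Sleep(time.Second)
	}
}
```

Nothing in that loop ever fails loudly; the client simply stops getting replies, which matches the debugging experience described here.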
Renaming to better reflect the issue.
Running into this issue with pretty serious consequences for DNS resolution. Some nodes in our cluster needed a reboot to update to a newer CoreOS version. One of the nodes was running a DNS pod. To minimize the effect on services, I first scaled the DNS replication controller to two instances. Then I rebooted the nodes one by one. After the update, I noticed that some services on nodes that didn't need a reboot had trouble resolving the new addresses of services on rebooted nodes. The problem appeared to be related to some stale connection tracking state on the node that routed DNS packets to the wrong/old DNS pod IP. The result was that services couldn't find the new addresses of pods on rebooted nodes since they were still querying an old DNS pod IP.
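One way to confirm that theory on an affected node is to dump the UDP conntrack entries for port 53 and look for ones whose reply-side address is the old DNS pod IP. A small sketch, assuming the conntrack-tools binary is installed and root privileges; the port filter and the idea of driving it from Go are my own choices:

```go
// Hypothetical helper for spotting stale UDP/53 conntrack entries on a node.
// Assumes the conntrack-tools binary is installed and this runs as root.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List UDP entries whose original destination port is 53. The reply-side
	// source address in each printed line is the pod IP that DNAT picked;
	// entries pointing at a deleted DNS pod IP are the stale state described
	// above.
	out, err := exec.Command("conntrack", "-L", "-p", "udp", "--dport", "53").CombinedOutput()
	if err != nil {
		fmt.Printf("conntrack failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
```

Manually deleting the matching entries (for example with `conntrack -D -p udp --orig-dst <service IP>`, assuming those flags exist in your conntrack version) unsticks clients immediately; the change merged below automates exactly that kind of flush.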
Yeah. I'd love to get a patch to handle this. I'm personally buried right now and there's no way I will get to this in the immediate future. This is a great community project - someone out there must be interested in networking stuff and wants to contribute. I'll also tag @freehan in case he has cycles, but this is not as high prio as the myriad other things I know you have going on, too.
I have cycles. I can take a look.
Minhan, is this something you're still hoping to look at, or overflowed?
I will submit a PR shortly.
Oh, fantastic. Way better answer than I expected.
Automatic merge from submit-queue: Flush conntrack state for removed/changed UDP Services. Fixes #19029.
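For readers landing here later, this is roughly the shape of the behaviour that was merged: when a UDP Service's endpoints are removed or changed, delete the conntrack entries whose original destination is that Service IP, so the next packet re-traverses the iptables rules and gets a live endpoint. The sketch below is a simplified stand-in, not the actual kube-proxy code; the function name and the handling of conntrack's exit status are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// clearUDPConntrackForIP is a hypothetical stand-in for what the merged fix
// does conceptually: drop conntrack state for a UDP Service IP so new packets
// are re-DNATed to the current endpoints.
func clearUDPConntrackForIP(ip string) error {
	out, err := exec.Command("conntrack", "-D", "--orig-dst", ip, "-p", "udp").CombinedOutput()
	if err != nil {
		// Assumption: conntrack exits non-zero when no entries matched;
		// that case is not an error for our purposes.
		if strings.Contains(string(out), "0 flow entries have been deleted") {
			return nil
		}
		return fmt.Errorf("conntrack -D failed for %s: %v: %s", ip, err, out)
	}
	return nil
}

func main() {
	// Example: flush state for a (made-up) DNS Service ClusterIP.
	if err := clearUDPConntrackForIP("10.0.0.10"); err != nil {
		fmt.Println(err)
	}
}
```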
Still happens to me, at least in Node.js. Each time I recreate pods that are part of a UDP Service, I also have to restart the Node.js pods. Using […]
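Not advice from the thread, but for anyone stuck on older versions: a less disruptive client-side mitigation than restarting the whole pod is to close and re-dial the UDP socket when reads keep timing out, since a new ephemeral source port means a fresh conntrack entry and therefore a freshly chosen endpoint. Sketched in Go to match the rest of this thread; the address is made up:

```go
// Hypothetical client-side mitigation: re-dial the UDP socket per attempt so
// a stale conntrack entry cannot pin the client to a dead endpoint.
package main

import (
	"fmt"
	"net"
	"time"
)

func query(addr string) error {
	conn, err := net.Dial("udp", addr)
	if err != nil {
		return err
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("ping")); err != nil {
		return err
	}
	conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	buf := make([]byte, 512)
	_, err = conn.Read(buf)
	return err // a timeout here may mean a stale conntrack entry
}

func main() {
	addr := "10.0.0.10:53" // made-up Service VIP
	for {
		if err := query(addr); err != nil {
			// Because the socket is closed and re-dialed each attempt, the
			// next try gets a new ephemeral port and is NATed afresh.
			fmt.Println("query failed, retrying:", err)
		}
		time.Sleep(time.Second)
	}
}
```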
@shamil Can you please open a new issue and post a repro case, as simple as you can make it. Thanks @girishkalele @kubernetes/sig-network
@thockin @shamil |
I encountered a strange issue after opting in to kube-proxy iptables support.
Steps to repro (I think):
The only solution I can see is that kube-proxy, when it rewrites iptables rules based on endpoint changes, needs to reset connections between local sockets and destroyed endpoints.
This didn't happen with the userspace kube-proxy, because kube-proxy was accepting the packets locally regardless of the endpoints, and would always use the latest endpoints information to forward the packet on.
Sorry for the long bug report, but I think it should be pretty clear by now if you've made it here. :)
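To illustrate the userspace behaviour described above, here is a toy sketch (my own simplification, not kube-proxy's code; the listen port and pod IP are made up): the proxy terminates each packet locally and picks a backend from the current endpoint list per packet, so endpoint churn never strands a client socket.

```go
package main

import (
	"log"
	"net"
	"sync/atomic"
	"time"
)

// endpoints stands in for the endpoint list kube-proxy keeps in sync with the
// API server; pretend it is updated elsewhere when pods come and go.
var endpoints atomic.Value // holds []string

func pickEndpoint() string {
	eps := endpoints.Load().([]string)
	return eps[0] // a real proxy load-balances; keep it trivial here
}

func main() {
	endpoints.Store([]string{"10.244.1.5:53"}) // made-up pod IP

	laddr, err := net.ResolveUDPAddr("udp", ":10053")
	if err != nil {
		log.Fatal(err)
	}
	ln, err := net.ListenUDP("udp", laddr)
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()

	buf := make([]byte, 65535)
	for {
		n, client, err := ln.ReadFromUDP(buf)
		if err != nil {
			continue
		}
		// Key point: the backend is chosen per packet from the *current*
		// endpoint list, so a deleted pod simply stops being selected.
		backend, err := net.Dial("udp", pickEndpoint())
		if err != nil {
			continue
		}
		backend.Write(buf[:n])
		// Relay a single reply back to the client (enough for DNS-style
		// request/response; a real proxy handles this more generally).
		backend.SetReadDeadline(time.Now().Add(2 * time.Second))
		reply := make([]byte, 65535)
		if m, err := backend.Read(reply); err == nil {
			ln.WriteToUDP(reply[:m], client)
		}
		backend.Close()
	}
}
```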