Kube-proxy scaling/perf #1277
Comments
Has there been any progress? |
We've had no reports of actual trouble, though there are some obvious […]
|
@thockin No. I just want to know whether kube-proxy is better than Docker's iptables forwarding, so I am also prepared to test kube-proxy's performance. |
Funny, I was just thinking about proxy performance today. As for better or […]
|
Fwiw, I had set up my web app on Kubernetes this past week. I'm using GCE. I had a load balancer pointed at two Kubernetes Minions, both running HAProxy doing SSL termination; HAProxy then routed requests to the web and API services. I switched off it yesterday because page loads were somewhat slower (by perhaps 100ms) and I kept getting lots of ERR_CONTENT_LENGTH_MISMATCH errors in Chrome. One page has 10-15 images on it, and at least one or two of them failed to load every time. I switched instead to a single CoreOS box with HAProxy again routing to Docker containers. Load times have improved and the content length mismatch errors haven't returned. |
Here are some measurements I did today between two hosts (Host A and Host B) with a ~250Mbps network link between them. The machines are bare metal and each runs a pod (Pod A on Host A and Pod B on Host B). I use flannel for the overlay network. During the whole test, Pod B runs a netcat server that accepts connections and discards whatever it receives:
Transfer from Pod A to Pod B:
Result (from Pod B's point of view):
Transfer from Host A to Pod B:
Result (from Pod B's point of view):
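For anyone wanting to reproduce a rough version of this test, here is a minimal sketch in Go (not the exact netcat commands used above): a TCP sink that discards incoming data plus a sender that reports throughput. The port and addresses are placeholders, not values from the original setup.

```go
// Minimal throughput sketch: a TCP sink that discards everything it receives
// (netcat-style) and a sender that reports MB/s. Run "sink" inside Pod B,
// then point the sender at Pod B's IP from Pod A or Host A.
package main

import (
	"fmt"
	"io"
	"net"
	"os"
	"time"
)

// sink accepts one connection and discards all incoming data.
func sink(port string) error {
	ln, err := net.Listen("tcp", ":"+port)
	if err != nil {
		return err
	}
	conn, err := ln.Accept()
	if err != nil {
		return err
	}
	defer conn.Close()
	_, err = io.Copy(io.Discard, conn)
	return err
}

// send writes totalMB megabytes to addr and prints the achieved throughput.
func send(addr string, totalMB int) error {
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		return err
	}
	defer conn.Close()
	buf := make([]byte, 64*1024)
	start := time.Now()
	sent := 0
	for sent < totalMB*1024*1024 {
		n, err := conn.Write(buf)
		if err != nil {
			return err
		}
		sent += n
	}
	secs := time.Since(start).Seconds()
	fmt.Printf("%.1f MB/s over %.1fs\n", float64(sent)/secs/1e6, secs)
	return nil
}

func main() {
	// Usage: "prog sink 9000" on the receiver, "prog 10.244.2.5:9000" on the sender.
	if os.Args[1] == "sink" {
		if err := sink(os.Args[2]); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	} else if err := send(os.Args[1], 1000); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Note that this only measures the pod-to-pod path (flannel/veth), the same thing the original commands did; it does not go through kube-proxy.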
|
Two things: first, this does not test kube-proxy at all. Second, this is […]
|
I should have been clearer, sorry :) Pod-to-pod traffic does not cross the proxy - this is just exercising the overlay/veth path. The veth throughput is known to be a performance issue - it's one we've been […]
|
You are right, I should have created a Service to test the proxy code. Should I create an issue for my measurements then? Most veth benchmarks still get results way over what I see here. |
If you think there is a problem, go ahead and open an issue, and we can […]
|
Also see #3760 for a performance optimization of kube-proxy. |
A benchmark would be useful. I had a chat with a user today about proxy performance problems. |
This is an extremely old issue and a lot has been done since then. Can we close it? |
It's okay with me. |
We should measure and (maybe) optimize kube-proxy at some point. There are some fairly easy opportunities to make it more efficient.
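For context on where that inefficiency comes from: at the time, kube-proxy forwarded service traffic in user space. The sketch below (not kube-proxy's actual code, and with placeholder addresses) shows the basic pattern - accept on a service port, dial a backend pod, and copy bytes both ways through the proxy process - which is why every proxied byte pays an extra round trip through user space compared with in-kernel forwarding such as iptables DNAT.

```go
// Hedged sketch of a userspace TCP proxy, illustrating the copy overhead
// discussed above; this is not kube-proxy's actual implementation.
package main

import (
	"io"
	"log"
	"net"
)

func proxy(listenAddr, backendAddr string) error {
	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		return err
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			return err
		}
		go func(client net.Conn) {
			defer client.Close()
			backend, err := net.Dial("tcp", backendAddr)
			if err != nil {
				log.Printf("dial backend: %v", err)
				return
			}
			defer backend.Close()
			// Each io.Copy reads into a user-space buffer and writes it back
			// out, so every byte is copied through the proxy process.
			go io.Copy(backend, client)
			io.Copy(client, backend)
		}(client)
	}
}

func main() {
	// Placeholder addresses: a "service" port proxied to one backend pod.
	log.Fatal(proxy(":8080", "10.244.1.5:80"))
}
```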