Endpoints object in kubemark can only have a single backend! #59823
/assign I'll take a further look into this.
I guess I found the reason. There is a validation step in the apiserver for Endpoints objects, which checks that an IP entry in the Endpoints object cannot have its nodeName overwritten. And this is precisely what is violated in kubemark (i.e. we're trying to override the nodeName for an IP). This is because in kubemark we're always using the same constant IP address for all our fake pods (which is 2.3.4.5), since the fake docker client sets a constant value for it (see https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/dockershim/libdocker/fake_client.go#L600). This is causing those clashes.
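To make the clash concrete, here is a minimal sketch of the kind of check described above. `validateNoNodeNameOverwrite` and `EndpointAddress` are hypothetical simplifications, not the actual apiserver code: the point is only that two hollow nodes reporting the same constant IP must conflict on nodeName.

```go
package main

import "fmt"

// EndpointAddress loosely mirrors the fields relevant to the clash:
// each endpoint IP carries the name of the node hosting the pod.
type EndpointAddress struct {
	IP       string
	NodeName string
}

// validateNoNodeNameOverwrite is a hypothetical stand-in for the apiserver
// validation: an IP already present in the Endpoints object may not
// reappear with a different nodeName.
func validateNoNodeNameOverwrite(addrs []EndpointAddress) error {
	seen := map[string]string{}
	for _, a := range addrs {
		if prev, ok := seen[a.IP]; ok && prev != a.NodeName {
			return fmt.Errorf("IP %s: nodeName %q conflicts with %q", a.IP, a.NodeName, prev)
		}
		seen[a.IP] = a.NodeName
	}
	return nil
}

func main() {
	// In kubemark every fake pod gets the same constant IP (2.3.4.5),
	// so pods scheduled on different hollow nodes collide here.
	addrs := []EndpointAddress{
		{IP: "2.3.4.5", NodeName: "hollow-node-1"},
		{IP: "2.3.4.5", NodeName: "hollow-node-2"},
	}
	fmt.Println(validateNoNodeNameOverwrite(addrs))
}
```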
Note that this means we haven't actually been populating Endpoints objects in kubemark all this while. And this now explains why we've been seeing such a difference in
IMO the correct solution for this is to fix our docker-client mock to actually assign different IPs to different containers. This can be done in at least a couple of ways:
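One simple way to do what the comment above suggests is to derive a distinct IP deterministically from a per-container counter. This is a hypothetical sketch, not the actual fix in the fake docker client; `uniqueIP` is an invented helper name.

```go
package main

import "fmt"

// uniqueIP maps a monotonically increasing container counter into the
// 10.x.y.z range, so no two containers share an address. A hypothetical
// sketch of how the fake docker client's constant IP could be replaced.
func uniqueIP(n int) string {
	return fmt.Sprintf("10.%d.%d.%d", (n>>16)&0xff, (n>>8)&0xff, n&0xff)
}

func main() {
	for i := 1; i <= 3; i++ {
		fmt.Println(uniqueIP(i)) // 10.0.0.1, 10.0.0.2, 10.0.0.3
	}
}
```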
I'll take up the implementation of this fix. cc @kubernetes/sig-node-bugs
Sent out the above PR implementing the first approach, as I think it is sufficient and the second one would be overkill.
@shyamjvs - great debugging! Unfortunately your fix doesn't seem to help.
Thanks - I'll take a look into it in a moment.
Seems like it did help a bit, but not completely - #59832 (comment)
Recently, profile collection was added to our scalability tests. While running the kubemark-500 presubmit, @wojtek-t noticed from the memory allocation profile that an unusually large amount of allocations (~10GB) was being done by
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/errors.aggregate.Error()
during the load test. See https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-kubemark-500-gce/11932/artifacts/profiles/ApiserverMemoryProfile_load.pdf
After he added some logging, I took a look at the apiserver logs, and it seems there is a huge number of such errors:
On digging a bit, I observed a few things:
PUT endpoints calls made by the ep-controller

@kubernetes/sig-scalability-misc @wojtek-t