Quota 'SUBNETWORKS' exceeded in e2e tests #46713
/cc @fejta Could you please assign it to an appropriate person?
/assign
the kubemark project looks normal, but the etcd project is flooded; is the etcd suite using excessive resources?
/remove-sig testing Looks like we are leaking SUBNETWORKS, see http://prow.k8s.io/?type=presubmit&job=pull-kubernetes-e2e-gce-etcd3
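(For context, not from the thread: one way to check whether a project really is running up against the SUBNETWORKS quota is to compare the subnetwork count against the project-level quota. A minimal sketch using the gcloud CLI from Python; the project name is a placeholder.)

```python
# Context sketch (not part of this thread): compare a GCE project's subnetwork
# count against its SUBNETWORKS quota via the gcloud CLI. Project name is a placeholder.
import json
import subprocess

def gcloud_json(*args):
    out = subprocess.run(["gcloud", "compute", *args, "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

def subnetwork_usage(project):
    subnets = gcloud_json("networks", "subnets", "list", "--project", project)
    info = gcloud_json("project-info", "describe", "--project", project)
    limit = next(q["limit"] for q in info["quotas"] if q["metric"] == "SUBNETWORKS")
    return len(subnets), limit

if __name__ == "__main__":
    used, limit = subnetwork_usage("my-e2e-project")  # placeholder project
    print(f"SUBNETWORKS: {used} used, limit {limit:.0f}")
```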
I don't think we are leaking: each network creates 8 subnets, plus there are 8 default ones, and we are running 12 instances, so the total number of subnets will be 13 * 8 = 104 > 100; the 12th job will always fail. Probably just need to bump the quota a little bit more.
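(For reference, the quota math from the comment above as a quick sketch; the per-network subnet count, default-subnet count, and quota value are the figures quoted in that comment, treated here as assumptions.)

```python
# Back-of-the-envelope check of the quota math quoted above (all figures are
# the ones from the comment, not verified against the project).
SUBNETS_PER_NETWORK = 8
DEFAULT_SUBNETS = 8
SUBNETWORK_QUOTA = 100

runs = 12
needed = DEFAULT_SUBNETS + runs * SUBNETS_PER_NETWORK                    # 8 + 96 = 104
max_runs = (SUBNETWORK_QUOTA - DEFAULT_SUBNETS) // SUBNETS_PER_NETWORK   # 11
print(needed, needed > SUBNETWORK_QUOTA, max_runs)                       # 104 True 11
```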
hummm, I take it back; it should not be failing on every single PR there..
Is there a project with extra subnets? From the list above, it looks like 1/project. Also -- I disabled the only CI job that creates its own subnets directly.
it's 1 subnet per PR run per zone
Why are we creating subnets in all zones?
@bowei any idea?
and the ones not affected by the subnets are also failing - seems like some node timeout issue? /assign @pwittrock
I'm going to guess that the new us-west1-c zone that went live yesterday may have pushed this over the edge?
requested a bump; I'm more worried that the runs not hit by the subnets issue also failed, which seems like a separate issue.
I also would like to know this. Why does this project need so many subnets?
So it looks like it's auto-creating subnetworks in all zones because the network is created in auto mode. If we changed this to custom mode, we would only create the subnets we actually need. Either way, a quota increase would also fix this, it seems.
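(For context, not something the thread spells out: a GCE network created in auto mode automatically gets one subnetwork per region, while a custom-mode network only has the subnets you create explicitly. A minimal sketch with current gcloud flag names and hypothetical network/project names; the flag spelling may have differed at the time of this issue.)

```python
# Minimal sketch (hypothetical names, current gcloud flags): create a
# custom-mode network with a single subnet in one region, instead of an
# auto-mode network that gets a subnet in every region.
import subprocess

PROJECT = "my-e2e-project"  # placeholder

def gcloud(*args):
    subprocess.run(["gcloud", "compute", *args, "--project", PROJECT], check=True)

# Auto mode (what burns through the SUBNETWORKS quota fastest):
#   gcloud("networks", "create", "e2e-net", "--subnet-mode=auto")

# Custom mode: only the subnets created explicitly exist.
gcloud("networks", "create", "e2e-net", "--subnet-mode=custom")
gcloud("networks", "subnets", "create", "e2e-subnet",
       "--network=e2e-net", "--region=us-central1", "--range=10.128.0.0/20")
```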
quota is bumped, let's see if it fixes things
kicking off a test to try it: #46711
/assign @MrHohn Some network resources are still leaking; I'm manually running the janitor to clean them up. Seems PRs are piling up though.
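(For illustration only; this is not the actual janitor from kubernetes/test-infra. A minimal cleanup sketch that deletes leaked subnetworks by a hypothetical name prefix, using the gcloud CLI; project name and prefix are placeholders.)

```python
# Cleanup sketch (not the real test-infra janitor): delete subnetworks whose
# names match a hypothetical e2e prefix.
import json
import subprocess

def gcloud_json(*args):
    out = subprocess.run(["gcloud", "compute", *args, "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

def clean_subnets(project, prefix="e2e-"):
    for subnet in gcloud_json("networks", "subnets", "list", "--project", project):
        if not subnet["name"].startswith(prefix):
            continue
        region = subnet["region"].rsplit("/", 1)[-1]  # the API returns a region URL
        subprocess.run(["gcloud", "compute", "networks", "subnets", "delete",
                        subnet["name"], "--region", region,
                        "--project", project, "--quiet"], check=True)

if __name__ == "__main__":
    clean_subnets("my-e2e-project")  # placeholder project
```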
a sample log that's failing, but not due to the subnetwork quota:
kubernetes/test-infra#2902 should fix it, but we need to wait for a couple of runs
things are starting to pass, now waiting for the backlog to drain..
Also to clarify what was going on:
whoops, I was running the clean script from a different branch... now I'd expect the old subnets are all gone from the project, and subsequent runs should be fine.
seems stable now.
@krzyzacy: you can't close an issue unless you authored it or you are assigned to it. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
/close
E.g.
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/46700/pull-kubernetes-e2e-gce-etcd3/33214/
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/46700/pull-kubernetes-kubemark-e2e-gce/32814/
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/46696/pull-kubernetes-kubemark-e2e-gce/32813/