GCE provider: Create TargetPool with 200 instances, then update with rest #27829
Conversation
This reverts commit faf0c44.
…rest
Tested with 2000 nodes, this actually meets the GCE API specifications (which is nutty). The previous PR (kubernetes#25178) was based on a mistaken understanding of a poorly documented set of limitations, and even poorer testing, for which I am embarrassed.
GCE e2e build/test passed for commit f63ac19.
LGTM - thanks!
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit f63ac19.
Automatic merge from submit-queue
Manual cherry pick of #27829 on release-1.2
@thockin ... I think I MIGHT be able to get ingress up. I set affinity rules so that the RC would only be on two nodes. @bprashanth is assisting. Fingers crossed.
@chrislovecnm: We'll also have a v1.2.5 that includes this PR soon.
v1.2.5 is now out!
@zmerlynn I am running 1.3 beta 2 and have 1009 nodes ... I am guessing I am going to hit a GCE error...
I thought this made it into |
@zmerlynn ping me on slack .. I am working with @bprashanth to figure out what is going on. May be user error. |
So the GCE API is blowing up on me:
E0628 02:26:40.427634 7 utils.go:103] Requeuing pet-race-ui/pet-race-ui, err googleapi: Error 403: Exceeded limit 'MAX_INSTANCES_IN_INSTANCE_GROUP' on resource 'k8s-ig--c9a6052e1676112c'. Limit: 1000.0, limitExceeded, unable to get loadbalancer: Loadbalancer pet-race-ui-pet-race-ui--c9a6052e1676112c not in pool
I0628 02:26:40.429150 7 event.go:216] Event(api.ObjectReference{Kind:"Ingress", Namespace:"pet-race-ui", Name:"pet-race-ui", UID:"fb01f977-3cd5-11e6-af56-42010a800002", APIVersion:"extensions", ResourceVersion:"57749081", FieldPath:""}): type: 'Warning' reason: 'GCE :Quota' googleapi: Error 403: Exceeded limit 'MAX_INSTANCES_IN_INSTANCE_GROUP' on resource 'k8s-ig--c9a6052e1676112c'. Limit: 1000.0, limitExceeded
E0628 02:26:41.827753 7 utils.go:103] Requeuing pet-race-ui/pet-race-ui, err googleapi: Error 403: Exceeded limit 'MAX_INSTANCES_IN_INSTANCE_GROUP' on resource 'k8s-ig--c9a6052e1676112c'. Limit: 1000.0, limitExceeded, unable to get loadbalancer: Loadbalancer pet-race-ui-pet-race-ui--c9a6052e1676112c not in pool
I0628 02:26:41.827937 7 event.go:216] Event(api.ObjectReference{Kind:"Ingress", Namespace:"pet-race-ui", Name:"pet-race-ui", UID:"fb01f977-3cd5-11e6-af56-42010a800002", APIVersion:"extensions", ResourceVersion:"57749081", FieldPath:""}): type: 'Warning' reason: 'GCE :Quota' googleapi: Error 403: Exceeded limit 'MAX_INSTANCES_IN_INSTANCE_GROUP' on resource 'k8s-ig--c9a6052e1676112c'. Limit: 1000.0, limitExceeded
That's a different type of lb. |
Hmm, for the record apparently either the fix isn't in beta.2, or Chris is not using beta.2, because he got:
I just glanced, and this PR didn't make |
@zmerlynn I think we have a bug as well. Backend services are not being removed upon deletion.
Please file an issue with details. This PR has already merged; there's no reason to continue the conversation here.
As soon as I hit enter I realized that. Need sleep ;) |
…5178 Manual cherry pick of kubernetes#27829 on release-1.2
GCE provider: Create TargetPool with 200 instances, then update with rest
Tested with 2000 nodes, this actually meets the GCE API specifications (which is nutty). The previous PR (#25178) was based on a mistaken understanding of a poorly documented set of limitations, and even poorer testing, for which I am embarrassed.
Also includes the revert of #25178 (review commits separately).
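For illustration only, here is a minimal, hypothetical Go sketch of the approach the description refers to: create the TargetPool with the first 200 instances, then add the remaining instances with follow-up AddInstance calls. This is not the code from this PR; it uses google.golang.org/api/compute/v1, and the chunk size constant, function name, project/region values, and instance URLs are assumptions made for the example.

```go
// Package gce contains a sketch of chunked TargetPool creation.
// Assumption: the GCE API rejects very large instance lists on a single
// Insert call, so we create the pool with an initial batch and add the
// rest with AddInstance requests.
package gce

import (
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

// maxInstancesPerCall is the batch size used in this sketch (the PR description
// mentions creating the pool with 200 instances, then updating with the rest).
const maxInstancesPerCall = 200

// createTargetPoolInChunks is a hypothetical helper, not the provider's API.
// instanceURLs are fully qualified GCE instance URLs.
func createTargetPoolInChunks(svc *compute.Service, project, region, name string, instanceURLs []string) error {
	first := instanceURLs
	if len(first) > maxInstancesPerCall {
		first = instanceURLs[:maxInstancesPerCall]
	}

	// Create the TargetPool with the first chunk of instances.
	pool := &compute.TargetPool{Name: name, Instances: first}
	if _, err := svc.TargetPools.Insert(project, region, pool).Do(); err != nil {
		return fmt.Errorf("creating target pool %q: %v", name, err)
	}

	// Add the remaining instances to the existing pool, one chunk per call.
	for i := maxInstancesPerCall; i < len(instanceURLs); i += maxInstancesPerCall {
		end := i + maxInstancesPerCall
		if end > len(instanceURLs) {
			end = len(instanceURLs)
		}
		refs := make([]*compute.InstanceReference, 0, end-i)
		for _, url := range instanceURLs[i:end] {
			refs = append(refs, &compute.InstanceReference{Instance: url})
		}
		req := &compute.TargetPoolsAddInstanceRequest{Instances: refs}
		if _, err := svc.TargetPools.AddInstance(project, region, name, req).Do(); err != nil {
			return fmt.Errorf("adding instances to target pool %q: %v", name, err)
		}
	}
	return nil
}
```

Whether the remainder is added in a single AddInstance call or in further chunks is a design choice; the sketch batches them at the same size for simplicity.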