Revert "[FG:InPlacePodVerticalScaling] Graduate to Beta" #128875
Revert "[FG:InPlacePodVerticalScaling] Graduate to Beta" #128875
Conversation
/test pull-kubernetes-node-kubelet-serial-containerd
/test pull-kubernetes-e2e-capz-windows-master
/kind bug
@cpanato: The provided milestone is not valid for this repository. Milestones in this repository: […] Use […] In response to this: […]
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/milestone v1.32
/triage accepted
/lgtm
Collecting issues to resolve before re-enabling this: […]
@aojea thanks for listing those! Let's watch the upcoming runs and see what else is hiding behind this one. 🤞🏾 (I've kicked off a few runs.)
Was InPlacePodVerticalScaling Beta also the cause of the Windows CI pipeline failure? Checking https://testgrid.k8s.io/sig-release-master-informing#capz-windows-master , why did the Windows pipeline suddenly fail with the same commit? (last successful / first failed)
@liggitt I think it is worth adding this analysis to the list; I believe it explains a lot of the failures seen in the pipelines (related to the tests, not the InPlacePodVerticalScaling feature as such).
Answering myself :-) : yes, but it was the tests, not the feature itself. FYI, this run from https://prow.k8s.io/pr-history/?org=kubernetes&repo=kubernetes&pr=128880 passes without the revert: https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128880/pull-kubernetes-e2e-capz-windows-master/1859388054804893696 . Retriggered to double-confirm this. It seems the kubetest2 failure is the last of the three failures where we still have problems.
See https://github.com/kubernetes/kubernetes/pull/128880/files#r1850861491 and the PR for more information. The CI failure seems to be caused by the e2e pod cleanup logic, not the feature. ┓( ´∀` )┏
@pacoxu unfortunately the PR landed E_TOO_LATE_IN_THE_CYCLE for us to dig in like this... so let's do this one right in 1.33.
One more update: besides the cleanup, it seems that after InPlacePodVerticalScaling the timeout DeleteSync used in some tests was not sufficient; that timeout resulted in failures in the kubetest2 pipeline. The same commit shared by @pacoxu tries to test this theory, by using e2epod.DefaultPodDeletionTimeout (which is 3 minutes) instead.
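For illustration only, here is a minimal sketch of the timeout change being tested. It is not the actual failing test: the function name, pod name, and the 1-minute value are hypothetical, and the PodClient.DeleteSync signature is assumed from the e2e framework around this release; only e2epod.DefaultPodDeletionTimeout (3 minutes) is taken from the comment above.

```go
// Minimal sketch, assuming the e2e framework's PodClient.DeleteSync API;
// deletePodAndWait and podName are hypothetical names for illustration.
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/kubernetes/test/e2e/framework"
	e2epod "k8s.io/kubernetes/test/e2e/framework/pod"
)

func deletePodAndWait(ctx context.Context, f *framework.Framework, podName string) {
	podClient := e2epod.NewPodClient(f)

	// A tight per-test timeout, e.g.
	//   podClient.DeleteSync(ctx, podName, metav1.DeleteOptions{}, 1*time.Minute)
	// could be exceeded once InPlacePodVerticalScaling was enabled, failing
	// the Serial/kubetest2 jobs even though the pod was eventually deleted.

	// The theory above is tested by switching to the framework default
	// (e2epod.DefaultPodDeletionTimeout, i.e. 3 minutes) instead.
	podClient.DeleteSync(ctx, podName, metav1.DeleteOptions{}, e2epod.DefaultPodDeletionTimeout)
}
```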
I agree it is unfortunate that it hasn't made it; to be honest, it seems to me the issues were not rooted in InPlacePodVerticalScaling. I wish I had looked at testgrid earlier, but it seems we are close to fixing those.
Agree @esotsal. The good news is that there isn't any other breakage hiding behind this one; the bleeding has stopped: https://storage.googleapis.com/k8s-triage/index.html?job=.*Serial.*
Thanks all for holding the high quality bar. I'm disappointed that InPlacePodVerticalScaling won't make it into the v1.32 release, but I'd much rather have a smooth rollout!
Reverts #128682
https://storage.googleapis.com/k8s-triage/index.html?test=%5C%5Bsig-node%5C%5D.*Serial
Fixes #128783 #128874