[FG:InPlacePodVerticalScaling] pull-kubernetes-e2e-capz-windows-master test fail with InPlacePodVerticalScaling Beta #128897
I think this ticket is for investigating why in-place pod vertical scaling broke this job.
/triage accepted
This was the main reason I was +1 on the revert: I think we had a decent understanding of the test problem, but this one is perplexing to me.
/sig node windows
Correct, the kubetest2 failure discussed here is the other reason. We need to solve both of those problems before putting InPlacePodVerticalScaling back to beta. I strongly suspect the kubetest2 failure is related to the test code. /cc @tallclair @AnishShah
This run from https://prow.k8s.io/pr-history/?org=kubernetes&repo=kubernetes&pr=128880 passes without the revert: https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128880/pull-kubernetes-e2e-capz-windows-master/1859388054804893696
I ran some tests on Windows via #128876 and #128927: create a pod with a configmap volume and check the permissions of the volume. One run shows that the permissions of the volume are correct; the expected permission is
The other run shows that the permissions of the volume are incorrect; the expected permission is
We can see the difference in the access control lists. I don't know why the other tests affected the directory permissions. #128880 may not fix the root cause of the issue.
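For anyone reproducing this, here is a minimal sketch of such an ACL check from inside the Windows pod. The mount path `C:\etc\configmap-volume` is an assumption; substitute whatever path the test pod's volumeMounts actually uses.

```go
// Dump the Windows ACL of the mounted configmap volume so the output of a
// passing run can be diffed against a failing one. icacls is a standard
// Windows tool that prints the access control list for a path.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// NOTE: C:\etc\configmap-volume is a placeholder; use the actual
	// volumeMounts path from the test pod spec.
	out, err := exec.Command("icacls", `C:\etc\configmap-volume`).CombinedOutput()
	if err != nil {
		log.Fatalf("icacls failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}
```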
Thanks @carlory. Comparing the runs from your tests, I am a bit puzzled now, looking at your PR runs without IPPVS and your PR runs with IPPVS. If I understand correctly, you don't have successful runs with IPPVS, but in #128880 we had some green runs. Does this mean that it might not fix the root cause, but reduces the flakiness? In other words, is InPlacePodVerticalScaling not the root cause, but something that increases the number of flaky tests?
@esotsal Yes, it is. #128880 only changes the test code, not the feature itself. Users may encounter the same issue when their clusters enable IPPVS. To be honest, I'm very confused about why #128880 reduces the flakiness or resolves this issue, but the fact is that it does, because the CI is green again.
Though I'm not sure this is related to the failures, there are the following messages in kubelet.log in the failed cases:
It looks like kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go, lines 676 to 684 at 35d098a, is involved.
There may be something wrong with this feature on Windows.
Thanks @hshiina, this is definitely not correct; Windows is not supported for InPlacePodVerticalScaling! Perhaps #128936 might fix this? /cc @tallclair @vinaykul https://github.com/kubernetes/kubernetes/pull/128936/files You have just answered the question of this issue, @hshiina, thanks!
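I haven't verified the exact diff in #128936, but conceptually the fix would be an OS gate on the resize path in the kubelet. A minimal sketch of that idea, with hypothetical names that are not the actual kubelet code:

```go
package kuberuntime

import goruntime "runtime"

// canInPlaceResize is a hypothetical guard illustrating the idea: the
// in-place pod resize path should never run on Windows nodes, because
// InPlacePodVerticalScaling does not support Windows.
func canInPlaceResize() bool {
	if goruntime.GOOS == "windows" {
		return false
	}
	return true
}
```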
Should we increase the priority to critical-urgent, @pacoxu? Even though InPlacePodVerticalScaling is not beta, if an end user enables InPlacePodVerticalScaling it will have the same consequences on a Windows system.
Given that the feature is alpha, and we don't cherry-pick bug fixes for alpha features, I don't see the need at the moment.
/priority important-soon
This means we want this for 1.33, which I think is what we want.
Thanks for clarifying!
Could this be related to #129083?
Confirmed. The issue is not related. Current master with the InPlacePodVerticalScaling beta enabled still fails: https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128040/pull-kubernetes-e2e-capz-windows-master/1867106924625924096
I think #129214 should address this.
/reopen due to #129217
@AnishShah: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Which jobs are failing?
pull-kubernetes-e2e-capz-windows-master
https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128880/pull-kubernetes-e2e-capz-windows-master/1859240005449289728
Which tests are failing?
Kubernetes e2e suite: [It] [sig-storage]
- Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
- ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] (23s)
- Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] (33s)
- Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] (25s)
Kubernetes e2e suite: [It] [sig-windows] [Feature:Windows]
Kubernetes e2e suite: [It] [sig-node]
Since when has it been failing?
Last successful run: 12 November 20:20 (1856416666905219072)
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-capz-master-windows/1856416666905219072
First failed run: 12 November 23:20 (1856461965182898176)
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-capz-master-windows/1856461965182898176
Testgrid link
https://testgrid.k8s.io/sig-release-master-informing#capz-windows-master
Reason for failure (if possible)
No response
Anything else we need to know?
No response
Relevant SIG(s)
/sig node /sig storage /sig windows