WRKLDS-1449: Rebase 1.31.1 #2092
Conversation
…tesClass Feature (kubernetes#126166)
* Add labels to PVCollector bound/unbound PVC metrics
* fixup! Add labels to PVCollector bound/unbound PVC metrics
* wip: Fix 'Unknown Decorator'
* fixup! Add labels to PVCollector bound/unbound PVC metrics
…sable disable ProcMountType by default
…very Reduce state changes when expansion fails and mark certain failures as infeasible
[KEP-3751] Promote VolumeAttributesClass to beta
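For context, a VolumeAttributesClass is a small cluster-scoped object carrying a CSI driver name and driver-specific parameters. A minimal client-go sketch of creating one against the beta API that this commit promotes; the class name, driver name, parameter key, and kubeconfig path are illustrative assumptions, not values from this PR:

```go
package main

import (
	"context"
	"fmt"

	storagev1beta1 "k8s.io/api/storage/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a local kubeconfig at the default path; error handling kept minimal.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// "gold", the driver name, and the "iops" parameter are hypothetical examples.
	vac := &storagev1beta1.VolumeAttributesClass{
		ObjectMeta: metav1.ObjectMeta{Name: "gold"},
		DriverName: "example.csi.vendor.com",
		Parameters: map[string]string{"iops": "5000"},
	}
	created, err := cs.StorageV1beta1().VolumeAttributesClasses().Create(
		context.TODO(), vac, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created VolumeAttributesClass", created.Name)
}
```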
… scheduler and controller manager. Signed-off-by: Siyuan Zhang <sizhang@google.com>
Co-authored-by: Kevin Klues <klueska@gmail.com>
test/e2e/windows: drop securityContext test for ProcMount
[KEP-4639] Mention that `fsGroupChangePolicy` has no effect
…s-instead-of-casting Job: Use type parameters instead of type casting for the ptr libraries
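As an illustration of the pattern (not this commit's exact diff), the generic helpers in k8s.io/utils/ptr let callers pick the pointer type with a type parameter instead of casting a value first:

```go
package main

import (
	"fmt"

	"k8s.io/utils/ptr"
)

func main() {
	// With type parameters the target type is stated once at the call site,
	// rather than casting a literal and then taking its address.
	parallelism := ptr.To[int32](2) // *int32, no intermediate cast
	suspend := ptr.To(true)         // type inferred as *bool

	// Deref reads a possibly-nil pointer with a default, again without casting.
	fmt.Println(ptr.Deref(parallelism, 1), ptr.Deref(suspend, false))
}
```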
Terminate restartable init containers ignoring not-started containers
* automatically escape reserved keywords for direct usage
* Add reserved keyword support in a ratcheting way, add tests.
---------
Co-authored-by: Wenxue Zhao <ballista01@outlook.com>
Update AppArmor e2e tests to use both containers[*].securityContext.appArmorProfile field and annotations.
…estarting-the-kubelet Add node serial e2e tests that simulate the kubelet restart
add(scheduler/framework): implement smaller Pod update events
…checkpoint-upstream DRA: refactor checkpointing
Add KUBE_EMULATED_VERSION env variable to set the emulated-version of scheduler and controller manager.
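The gist of the change, as a sketch only: a wrapper reads KUBE_EMULATED_VERSION and, when it is set, forwards it to the component as its emulated version. The helper function and the exact flag wiring below are assumptions for illustration, not the merged patch:

```go
package main

import (
	"fmt"
	"os"
)

// buildArgs sketches translating the env variable into an emulated-version
// argument; the flag name and surrounding wiring are assumed, not taken
// from the OpenShift change itself.
func buildArgs(base []string) []string {
	if v := os.Getenv("KUBE_EMULATED_VERSION"); v != "" {
		base = append(base, fmt.Sprintf("--emulated-version=%s", v))
	}
	return base
}

func main() {
	// Example: KUBE_EMULATED_VERSION=1.30 would append --emulated-version=1.30.
	fmt.Println(buildArgs([]string{"kube-scheduler"}))
}
```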
Update with stdlib errors
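For readers unfamiliar with the idiom, moving to stdlib errors usually means wrapping with %w and matching with errors.Is or errors.As instead of comparing strings or concrete types. A minimal sketch; the sentinel and function names are made up for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound is a hypothetical sentinel used only for this sketch.
var errNotFound = errors.New("object not found")

func lookup(name string) error {
	// Wrapping with %w preserves the sentinel so callers can match it later.
	return fmt.Errorf("lookup %q: %w", name, errNotFound)
}

func main() {
	err := lookup("demo")
	// errors.Is walks the wrap chain rather than comparing error strings.
	if errors.Is(err, errNotFound) {
		fmt.Println("treat as not found:", err)
	}
}
```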
…nFailures Implement resource health in pod status (KEP 4680)
e2e test for No SNAT
…rt-notReady fix node notReady in first sync period after kubelet restart
The following tests are failing right now:
- ci-kubernetes-e2e-ec2-alpha-enabled-default
- ci-kubernetes-e2e-gci-gce-alpha-enabled-default

Because of:
```
goroutine 347 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x33092b0, 0x4d6ed00}, {0x296a7e0, 0x4c20c10})
	k8s.io/apimachinery/pkg/util/runtime/runtime.go:107 +0xbc
k8s.io/apimachinery/pkg/util/runtime.handleCrash({0x33092b0, 0x4d6ed00}, {0x296a7e0, 0x4c20c10}, {0x4d6ed00, 0x0, 0x1000000004400a5?})
	k8s.io/apimachinery/pkg/util/runtime/runtime.go:82 +0x5e
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000517be8?})
	k8s.io/apimachinery/pkg/util/runtime/runtime.go:59 +0x108
panic({0x296a7e0?, 0x4c20c10?})
	runtime/panic.go:770 +0x132
k8s.io/kubernetes/pkg/volume/image.(*imagePlugin).CanSupport(0xc00183d140?, 0xc0006a2600?)
	k8s.io/kubernetes/pkg/volume/image/image.go:52 +0x3
k8s.io/kubernetes/pkg/volume.(*VolumePluginMgr).FindPluginBySpec(0xc0008a1388, 0xc000f7ddb8)
	k8s.io/kubernetes/pkg/volume/plugins.go:637 +0x208
k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*desiredStateOfWorld).AddPodToVolume(0xc000517bc0, {0xc000e94a50, 0x24}, 0xc00172b208, 0xc000f7ddb8, {0xc0017892a0, 0xe}, {0xc000a4d6ec, 0x3}, {0xc000978af0, ...})
	k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/desired_state_of_world.go:270 +0xf2
k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).processPodVolumes(0xc0003e6700, 0xc00172b208, 0xc00183ddd8)
	k8s.io/kubernetes/pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:319 +0x685
k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).findAndAddNewPods(0xc0003e6700)
	k8s.io/kubernetes/pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:204 +0x2dc
k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).populatorLoop(0xc0003e6700)
	k8s.io/kubernetes/pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:173 +0x18
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000905eb0?)
	k8s.io/apimachinery/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00183df70, {0x32d7340, 0xc000a7be60}, 0x1, 0xc0000b2660)
	k8s.io/apimachinery/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000f8bf70, 0x5f5e100, 0x0, 0x1, 0xc0000b2660)
	k8s.io/apimachinery/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	k8s.io/apimachinery/pkg/util/wait/backoff.go:161
k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).Run(0xc0003e6700, {0x32e3228, 0xc000b3faa0}, 0xc0000b2660)
	k8s.io/kubernetes/pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:158 +0x1a5
created by k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run in goroutine 335
	k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager.go:286 +0x14f
```

Fixes kubernetes#126317

Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
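The trace points at the image volume plugin's CanSupport dereferencing a spec field that can be nil. The usual shape of such a fix is a defensive nil check before touching the volume source; the sketch below assumes that shape and is not the merged patch, which may differ in detail:

```go
// Sketch only: the real imagePlugin and the actual fix live in
// k8s.io/kubernetes/pkg/volume/image.
package image

import volume "k8s.io/kubernetes/pkg/volume"

type imagePlugin struct{}

// CanSupport must tolerate specs that carry no pod-level Volume, or a Volume
// without an Image source; otherwise FindPluginBySpec panics as in the trace above.
func (p *imagePlugin) CanSupport(spec *volume.Spec) bool {
	return spec != nil && spec.Volume != nil && spec.Volume.Image != nil
}
```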
… under non-kube-proxy If the cluster is using a non-kube-proxy service proxy, the `curl` will presumably fail; this should not be considered a hard failure.
…stConsistentReadFallback when ResilientWatchCacheInitialization is off
…herDontAcceptRequestsStopped when ResilientWatchCacheInitialization is off
…n < RequiredResourceVersion
a3f8df7 to 65e0eba (Compare)
/retest
@atiratree: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/remove-label backports/unvalidated-commits
/label backports/validated-commits
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: atiratree, bertinatto
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
[ART PR BUILD NOTIFIER] Distgit: openshift-enterprise-pod
[ART PR BUILD NOTIFIER] Distgit: openshift-enterprise-hyperkube
[ART PR BUILD NOTIFIER] Distgit: ose-installer-kube-apiserver-artifacts
What type of PR is this?
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: