kubernetes-e2e-gce-1.4-1.5-upgrade-cluster-new: broken test run #37732
Multiple broken tests:
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: DiffResources {e2e.go}
Issues about this test specifically: #33373 #33416 #34060
Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}
Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26134
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479
Failed: UpgradeTest {e2e.go}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794

Multiple broken tests:
Failed: UpgradeTest {e2e.go}
Failed: DiffResources {e2e.go}
Issues about this test specifically: #33373 #33416 #34060
Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26134
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794

Multiple broken tests:
Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #36242
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: UpgradeTest {e2e.go}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: DiffResources {e2e.go}
Issues about this test specifically: #33373 #33416 #34060
Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26134
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479

Multiple broken tests:
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}
Issues about this test specifically: #33887
Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}
Issues about this test specifically: #27295 #35385 #36126 #37452 #37543
Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}
Issues about this test specifically: #36265 #36353 #36628
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}
Issues about this test specifically: #27443 #27835 #28900 #32512
Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #33008
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}
Issues about this test specifically: #31151 #35586
Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #32936
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}
Issues about this test specifically: #30263
Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}
Issues about this test specifically: #27232
Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}
Issues about this test specifically: #26129 #32341
Failed: UpgradeTest {e2e.go}
Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}
Issues about this test specifically: #37435
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456
Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #35601
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516
Failed: [k8s.io] Sysctls should support unsafe sysctls which are actually whitelisted {Kubernetes e2e suite}
Failed: [k8s.io] CronJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}
Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}
Issues about this test specifically: #28067 #28378 #32692 #33256 #34654
Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #33987
Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}
Issues about this test specifically: #29066 #30592 #31065 #33171
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28507 #29315 #35595
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019
Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}
Issues about this test specifically: #31085 #34207 #37097
Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774
Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #37529
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}
Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26678 #29318
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}
Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #35793
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29521
Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}
Issues about this test specifically: #37428
Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}
Issues about this test specifically: #29511 #29987 #30238
Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #35473
Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}
Issues about this test specifically: #31635
Failed: [k8s.io] Mesos applies slave attributes as labels {Kubernetes e2e suite}
Issues about this test specifically: #28359
Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #36649
Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}
Issues about this test specifically: #28337
Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}
Issues about this test specifically: #29629 #36270 #37462
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}
Issues about this test specifically: #27976 #29503
Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941
Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #32122
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}
Issues about this test specifically: #28371 #29604 #37496
Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}
Issues about this test specifically: #31408
Failed: [k8s.io] Mesos schedules pods annotated with roles on correct slaves {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #37314
Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27079
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}
Issues about this test specifically: #34372
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}
Issues about this test specifically: #27156 #28979 #30489 #33649
Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}
Issues about this test specifically: #31697 #36574
Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}
Issues about this test specifically: #37525
Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
Issues about this test specifically: #32087
Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26194 #26338 #30345 #34571
Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #28010 #28427 #33997
Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26164 #26210 #33998 #37158
Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27673
Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}
Issues about this test specifically: #32436 #37267
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27014 #27834
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26139 #28342 #28439 #31574 #36576
Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918
Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}
Issues about this test specifically: #29647 #35627
Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29461
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
Issues about this test specifically: #32023
Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}
Issues about this test specifically: #32371
Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #28297 #37101
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26728 #28266 #30340 #32405
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26134
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}
Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #36242
Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}
Issues about this test specifically: #34119 #37176
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28420 #36122
Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #28283
Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #37502
Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #36300
Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}
Issues about this test specifically: #32467 #36276
Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #36183
Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}
Issues about this test specifically: #28348 #36703
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29834 #35757
Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #32949
Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}
Issues about this test specifically: #31183 #36182
Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26955
Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}
Issues about this test specifically: #29976 #30464 #30687
Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441
Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}
Issues about this test specifically: #32668 #35405
Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}
Issues about this test specifically: #27704 #30127 #30602 #31070 #34383
Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26127 #28081
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #33730 #37417
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27532 #34567
Failed: [k8s.io] StatefulSet [Slow] [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27507 #28275
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479
Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] CronJob should not emit unexpected warnings {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

Multiple broken tests:
Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob {Kubernetes e2e suite}
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26126 #30653 #36408
Failed: [k8s.io] StatefulSet [Slow] [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}
Issues about this test specifically: #37436
Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}
Issues about this test specifically: #26324 #27715 #28845
Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29461
Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}
Issues about this test specifically: #29511 #29987 #30238
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}
Issues about this test specifically: #27443 #27835 #28900 #32512
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}
Issues about this test specifically: #36271
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}
Issues about this test specifically: #27156 #28979 #30489 #33649
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}
Issues about this test specifically: #34064
Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}
Issues about this test specifically: #32668 #35405
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}
Issues about this test specifically: #27976 #29503
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26139 #28342 #28439 #31574 #36576
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479
Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #36706
Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}
Issues about this test specifically: #29657
Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #36242
Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #36564
Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}
Issues about this test specifically: #28003
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585
Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26171 #28188
Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28084
Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #33985
Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}
Issues about this test specifically: #30216 #31031 #32086
Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}
Issues about this test specifically: #31075 #36286
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187
Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}
Issues about this test specifically: #30263
Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26780
Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #34827
Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}
Issues about this test specifically: #27704 #30127 #30602 #31070 #34383
Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #34226
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27245
Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}
Issues about this test specifically: #29040 #35756
Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}
Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #30264
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #30981
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}
Issues about this test specifically: #34317
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}
Issues about this test specifically: #26172
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}
Issues about this test specifically: #35579
Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}
Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #32936
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}
Issues about this test specifically: #34212
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}
Issues about this test specifically: #37274
Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}
Issues about this test specifically: #28067 #28378 #32692 #33256 #34654
Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}
Issues about this test specifically: #32054 #36010
Failed: [k8s.io] Mesos applies slave attributes as labels {Kubernetes e2e suite}
Issues about this test specifically: #28359
Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #30632
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}
Issues about this test specifically: #31183 #36182
Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #31836
Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}
Issues about this test specifically: #27295 #35385 #36126 #37452 #37543
Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #30851
Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #35422
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516
Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26678 #29318
Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}
Issues about this test specifically: #29629 #36270 #37462
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}
Issues about this test specifically: #34367
Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}
Issues about this test specifically: #28339 #36379
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279
Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}
Issues about this test specifically: #37525
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26955
Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26164 #26210 #33998 #37158
Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28503
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #33631 #33995 #34970
Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #34520
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #33285
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}
Issues about this test specifically: #26509 #26834 #29780 #35355
Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}
Issues about this test specifically: #29513
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26870 #36429
Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #37529
Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26168 #27450
Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}
Issues about this test specifically: #37428
Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26127 #28081
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28493 #29964
Failed: [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node {Kubernetes e2e suite}
Failed: [k8s.io] StatefulSet [Slow] [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}
Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}
Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27360 #28096 #29615 #31775 #35750
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26138 #28429 #28737
Failed: UpgradeTest {e2e.go}
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}
Issues about this test specifically: #34250
Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}
Issues about this test specifically: #29976 #30464 #30687
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}
Issues about this test specifically: #33887
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}
Issues about this test specifically: #34372
Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #37314
Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}
Issues about this test specifically: #31889 #36293
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
Failed: [k8s.io] Stateful Set recreate [Slow] should recreate evicted statefulset {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592
Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}
Issues about this test specifically: #34687
Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774
Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

Multiple broken tests:
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: UpgradeTest {e2e.go}
Issues about this test specifically: #37745
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37774
Failed: DiffResources {e2e.go}
Issues about this test specifically: #33373 #33416 #34060
Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}
Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26134

Multiple broken tests:
Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #35601
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37774
Failed: UpgradeTest {e2e.go}
Issues about this test specifically: #37745
Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}
Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26134
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: DiffResources {e2e.go}
Issues about this test specifically: #33373 #33416 #34060
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950

Multiple broken tests:
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: UpgradeTest {e2e.go}
Issues about this test specifically: #37745
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26134
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479
Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}
Issues about this test specifically: #37435
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37774
Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}
Failed: DiffResources {e2e.go}

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-cluster-new/60/
Multiple broken tests:
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642
Failed: UpgradeTest {e2e.go}
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 #37620
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: DiffResources {e2e.go}
Issues about this test specifically: #33373 #33416 #34060
Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}
Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26134
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871