
kubernetes-e2e-gci-gce-serial: broken test run #34655

Closed
k8s-github-robot opened this issue Oct 12, 2016 · 70 comments
Labels: kind/flake, priority/critical-urgent

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/143/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341
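The "Expected &lt;int&gt;: 0 to equal &lt;int&gt;: 1" output above is a Gomega equality assertion: the predicate test expects exactly one pod to end up in the not-scheduled set, but the set came back empty (`[]api.Pod(nil)`). A minimal sketch of that check, using a hypothetical `verifyNotScheduled` helper rather than the actual e2e framework code:

```go
package main

import "fmt"

// verifyNotScheduled is a hypothetical stand-in for the e2e assertion at
// scheduler_predicates.go:932: it fails when the number of not-scheduled
// pods differs from the expected count.
func verifyNotScheduled(notScheduled []string, want int) error {
	if len(notScheduled) != want {
		return fmt.Errorf("expected %d not-scheduled pods, got %d (%v)",
			want, len(notScheduled), notScheduled)
	}
	return nil
}

func main() {
	// Reproduces the failure shape above: an empty not-scheduled set
	// where the test expected exactly one pod to be unschedulable.
	if err := verifyNotScheduled(nil, 1); err != nil {
		fmt.Println(err)
	}
}
```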

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-4t4v\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ovdo\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-yq6t\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-4t4v" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ovdo" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-yq6t" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-4t4v\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ovdo\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-yq6t\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-4t4v" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ovdo" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-yq6t" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-4t4v\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ovdo\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-yq6t\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-4t4v" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ovdo" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-yq6t" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Previous issues for this suite: #33480 #33524 #33647

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/backlog Higher priority than priority/awaiting-more-evidence. area/test-infra labels Oct 12, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/144/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-t7zb\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-7fyy\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmp2\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-t7zb" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-7fyy" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmp2" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-7fyy\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmp2\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-t7zb\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-7fyy" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmp2" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-t7zb" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-t7zb\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-7fyy\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmp2\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-t7zb" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-7fyy" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmp2" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/145/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-cpga\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-1z8p\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-9bze\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-cpga" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-1z8p" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-9bze" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-9bze\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-cpga\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-hd7m\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-9bze" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-cpga" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-hd7m" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-cpga\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-hd7m\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-9bze\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-cpga" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-hd7m" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-9bze" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

@k8s-github-robot k8s-github-robot added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Oct 13, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/146/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bl8h\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ctf3\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-fk5f\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bl8h" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ctf3" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-fk5f" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ctf3\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-fk5f\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ii2p\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ctf3" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-fk5f" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ii2p" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-4xor\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ctf3\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-fk5f\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-4xor" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ctf3" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-fk5f" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

@k8s-github-robot k8s-github-robot added priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Oct 13, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/147/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-aall\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-jgpw\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-wkh0\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-aall" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-jgpw" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-wkh0" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-aall\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-jgpw\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-wkh0\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-aall" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-jgpw" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-wkh0" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-jgpw\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-wkh0\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-aall\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-jgpw" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-wkh0" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-aall" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/148/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-sc48\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-w4nb\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-jvfq\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-sc48" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-w4nb" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-jvfq" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-lydm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-sc48\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-w4nb\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-lydm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-sc48" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-w4nb" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-lydm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-sc48\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-w4nb\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-lydm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-sc48" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-w4nb" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/149/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-827n\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-q6e8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-x0bz\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-827n" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-q6e8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-x0bz" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-827n\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-q6e8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-x0bz\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-827n" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-q6e8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-x0bz" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-q6e8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-x0bz\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-827n\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-q6e8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-x0bz" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-827n" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341
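Both SchedulerPredicates failures report `Not scheduled Pods: []api.Pod(nil)` with `Expected <int>: 0 to equal <int>: 1`: the test expects exactly one pod to remain unscheduled, but the list of pods without a node assignment comes back empty. A minimal sketch of that count (the `pod` type and `notScheduled` helper are hypothetical simplifications of what `scheduler_predicates.go` checks):

```go
package main

import "fmt"

// pod is a stand-in for api.Pod, keeping only the field relevant here.
type pod struct {
	Name     string
	NodeName string // empty until the scheduler binds the pod to a node
}

// notScheduled returns the pods with no node assigned, mirroring the
// "Not scheduled Pods" list in the failures above.
func notScheduled(pods []pod) []pod {
	var out []pod
	for _, p := range pods {
		if p.NodeName == "" {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	pods := []pod{
		{Name: "filler", NodeName: "node-1"},
		{Name: "overflow"}, // expected to stay pending
	}
	// The e2e assertion requires len(notScheduled(pods)) == 1; a nil/empty
	// result is what produces "Expected <int>: 0 to equal <int>: 1".
	fmt.Printf("not scheduled: %d\n", len(notScheduled(pods))) // prints "not scheduled: 1"
}
```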

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/150/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-gapx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-t4ot\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-yviu\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-gapx" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-t4ot" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-yviu" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-t4ot\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-yviu\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-gapx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-t4ot" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-yviu" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-gapx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-gapx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-t4ot\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-yviu\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-gapx" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-t4ot" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-yviu" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/151/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-l4s7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-l7fm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-lz8b\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-l4s7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-l7fm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-lz8b" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-l4s7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-l7fm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-lz8b\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-l4s7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-l7fm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-lz8b" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-l4s7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-l7fm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-lz8b\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-l4s7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-l7fm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-lz8b" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/152/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-mbg1\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-qub8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ymrh\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-mbg1" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-qub8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ymrh" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-mbg1\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-qub8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ymrh\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-mbg1" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-qub8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ymrh" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-mbg1\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-qub8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ymrh\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-mbg1" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-qub8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ymrh" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/153/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-719a\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-cbzi\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-uh63\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-719a" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-cbzi" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-uh63" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-lpj9\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-uh63\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-czsf\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-lpj9" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-uh63" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-czsf" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-719a\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-cbzi\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-uh63\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-719a" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-cbzi" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-uh63" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/154/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-xm75\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-60ic\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-iepu\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-xm75" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-60ic" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-iepu" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-iepu\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-uej9\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-xm75\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-iepu" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-uej9" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-xm75" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-uej9\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ueyp\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-xm75\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-uej9" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ueyp" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-xm75" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/155/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-f2g1\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v7aq\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-xjvy\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-f2g1" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v7aq" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-xjvy" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:143
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:217
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-mmk9\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v7aq\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-xjvy\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-mmk9" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v7aq" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-xjvy" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v7aq\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-xjvy\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-mmk9\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-v7aq" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-xjvy" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-mmk9" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #28220 #32942

@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/156/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:217
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27115 #28070 #30747 #31341
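For context on the `<int>: 0 to equal <int>: 1` assertion: this test creates a pod whose resource request cannot fit, then expects exactly one pod to remain unscheduled; `Not scheduled Pods: []api.Pod(nil)` means the helper found none, so the count check fails. A hypothetical sketch of that check (the `pod` type and `verifyNotScheduled` helper here are illustrative, not the e2e framework's actual code):

```go
package main

import "fmt"

// pod is a placeholder for api.Pod.
type pod struct{ name string }

// verifyNotScheduled mirrors the shape of the failing assertion: the
// number of pods left unscheduled must equal the expected count.
func verifyNotScheduled(notScheduled []pod, expected int) error {
	if len(notScheduled) != expected {
		return fmt.Errorf("Expected\n    <int>: %d\nto equal\n    <int>: %d",
			len(notScheduled), expected)
	}
	return nil
}

func main() {
	var notScheduled []pod // "Not scheduled Pods: []api.Pod(nil)"
	if err := verifyNotScheduled(notScheduled, 1); err != nil {
		fmt.Println(err)
	}
}
```

A nil slice here usually points at the test's pod-listing step rather than the scheduler itself: the over-sized pod was either scheduled anyway or not found when the helper listed pending pods.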

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-7zv5\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-k0st\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-rzjb\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-7zv5" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-k0st" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-rzjb" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-4c38\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-k0st\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-rzjb\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-4c38" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-k0st" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-rzjb" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-4c38\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-k0st\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-rzjb\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-4c38" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-k0st" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-rzjb" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:143
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/157/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zyz0\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-chb4\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zyub\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-zyz0" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-chb4" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zyub" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-chb4\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zyub\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zyz0\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-chb4" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zyub" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zyz0" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:143
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:217
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-1oap\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zyub\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zyz0\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-1oap" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zyub" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zyz0" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/158/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-h658\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-rj77\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-x5oo\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-h658" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-rj77" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-x5oo" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-34bg\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-d353\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-rj77\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-34bg" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-d353" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-rj77" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:143
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-34bg\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-d353\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-rj77\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-34bg" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-d353" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-rj77" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:217
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27115 #28070 #30747 #31341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/159/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-kkfc\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-qbdc\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-fy56\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-kkfc" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-qbdc" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-fy56" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:143
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-b6kv\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-kkfc\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-4y72\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-b6kv" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-kkfc" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-4y72" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-4y72\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-b6kv\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-kkfc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-4y72" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-b6kv" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-kkfc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:217
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27115 #28070 #30747 #31341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/160/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-hnj3\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-pp48\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zxdc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-hnj3" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-pp48" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zxdc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-pp48\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zxdc\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-hnj3\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-pp48" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zxdc" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-hnj3" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:217
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-3wxj\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-pp48\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zxdc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-3wxj" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-pp48" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zxdc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:143
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/161/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-o313\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-tix5\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-u2bg\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-o313" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-tix5" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-u2bg" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:143
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:217
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-tix5\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-u2bg\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-yd3h\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-tix5" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-u2bg" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-yd3h" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-tix5\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-u2bg\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-yd3h\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-tix5" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-u2bg" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-yd3h" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/162/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:217
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-mchh\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-nmme\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-w3x7\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-mchh" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-nmme" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-w3x7" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-nmme\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-qf4n\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-w3x7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-nmme" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-qf4n" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-w3x7" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:143
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-41sb\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-mchh\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-nmme\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-41sb" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-mchh" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-nmme" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/163/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-kumn\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-nday\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-tcx2\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-kumn" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-nday" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-tcx2" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:217
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:143
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-j2tl\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-nday\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-tcx2\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-j2tl" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-nday" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-tcx2" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-j2tl\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-nday\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-tcx2\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-j2tl" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-nday" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-tcx2" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035

@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/217/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/218/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a04e50>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []\nheapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []
    heapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:640
Oct 24 11:38:29.275: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:361

Issues about this test specifically: #30187 #35293

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42116b2e0>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []\nheapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []
    heapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e92240>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []\nheapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []
    heapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211156c0>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []\nheapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []
    heapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:431
Expected error:
    <*errors.errorString | 0xc420ec9610>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []\nheapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []
    heapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:428

Issues about this test specifically: #27233

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Oct 24 10:25:45.240: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f119b0>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []\nheapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []
    heapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4209647d0>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []\nheapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []
    heapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:431
Expected error:
    <*errors.errorString | 0xc421b4a050>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:428

Issues about this test specifically: #27470 #30156 #34304

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218bcb00>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212a9300>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []\nheapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-778ec jenkins-e2e-minion-group-fv6m Failed       []
    heapster-v1.2.0-1374379659-colt7 jenkins-e2e-minion-group-a0dy Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Oct 24 11:46:48.995: Node jenkins-e2e-minion-group-a0dy did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:330

Issues about this test specifically: #27324

@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/219/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/220/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/221/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/222/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/223/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:640
Oct 25 13:09:34.581: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:361

Issues about this test specifically: #30187 #35293

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42154bb00>: {
        s: "2 / 39 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []\nheapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []\n",
    }
    2 / 39 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []
    heapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42154ab40>: {
        s: "2 / 39 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []\nheapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []\n",
    }
    2 / 39 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []
    heapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420fd7ea0>: {
        s: "2 / 39 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []\nheapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []\n",
    }
    2 / 39 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []
    heapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4206b25b0>: {
        s: "couldn't find 21 pods within 5m0s; last error: expected to find 21 pods but found only 26",
    }
    couldn't find 21 pods within 5m0s; last error: expected to find 21 pods but found only 26
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:119

Issues about this test specifically: #26744 #26929

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421915420>: {
        s: "2 / 39 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []\nheapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []\n",
    }
    2 / 39 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []
    heapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221c3e40>: {
        s: "2 / 39 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []\nheapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []\n",
    }
    2 / 39 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []
    heapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221c6130>: {
        s: "2 / 39 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []\nheapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []\n",
    }
    2 / 39 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []
    heapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4209b3ac0>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211164a0>: {
        s: "2 / 39 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []\nheapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []\n",
    }
    2 / 39 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-304wf jenkins-e2e-minion-group-xvga Failed       []
    heapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Oct 25 13:17:59.545: Node jenkins-e2e-minion-group-xvga did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:330

Issues about this test specifically: #27324

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:431
Expected error:
    <*errors.errorString | 0xc4211d6c60>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                          PHASE  GRACE CONDITIONS\nheapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                          PHASE  GRACE CONDITIONS
    heapster-v1.2.0-1374379659-rtvwt jenkins-e2e-minion-group-xavi Failed       []

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:428

Issues about this test specifically: #27470 #30156 #34304

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Oct 25 09:28:19.744: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-xavi:
 container "kubelet": expected 95th% usage < 0.220; got 0.230
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
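The kubelet resource-tracking failures above compare per-container CPU usage percentiles against fixed limits (e.g. `expected 95th% usage < 0.220; got 0.230`). A minimal sketch of that kind of check, using the nearest-rank percentile method; `percentile` and the sample values are assumptions for illustration, not the e2e framework's actual implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the p-th percentile (0-100) of samples using the
// nearest-rank method. This is an assumption; the real kubelet_perf
// code may compute percentiles differently.
func percentile(samples []float64, p int) float64 {
	sorted := append([]float64(nil), samples...)
	sort.Float64s(sorted)
	idx := (len(sorted)*p + 99) / 100 // 1-based nearest rank, rounded up
	if idx < 1 {
		idx = 1
	}
	return sorted[idx-1]
}

func main() {
	// Hypothetical CPU usage samples (cores) for one kubelet container.
	usage := []float64{0.12, 0.15, 0.18, 0.21, 0.23}
	limit := 0.220
	if got := percentile(usage, 95); got >= limit {
		fmt.Printf("container %q: expected 95th%% usage < %.3f; got %.3f\n",
			"kubelet", limit, got)
	}
}
```

With these samples the 95th percentile lands on the largest sample, so the check fails and prints a line shaped like the log output above.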

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/228/

Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Oct 26 07:10:55.041: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Oct 26 06:50:10.805: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:640
Oct 26 08:40:13.445: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:361

Issues about this test specifically: #30187 #35293

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Oct 26 10:01:49.021: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Oct 26 07:41:01.187: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Oct 26 08:25:21.746: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27394 #27660 #28079 #28768

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Oct 26 08:32:11.435: Node jenkins-e2e-minion-group-y8zd did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:330

Issues about this test specifically: #27324

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Oct 26 09:18:09.528: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-y8zd:
 container "kubelet": expected 95th% usage < 0.220; got 0.222
node jenkins-e2e-minion-group-zqmi:
 container "kubelet": expected 95th% usage < 0.220; got 0.229
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Oct 26 08:05:11.924: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301

@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/237/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Oct 27 21:12:45.616: Node jenkins-e2e-minion-group-sca7 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:330

Issues about this test specifically: #27324

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Oct 27 22:35:23.461: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-sca7:
 container "kubelet": expected 95th% usage < 0.140; got 0.146
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Oct 27 23:02:58.169: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-i9y8:
 container "kubelet": expected 50th% usage < 0.170; got 0.205, container "kubelet": expected 95th% usage < 0.220; got 0.280
node jenkins-e2e-minion-group-l6hu:
 container "kubelet": expected 50th% usage < 0.170; got 0.220, container "kubelet": expected 95th% usage < 0.220; got 0.313
node jenkins-e2e-minion-group-sca7:
 container "kubelet": expected 50th% usage < 0.170; got 0.216, container "kubelet": expected 95th% usage < 0.220; got 0.286
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:640
Oct 27 20:23:00.894: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:361

Issues about this test specifically: #30187 #35293

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/238/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/240/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:640
Oct 28 11:05:25.093: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:361

Issues about this test specifically: #30187 #35293

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Oct 28 11:34:36.611: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-ba93:
 container "kubelet": expected 50th% usage < 0.170; got 0.238, container "kubelet": expected 95th% usage < 0.220; got 0.344
node jenkins-e2e-minion-group-oo35:
 container "kubelet": expected 50th% usage < 0.170; got 0.193, container "kubelet": expected 95th% usage < 0.220; got 0.272
node jenkins-e2e-minion-group-sznt:
 container "kubelet": expected 50th% usage < 0.170; got 0.222, container "kubelet": expected 95th% usage < 0.220; got 0.313
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Oct 28 08:56:30.833: Node jenkins-e2e-minion-group-ba93 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:330

Issues about this test specifically: #27324

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Oct 28 11:00:19.754: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-ba93:
 container "kubelet": expected 95th% usage < 0.140; got 0.191
node jenkins-e2e-minion-group-sznt:
 container "kubelet": expected 95th% usage < 0.140; got 0.148
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #28220 #32942

@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/243/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Oct 29 00:26:45.309: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-ar4e:
 container "kubelet": expected 50th% usage < 0.170; got 0.175, container "kubelet": expected 95th% usage < 0.220; got 0.258
node jenkins-e2e-minion-group-d108:
 container "kubelet": expected 50th% usage < 0.170; got 0.174, container "kubelet": expected 95th% usage < 0.220; got 0.238
node jenkins-e2e-minion-group-spo7:
 container "kubelet": expected 50th% usage < 0.170; got 0.178, container "kubelet": expected 95th% usage < 0.220; got 0.246
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:640
Oct 29 00:32:20.290: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:361

Issues about this test specifically: #30187 #35293

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Oct 28 22:23:12.723: Node jenkins-e2e-minion-group-d108 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:330

Issues about this test specifically: #27324

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Oct 28 22:54:55.843: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-d108:
 container "kubelet": expected 95th% usage < 0.140; got 0.159
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #28220 #32942

@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

1 similar comment

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/276/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Nov  3 13:01:43.789: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-kk4o:
 container "kubelet": expected 95th% usage < 0.140; got 0.202
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:342
Expected error:
    <*errors.errorString | 0xc420e7c380>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:335

Issues about this test specifically: #27470 #30156 #34304

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Nov  3 10:26:58.473: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-kk4o:
 container "kubelet": expected 50th% usage < 0.170; got 0.244, container "kubelet": expected 95th% usage < 0.220; got 0.355
node jenkins-e2e-minion-group-qciu:
 container "kubelet": expected 50th% usage < 0.170; got 0.175, container "kubelet": expected 95th% usage < 0.220; got 0.232
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4213dc840>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.007494227s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.007494227s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:80

Issues about this test specifically: #26744 #26929

@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gce-serial/278/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:316
Expected error:
    <*errors.errorString | 0xc421178050>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:308

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Nov  3 22:07:18.999: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-90h3:
 container "kubelet": expected 50th% usage < 0.170; got 0.206, container "kubelet": expected 95th% usage < 0.220; got 0.297
node jenkins-e2e-minion-group-qijs:
 container "kubelet": expected 50th% usage < 0.170; got 0.171, container "kubelet": expected 95th% usage < 0.220; got 0.285
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:279
Nov  3 22:51:35.043: CPU usage exceeding limits:
 node jenkins-e2e-minion-group-90h3:
 container "kubelet": expected 95th% usage < 0.140; got 0.187
node jenkins-e2e-minion-group-qijs:
 container "kubelet": expected 95th% usage < 0.140; got 0.169
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:342
Expected error:
    <*errors.errorString | 0xc420e78050>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:335

Issues about this test specifically: #27470 #30156 #34304

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4209273c0>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.007186079s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.007186079s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:80

Issues about this test specifically: #26744 #26929

@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

1 similar comment

@calebamiles added this to the v1.5 milestone Nov 9, 2016
@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

3 similar comments

@dims (Member) commented Nov 17, 2016

"broken test run" tends to show different sets of failures over time. So closing a bunch of them now during 1.5 release triage to let the bot open fresh ones.

@dims closed this as completed Nov 17, 2016