kubernetes-e2e-gce-serial: broken test run #34183

Closed
k8s-github-robot opened this issue Oct 6, 2016 · 6 comments
Labels: area/test-infra, kind/flake, priority/critical-urgent

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/-1/

Run so broken it didn't make JUnit output!

@k8s-github-robot added kind/flake, priority/backlog, and area/test-infra labels on Oct 6, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2314/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2318/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:144
Expected error:
    <*errors.errorString | 0xc8203e0e70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:235

Issues about this test specifically: #29816 #30018 #33974
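
For context on how these "timed out waiting for the condition" failures surface (a minimal sketch, not the actual framework code): the e2e framework polls for a condition with the wait package and the test asserts the returned error with Gomega, which prints the "Expected error: ... not to have occurred" block on failure. Helper names below are illustrative, and the import path reflects today's apimachinery layout; at the time of this issue the wait package lived under k8s.io/kubernetes/pkg/util/wait.

    package sketch

    import (
        "time"

        . "github.com/onsi/gomega"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForCondition is a hypothetical stand-in for the helpers in
    // test/e2e/framework. wait.Poll returns the generic error
    // "timed out waiting for the condition" when check never reports done
    // within the timeout.
    func waitForCondition(check wait.ConditionFunc) error {
        return wait.Poll(2*time.Second, 5*time.Minute, check)
    }

    // assertEventually shows how the timeout error becomes the Gomega output
    // seen above: "Expected error: <*errors.errorString ...> not to have occurred".
    func assertEventually(check wait.ConditionFunc) {
        Expect(waitForCondition(check)).NotTo(HaveOccurred())
    }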

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Failed waiting for pod wrapped-volume-race-5c2fecf1-8fc6-11e6-a789-0242ac110002-4uwxw to enter running state
Expected error:
    <*errors.StatusError | 0xc821066400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"\" not found",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:395

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:144
Expected error:
    <*errors.errorString | 0xc8203e0e70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:235

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:144
Expected error:
    <*errors.errorString | 0xc8203e0e70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:235

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-928c\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-lg3k\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-uojg\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-928c" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-lg3k" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-uojg" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035
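
The kubelet_perf failures above are aggregate errors: each node whose resource monitor has not produced usage data yet contributes one "is not ready yet" error, and they are combined into a single errors.aggregate value. A hedged sketch of that pattern follows; checkNodesReady and nodeReady are assumed names, not the actual kubelet_perf.go code.

    package sketch

    import (
        "fmt"

        utilerrors "k8s.io/apimachinery/pkg/util/errors"
    )

    // checkNodesReady collects one error per node that is not ready and folds
    // them into an aggregate. NewAggregate returns nil for an empty slice;
    // otherwise its Error() prints the bracketed list seen in the output above.
    func checkNodesReady(nodes []string, nodeReady func(string) bool) error {
        var errs []error
        for _, node := range nodes {
            if !nodeReady(node) {
                errs = append(errs, fmt.Errorf("Resource usage on node %q is not ready yet", node))
            }
        }
        return utilerrors.NewAggregate(errs)
    }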

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:144
Expected error:
    <*errors.errorString | 0xc8203e0e70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:235

Issues about this test specifically: #30441

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc820f680c0>: {
        s: "couldn't find 29 pods within 5m0s; last error: expected to find 29 pods but found only 30",
    }
    couldn't find 29 pods within 5m0s; last error: expected to find 29 pods but found only 30
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929
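
The restart failure message is worth decoding: the check polls for an exact pod count, so finding more pods than expected (30 instead of 29) also fails, and the final message reuses the "found only" wording even in that case. Below is a rough sketch under the assumption that restart.go does something similar; waitForExactPodCount and countPods are illustrative names.

    package sketch

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForExactPodCount polls until countPods reports exactly the expected
    // number, keeping the last mismatch so the timeout error can report it.
    func waitForExactPodCount(expected int, countPods func() (int, error)) error {
        var lastErr error
        err := wait.Poll(5*time.Second, 5*time.Minute, func() (bool, error) {
            n, err := countPods()
            if err != nil {
                lastErr = err
                return false, nil
            }
            if n != expected {
                lastErr = fmt.Errorf("expected to find %d pods but found only %d", expected, n)
                return false, nil
            }
            return true, nil
        })
        if err != nil {
            return fmt.Errorf("couldn't find %d pods within %v; last error: %v", expected, 5*time.Minute, lastErr)
        }
        return nil
    }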

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:144
Expected error:
    <*errors.errorString | 0xc8203e0e70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:235

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:144
Expected error:
    <*errors.errorString | 0xc8203e0e70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:235

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-928c\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-lg3k\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-uojg\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-928c" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-lg3k" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-uojg" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:144
Expected error:
    <*errors.errorString | 0xc8203e0e70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:235

@k8s-github-robot added priority/important-soon and removed priority/backlog labels on Oct 11, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2322/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-y9nb\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zd3z\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-x8hg\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-y9nb" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zd3z" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-x8hg" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc821c724e0>: {
        s: "couldn't find 29 pods within 5m0s; last error: expected to find 29 pods but found only 30",
    }
    couldn't find 29 pods within 5m0s; last error: expected to find 29 pods but found only 30
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-x8hg\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zd3z\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ollm\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-x8hg" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zd3z" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ollm" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-x8hg\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-y9nb\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zd3z\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-x8hg" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-y9nb" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zd3z" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot added priority/critical-urgent and removed priority/important-soon labels on Oct 12, 2016
@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2325/

Multiple broken tests:

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc82023bbc0>: {
        s: "couldn't find 27 pods within 5m0s; last error: expected to find 27 pods but found only 28",
    }
    couldn't find 27 pods within 5m0s; last error: expected to find 27 pods but found only 28
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221
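
In this run the SchedulerPredicates failures look different from the earlier timeouts: "Not scheduled Pods: []api.Pod(nil)" together with "Expected <int>: 0 to equal <int>: 1" means the test expected exactly one pod to remain unscheduled but found none. A minimal hedged illustration of that assertion shape (assertOneUnscheduled and notScheduledPods are assumed names, not the real scheduler_predicates.go code):

    package sketch

    import . "github.com/onsi/gomega"

    // assertOneUnscheduled captures the assumed assertion behind the
    // "Expected <int>: 0 to equal <int>: 1" output: exactly one pod should
    // remain unscheduled, and here zero were found.
    func assertOneUnscheduled(notScheduledPods []string) {
        Expect(len(notScheduledPods)).To(Equal(1))
    }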

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ao16\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-pdhk\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-vgj9\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-ao16" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-pdhk" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-vgj9" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-ao16\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-pdhk\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-vgj9\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-ao16" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-pdhk" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-vgj9" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-msy7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-pdhk\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-vgj9\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-msy7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-pdhk" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-vgj9" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2326/

Multiple broken tests:

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc821316720>: {
        s: "couldn't find 27 pods within 5m0s; last error: expected to find 27 pods but found only 28",
    }
    couldn't find 27 pods within 5m0s; last error: expected to find 27 pods but found only 28
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-w6qm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-xhtw\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-9gng\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-w6qm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-xhtw" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-9gng" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-9gng\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-w6qm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-xhtw\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-9gng" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-w6qm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-xhtw" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-9gng\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-w6qm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-xhtw\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-9gng" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-w6qm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-xhtw" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:142
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:932

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221
