kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-master: broken test run #37761

Closed
k8s-github-robot opened this issue Dec 1, 2016 · 6 comments
Labels: area/test-infra, kind/flake, priority/backlog

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-master/165/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8213adda0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2wxh0] []  0xc8221dec80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8221df480 exit status 1 <nil> true [0xc82018e440 0xc82018e468 0xc82018e498] [0xc82018e440 0xc82018e468 0xc82018e498] [0xc82018e448 0xc82018e460 0xc82018e478] [0xafa5c0 0xafa720 0xafa720] 0xc821c89860}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2wxh0] []  0xc8221dec80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8221df480 exit status 1 <nil> true [0xc82018e440 0xc82018e468 0xc82018e498] [0xc82018e440 0xc82018e468 0xc82018e498] [0xc82018e448 0xc82018e460 0xc82018e478] [0xafa5c0 0xafa720 0xafa720] 0xc821c89860}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964
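Note on the kubectl delete failures in this run: the error text itself names the fix — with this client skew, --grace-period=0 is rejected unless --force is also passed. A minimal sketch of the invocation the message is asking for (placeholder manifest and namespace, not the exact harness call) would be:

    kubectl --kubeconfig=/workspace/.kube/config delete -f - \
        --namespace=<e2e-test-namespace> --grace-period=0 --force < manifest.yaml

The harness pipes the manifest on stdin via -f -, so only the added --force flag differs from the failing command.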

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc82139e4a0>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.009704002s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.009704002s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:56

Issues about this test specifically: #26744 #26929

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820895f60>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821ca6b70>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820fae350>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p5myn] []  0xc821df5b20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82131c640 exit status 1 <nil> true [0xc8202ecd70 0xc8202ecdf0 0xc8202ece18] [0xc8202ecd70 0xc8202ecdf0 0xc8202ece18] [0xc8202ecd88 0xc8202ecde8 0xc8202ece10] [0xafa5c0 0xafa720 0xafa720] 0xc820daecc0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p5myn] []  0xc821df5b20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82131c640 exit status 1 <nil> true [0xc8202ecd70 0xc8202ecdf0 0xc8202ece18] [0xc8202ecd70 0xc8202ecdf0 0xc8202ece18] [0xc8202ecd88 0xc8202ecde8 0xc8202ece10] [0xafa5c0 0xafa720 0xafa720] 0xc820daecc0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-wjhex] []  0xc8221ee3a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8221ee9e0 exit status 1 <nil> true [0xc82018f4d8 0xc82018f500 0xc82018f510] [0xc82018f4d8 0xc82018f500 0xc82018f510] [0xc82018f4e0 0xc82018f4f8 0xc82018f508] [0xafa5c0 0xafa720 0xafa720] 0xc820bce780}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-wjhex] []  0xc8221ee3a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8221ee9e0 exit status 1 <nil> true [0xc82018f4d8 0xc82018f500 0xc82018f510] [0xc82018f4d8 0xc82018f500 0xc82018f510] [0xc82018f4e0 0xc82018f4f8 0xc82018f508] [0xafa5c0 0xafa720 0xafa720] 0xc820bce780}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-z7ru7] []  0xc820c61400  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820c61b60 exit status 1 <nil> true [0xc8211e2438 0xc8211e2460 0xc8211e2480] [0xc8211e2438 0xc8211e2460 0xc8211e2480] [0xc8211e2440 0xc8211e2458 0xc8211e2470] [0xafa5c0 0xafa720 0xafa720] 0xc820ffe060}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-z7ru7] []  0xc820c61400  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820c61b60 exit status 1 <nil> true [0xc8211e2438 0xc8211e2460 0xc8211e2480] [0xc8211e2438 0xc8211e2460 0xc8211e2480] [0xc8211e2440 0xc8211e2458 0xc8211e2470] [0xafa5c0 0xafa720 0xafa720] 0xc820ffe060}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:430
Nov 29 12:00:33.829: Couldn't restore the original cluster size: timeout waiting 10m0s for cluster size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:421

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820faa470>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #34223

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:478
Expected error:
    <*errors.errorString | 0xc820ea4040>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:471

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc821d9cb70>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ou7cp] []  0xc820d1f5a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820d1fc20 exit status 1 <nil> true [0xc8202ed3b8 0xc8202ed3e8 0xc8202ed400] [0xc8202ed3b8 0xc8202ed3e8 0xc8202ed400] [0xc8202ed3c8 0xc8202ed3e0 0xc8202ed3f8] [0xafa5c0 0xafa720 0xafa720] 0xc82125d2c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ou7cp] []  0xc820d1f5a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820d1fc20 exit status 1 <nil> true [0xc8202ed3b8 0xc8202ed3e8 0xc8202ed400] [0xc8202ed3b8 0xc8202ed3e8 0xc8202ed400] [0xc8202ed3c8 0xc8202ed3e0 0xc8202ed3f8] [0xafa5c0 0xafa720 0xafa720] 0xc82125d2c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-a2zu8] []  0xc820ccb4a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820ccbc80 exit status 1 <nil> true [0xc821290250 0xc821290278 0xc821290288] [0xc821290250 0xc821290278 0xc821290288] [0xc821290258 0xc821290270 0xc821290280] [0xafa5c0 0xafa720 0xafa720] 0xc82095c720}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-a2zu8] []  0xc820ccb4a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820ccbc80 exit status 1 <nil> true [0xc821290250 0xc821290278 0xc821290288] [0xc821290250 0xc821290278 0xc821290288] [0xc821290258 0xc821290270 0xc821290280] [0xafa5c0 0xafa720 0xafa720] 0xc82095c720}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8213d21b0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821506f50>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #36914

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82018ab80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8213fbf60>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8213f31d0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820fb1b40>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28019

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-lsbs1] []  0xc820d83a40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82078e120 exit status 1 <nil> true [0xc8202ec490 0xc8202ec4f8 0xc8202ec510] [0xc8202ec490 0xc8202ec4f8 0xc8202ec510] [0xc8202ec4a0 0xc8202ec4e0 0xc8202ec508] [0xafa5c0 0xafa720 0xafa720] 0xc82152d080}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-lsbs1] []  0xc820d83a40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82078e120 exit status 1 <nil> true [0xc8202ec490 0xc8202ec4f8 0xc8202ec510] [0xc8202ec490 0xc8202ec4f8 0xc8202ec510] [0xc8202ec4a0 0xc8202ec4e0 0xc8202ec508] [0xafa5c0 0xafa720 0xafa720] 0xc82152d080}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821544620>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #33883

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.errorString | 0xc82196cdc0>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:357

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.errorString | 0xc8221f6ed0>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:351

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-rsew4] []  0xc8218db0e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8218db940 exit status 1 <nil> true [0xc821a225c8 0xc821a225f0 0xc821a22600] [0xc821a225c8 0xc821a225f0 0xc821a22600] [0xc821a225d0 0xc821a225e8 0xc821a225f8] [0xafa5c0 0xafa720 0xafa720] 0xc821c88060}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-rsew4] []  0xc8218db0e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8218db940 exit status 1 <nil> true [0xc821a225c8 0xc821a225f0 0xc821a22600] [0xc821a225c8 0xc821a225f0 0xc821a22600] [0xc821a225d0 0xc821a225e8 0xc821a225f8] [0xafa5c0 0xafa720 0xafa720] 0xc821c88060}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8210cc9a0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8213d2ea0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29516

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-krlwp -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-11-30T00:47:28Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-krlwp\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-krlwp/services/redis-master\", \"uid\":\"918a3c5b-b696-11e6-b365-42010af00015\", \"resourceVersion\":\"37571\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.250.194\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821b58660 exit status 1 <nil> true [0xc8200901e0 0xc8200901f8 0xc820090228] [0xc8200901e0 0xc8200901f8 0xc820090228] [0xc8200901f0 0xc820090218] [0xafa720 0xafa720] 0xc821668240}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-11-30T00:47:28Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-krlwp\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-krlwp/services/redis-master\", \"uid\":\"918a3c5b-b696-11e6-b365-42010af00015\", \"resourceVersion\":\"37571\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.250.194\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-krlwp -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-11-30T00:47:28Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-krlwp", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-krlwp/services/redis-master", "uid":"918a3c5b-b696-11e6-b365-42010af00015", "resourceVersion":"37571"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.250.194"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821b58660 exit status 1 <nil> true [0xc8200901e0 0xc8200901f8 0xc820090228] [0xc8200901e0 0xc8200901f8 0xc820090228] [0xc8200901f0 0xc820090218] [0xafa720 0xafa720] 0xc821668240}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-11-30T00:47:28Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-krlwp", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-krlwp/services/redis-master", "uid":"918a3c5b-b696-11e6-b365-42010af00015", "resourceVersion":"37571"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.250.194"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
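Note on the jsonpath failure above: the dumped Service object is of type ClusterIP and its port entry carries no nodePort field, so the template {.spec.ports[0].nodePort} has nothing to match, while the test ("should reuse nodePort when apply to an existing SVC") expects a NodePort to still be present after the apply. For reference, the lookup the test runs is equivalent to (namespace placeholder, not the harness wrapper):

    kubectl get service redis-master --namespace=<e2e-test-namespace> \
        -o jsonpath='{.spec.ports[0].nodePort}'

which exits non-zero whenever the field is absent, as seen in the log.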

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821f664f0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #35279

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-cxuoc] []  0xc8212fbf40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821236680 exit status 1 <nil> true [0xc821290ae0 0xc821290b08 0xc821290b18] [0xc821290ae0 0xc821290b08 0xc821290b18] [0xc821290ae8 0xc821290b00 0xc821290b10] [0xafa5c0 0xafa720 0xafa720] 0xc8218c3ec0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-cxuoc] []  0xc8212fbf40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821236680 exit status 1 <nil> true [0xc821290ae0 0xc821290b08 0xc821290b18] [0xc821290ae0 0xc821290b08 0xc821290b18] [0xc821290ae8 0xc821290b00 0xc821290b10] [0xafa5c0 0xafa720 0xafa720] 0xc8218c3ec0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82018ab80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-42j6l] []  0xc8217ae780  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8217aedc0 exit status 1 <nil> true [0xc8211e22b0 0xc8211e22d8 0xc8211e22e8] [0xc8211e22b0 0xc8211e22d8 0xc8211e22e8] [0xc8211e22b8 0xc8211e22d0 0xc8211e22e0] [0xafa5c0 0xafa720 0xafa720] 0xc82173c7e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.138.98 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-42j6l] []  0xc8217ae780  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8217aedc0 exit status 1 <nil> true [0xc8211e22b0 0xc8211e22d8 0xc8211e22e8] [0xc8211e22b0 0xc8211e22d8 0xc8211e22e8] [0xc8211e22b8 0xc8211e22d0 0xc8211e22e0] [0xafa5c0 0xafa720 0xafa720] 0xc82173c7e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc82018ab80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8210823c0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8212cce50>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82086acf0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #30078 #30142

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-master/167/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821ccf2a0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28091

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xtrag] []  0xc820c0b920  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8208d4100 exit status 1 <nil> true [0xc8217dc060 0xc8217dc088 0xc8217dc098] [0xc8217dc060 0xc8217dc088 0xc8217dc098] [0xc8217dc068 0xc8217dc080 0xc8217dc090] [0xafa5c0 0xafa720 0xafa720] 0xc8218d1200}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xtrag] []  0xc820c0b920  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8208d4100 exit status 1 <nil> true [0xc8217dc060 0xc8217dc088 0xc8217dc098] [0xc8217dc060 0xc8217dc088 0xc8217dc098] [0xc8217dc068 0xc8217dc080 0xc8217dc090] [0xafa5c0 0xafa720 0xafa720] 0xc8218d1200}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820de7720>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29516

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-hg01a] []  0xc8214b83a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8214b8a20 exit status 1 <nil> true [0xc8201a4320 0xc8201a43a0 0xc8201a43b8] [0xc8201a4320 0xc8201a43a0 0xc8201a43b8] [0xc8201a4338 0xc8201a4370 0xc8201a43a8] [0xafa5c0 0xafa720 0xafa720] 0xc8213fa1e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-hg01a] []  0xc8214b83a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8214b8a20 exit status 1 <nil> true [0xc8201a4320 0xc8201a43a0 0xc8201a43b8] [0xc8201a4320 0xc8201a43a0 0xc8201a43b8] [0xc8201a4338 0xc8201a4370 0xc8201a43a8] [0xafa5c0 0xafa720 0xafa720] 0xc8213fa1e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82183ea60>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821339ff0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #34223

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bydu1] []  0xc8214b80c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8214b8740 exit status 1 <nil> true [0xc820d1c2e8 0xc820d1c338 0xc820d1c358] [0xc820d1c2e8 0xc820d1c338 0xc820d1c358] [0xc820d1c2f8 0xc820d1c328 0xc820d1c348] [0xafa5c0 0xafa720 0xafa720] 0xc8219d4240}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bydu1] []  0xc8214b80c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8214b8740 exit status 1 <nil> true [0xc820d1c2e8 0xc820d1c338 0xc820d1c358] [0xc820d1c2e8 0xc820d1c338 0xc820d1c358] [0xc820d1c2f8 0xc820d1c328 0xc820d1c348] [0xafa5c0 0xafa720 0xafa720] 0xc8219d4240}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82192c8d0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #33883

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xdhcr] []  0xc8207f6e00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8207f75c0 exit status 1 <nil> true [0xc820036570 0xc8200365e8 0xc8200365f8] [0xc820036570 0xc8200365e8 0xc8200365f8] [0xc820036588 0xc8200365d8 0xc8200365f0] [0xafa5c0 0xafa720 0xafa720] 0xc8214050e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xdhcr] []  0xc8207f6e00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8207f75c0 exit status 1 <nil> true [0xc820036570 0xc8200365e8 0xc8200365f8] [0xc820036570 0xc8200365e8 0xc8200365f8] [0xc820036588 0xc8200365d8 0xc8200365f0] [0xafa5c0 0xafa720 0xafa720] 0xc8214050e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8203ad530>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820a45440>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-c1s8o -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-c1s8o\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-c1s8o/services/redis-master\", \"uid\":\"c85eefe8-b6df-11e6-bbc9-42010af0002e\", \"resourceVersion\":\"43536\", \"creationTimestamp\":\"2016-11-30T09:31:33Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.254.181\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821bea0c0 exit status 1 <nil> true [0xc8201a46a0 0xc8201a4758 0xc8201a47a8] [0xc8201a46a0 0xc8201a4758 0xc8201a47a8] [0xc8201a4718 0xc8201a4778] [0xafa720 0xafa720] 0xc8213fb7a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-c1s8o\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-c1s8o/services/redis-master\", \"uid\":\"c85eefe8-b6df-11e6-bbc9-42010af0002e\", \"resourceVersion\":\"43536\", \"creationTimestamp\":\"2016-11-30T09:31:33Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.254.181\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-c1s8o -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-c1s8o", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-c1s8o/services/redis-master", "uid":"c85eefe8-b6df-11e6-bbc9-42010af0002e", "resourceVersion":"43536", "creationTimestamp":"2016-11-30T09:31:33Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}, "spec":map[string]interface {}{"clusterIP":"10.127.254.181", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821bea0c0 exit status 1 <nil> true [0xc8201a46a0 0xc8201a4758 0xc8201a47a8] [0xc8201a46a0 0xc8201a4758 0xc8201a47a8] [0xc8201a4718 0xc8201a4778] [0xafa720 0xafa720] 0xc8213fb7a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-c1s8o", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-c1s8o/services/redis-master", "uid":"c85eefe8-b6df-11e6-bbc9-42010af0002e", "resourceVersion":"43536", "creationTimestamp":"2016-11-30T09:31:33Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}, "spec":map[string]interface {}{"clusterIP":"10.127.254.181", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
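
Here the jsonpath expression itself is fine; the service the apiserver returned is of type ClusterIP (see the object dump above: only port 6379 and no nodePort in .spec.ports[0]), so there is no nodePort to read and the lookup fails. A quick way to confirm what kubectl apply actually left behind, as a sketch against the same namespace from the log:

kubectl --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-c1s8o get service redis-master -o jsonpath='{.spec.type} {.spec.ports[0]}'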

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc820d3de30>: {
        s: "failed to wait for pods responding: pod with UID 97e4edd8-b6cc-11e6-bbc9-42010af0002e is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-v2nhb/pods 28877} [{{ } {my-hostname-delete-node-05ptu my-hostname-delete-node- e2e-tests-resize-nodes-v2nhb /api/v1/namespaces/e2e-tests-resize-nodes-v2nhb/pods/my-hostname-delete-node-05ptu 97e6719b-b6cc-11e6-bbc9-42010af0002e 28565 0 2016-11-29 23:14:11 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-v2nhb\",\"name\":\"my-hostname-delete-node\",\"uid\":\"97e31082-b6cc-11e6-bbc9-42010af0002e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"28547\"}}\n] [{v1 ReplicationController my-hostname-delete-node 97e31082-b6cc-11e6-bbc9-42010af0002e 0xc8219e9457}] [] } {[{default-token-qrhi8 {<nil> <nil> <nil> <nil> <nil> 0xc821af7650 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qrhi8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8219e9580 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-24b752c2-f4jz 0xc821cf2e80 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:11 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:11 -0800 PST  }]   10.240.0.2 10.124.2.138 2016-11-29 23:14:11 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821225100 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3e159405e7ef2c795e98e014f118410494cbaac967497c3d0b0980daabfcd83e}]}} {{ } {my-hostname-delete-node-f4zha my-hostname-delete-node- e2e-tests-resize-nodes-v2nhb /api/v1/namespaces/e2e-tests-resize-nodes-v2nhb/pods/my-hostname-delete-node-f4zha 97e5140f-b6cc-11e6-bbc9-42010af0002e 28559 0 2016-11-29 23:14:11 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-v2nhb\",\"name\":\"my-hostname-delete-node\",\"uid\":\"97e31082-b6cc-11e6-bbc9-42010af0002e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"28547\"}}\n] [{v1 ReplicationController my-hostname-delete-node 97e31082-b6cc-11e6-bbc9-42010af0002e 0xc8219e9887}] [] } {[{default-token-qrhi8 {<nil> <nil> <nil> <nil> <nil> 0xc821af76b0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qrhi8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8219e9980 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-24b752c2-yqio 0xc821cf2f40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:11 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2016-11-29 23:14:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:11 -0800 PST  }]   10.240.0.3 10.124.1.214 2016-11-29 23:14:11 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821225120 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://1bb00e22aa41ee7f5568d6c08ea9c828de55a38f399077791bd0607843778261}]}} {{ } {my-hostname-delete-node-wda4a my-hostname-delete-node- e2e-tests-resize-nodes-v2nhb /api/v1/namespaces/e2e-tests-resize-nodes-v2nhb/pods/my-hostname-delete-node-wda4a cc93de93-b6cc-11e6-bbc9-42010af0002e 28714 0 2016-11-29 23:15:40 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-v2nhb\",\"name\":\"my-hostname-delete-node\",\"uid\":\"97e31082-b6cc-11e6-bbc9-42010af0002e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"28630\"}}\n] [{v1 ReplicationController my-hostname-delete-node 97e31082-b6cc-11e6-bbc9-42010af0002e 0xc8219e9c17}] [] } {[{default-token-qrhi8 {<nil> <nil> <nil> <nil> <nil> 0xc821af7710 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qrhi8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8219e9d20 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-24b752c2-yqio 0xc821cf3000 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:15:40 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:15:41 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:15:40 -0800 PST  }]   10.240.0.3 10.124.1.216 2016-11-29 23:15:40 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821225140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://5cbc3d863820ef37e16d2c67cbb5772d8805079ce54ebb3b033ab2d617714962}]}}]}",
    }
    failed to wait for pods responding: pod with UID 97e4edd8-b6cc-11e6-bbc9-42010af0002e is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-v2nhb/pods 28877} [{{ } {my-hostname-delete-node-05ptu my-hostname-delete-node- e2e-tests-resize-nodes-v2nhb /api/v1/namespaces/e2e-tests-resize-nodes-v2nhb/pods/my-hostname-delete-node-05ptu 97e6719b-b6cc-11e6-bbc9-42010af0002e 28565 0 2016-11-29 23:14:11 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-v2nhb","name":"my-hostname-delete-node","uid":"97e31082-b6cc-11e6-bbc9-42010af0002e","apiVersion":"v1","resourceVersion":"28547"}}
    ] [{v1 ReplicationController my-hostname-delete-node 97e31082-b6cc-11e6-bbc9-42010af0002e 0xc8219e9457}] [] } {[{default-token-qrhi8 {<nil> <nil> <nil> <nil> <nil> 0xc821af7650 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qrhi8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8219e9580 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-24b752c2-f4jz 0xc821cf2e80 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:11 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:11 -0800 PST  }]   10.240.0.2 10.124.2.138 2016-11-29 23:14:11 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821225100 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3e159405e7ef2c795e98e014f118410494cbaac967497c3d0b0980daabfcd83e}]}} {{ } {my-hostname-delete-node-f4zha my-hostname-delete-node- e2e-tests-resize-nodes-v2nhb /api/v1/namespaces/e2e-tests-resize-nodes-v2nhb/pods/my-hostname-delete-node-f4zha 97e5140f-b6cc-11e6-bbc9-42010af0002e 28559 0 2016-11-29 23:14:11 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-v2nhb","name":"my-hostname-delete-node","uid":"97e31082-b6cc-11e6-bbc9-42010af0002e","apiVersion":"v1","resourceVersion":"28547"}}
    ] [{v1 ReplicationController my-hostname-delete-node 97e31082-b6cc-11e6-bbc9-42010af0002e 0xc8219e9887}] [] } {[{default-token-qrhi8 {<nil> <nil> <nil> <nil> <nil> 0xc821af76b0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qrhi8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8219e9980 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-24b752c2-yqio 0xc821cf2f40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:11 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:14:11 -0800 PST  }]   10.240.0.3 10.124.1.214 2016-11-29 23:14:11 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821225120 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://1bb00e22aa41ee7f5568d6c08ea9c828de55a38f399077791bd0607843778261}]}} {{ } {my-hostname-delete-node-wda4a my-hostname-delete-node- e2e-tests-resize-nodes-v2nhb /api/v1/namespaces/e2e-tests-resize-nodes-v2nhb/pods/my-hostname-delete-node-wda4a cc93de93-b6cc-11e6-bbc9-42010af0002e 28714 0 2016-11-29 23:15:40 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-v2nhb","name":"my-hostname-delete-node","uid":"97e31082-b6cc-11e6-bbc9-42010af0002e","apiVersion":"v1","resourceVersion":"28630"}}
    ] [{v1 ReplicationController my-hostname-delete-node 97e31082-b6cc-11e6-bbc9-42010af0002e 0xc8219e9c17}] [] } {[{default-token-qrhi8 {<nil> <nil> <nil> <nil> <nil> 0xc821af7710 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qrhi8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8219e9d20 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-24b752c2-yqio 0xc821cf3000 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:15:40 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:15:41 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 23:15:40 -0800 PST  }]   10.240.0.3 10.124.1.216 2016-11-29 23:15:40 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821225140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://5cbc3d863820ef37e16d2c67cbb5772d8805079ce54ebb3b033ab2d617714962}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821986b20>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8201b2630>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177
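
For the Job failure the only signal is the generic "timed out waiting for the condition" from job.go:197, i.e. whatever state the test polls for (for "should fail a job", presumably the job reaching a failed state) never showed up within the timeout. A sketch of a first triage step; the namespace and job name below are placeholders, not values taken from this log:

kubectl --kubeconfig=/workspace/.kube/config --namespace=<e2e-tests-job-namespace> get pods -l job-name=<job-name> -o wide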

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d730f0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820a9a0f0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8213012c0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2t6q7] []  0xc8214c1ca0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8213ca640 exit status 1 <nil> true [0xc8200c43a8 0xc8200c43d0 0xc8200c43e0] [0xc8200c43a8 0xc8200c43d0 0xc8200c43e0] [0xc8200c43b0 0xc8200c43c8 0xc8200c43d8] [0xafa5c0 0xafa720 0xafa720] 0xc82171df80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2t6q7] []  0xc8214c1ca0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8213ca640 exit status 1 <nil> true [0xc8200c43a8 0xc8200c43d0 0xc8200c43e0] [0xc8200c43a8 0xc8200c43d0 0xc8200c43e0] [0xc8200c43b0 0xc8200c43c8 0xc8200c43d8] [0xafa5c0 0xafa720 0xafa720] 0xc82171df80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821cd6730>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d64bf0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #35279

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-n3yu6] []  0xc821d8bb40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821f8e1c0 exit status 1 <nil> true [0xc8201a4770 0xc8201a47f0 0xc8201a4810] [0xc8201a4770 0xc8201a47f0 0xc8201a4810] [0xc8201a4788 0xc8201a47d8 0xc8201a47f8] [0xafa5c0 0xafa720 0xafa720] 0xc821c73380}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-n3yu6] []  0xc821d8bb40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821f8e1c0 exit status 1 <nil> true [0xc8201a4770 0xc8201a47f0 0xc8201a4810] [0xc8201a4770 0xc8201a47f0 0xc8201a4810] [0xc8201a4788 0xc8201a47d8 0xc8201a47f8] [0xafa5c0 0xafa720 0xafa720] 0xc821c73380}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821679ad0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8219d7c50>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8201b2630>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82032de80>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28071

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-jlfm6] []  0xc821bc4340  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821bc49e0 exit status 1 <nil> true [0xc8217dc048 0xc8217dc0a8 0xc8217dc0b8] [0xc8217dc048 0xc8217dc0a8 0xc8217dc0b8] [0xc8217dc068 0xc8217dc098 0xc8217dc0b0] [0xafa5c0 0xafa720 0xafa720] 0xc8220c82a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-jlfm6] []  0xc821bc4340  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821bc49e0 exit status 1 <nil> true [0xc8217dc048 0xc8217dc0a8 0xc8217dc0b8] [0xc8217dc048 0xc8217dc0a8 0xc8217dc0b8] [0xc8217dc068 0xc8217dc098 0xc8217dc0b0] [0xafa5c0 0xafa720 0xafa720] 0xc8220c82a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820a4db50>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #36914

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201b2630>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:430
Expected error:
    <*errors.errorString | 0xc821520890>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:427

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-f1mso] []  0xc820d9ca20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820d9d180 exit status 1 <nil> true [0xc820036510 0xc8200365d8 0xc820036608] [0xc820036510 0xc8200365d8 0xc820036608] [0xc820036570 0xc8200365b0 0xc8200365e8] [0xafa5c0 0xafa720 0xafa720] 0xc8214059e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-f1mso] []  0xc820d9ca20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820d9d180 exit status 1 <nil> true [0xc820036510 0xc8200365d8 0xc820036608] [0xc820036510 0xc8200365d8 0xc820036608] [0xc820036570 0xc8200365b0 0xc8200365e8] [0xafa5c0 0xafa720 0xafa720] 0xc8214059e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-mvhrf] []  0xc820e36c00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820e37720 exit status 1 <nil> true [0xc821350638 0xc821350660 0xc821350670] [0xc821350638 0xc821350660 0xc821350670] [0xc821350640 0xc821350658 0xc821350668] [0xafa5c0 0xafa720 0xafa720] 0xc8212efda0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-mvhrf] []  0xc820e36c00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820e37720 exit status 1 <nil> true [0xc821350638 0xc821350660 0xc821350670] [0xc821350638 0xc821350660 0xc821350670] [0xc821350640 0xc821350658 0xc821350668] [0xafa5c0 0xafa720 0xafa720] 0xc8212efda0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-gfuwt] []  0xc8211a59a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8214c0040 exit status 1 <nil> true [0xc8205cc3a8 0xc8205cc3d0 0xc8205cc3e8] [0xc8205cc3a8 0xc8205cc3d0 0xc8205cc3e8] [0xc8205cc3b0 0xc8205cc3c8 0xc8205cc3e0] [0xafa5c0 0xafa720 0xafa720] 0xc8212239e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-gfuwt] []  0xc8211a59a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8214c0040 exit status 1 <nil> true [0xc8205cc3a8 0xc8205cc3d0 0xc8205cc3e8] [0xc8205cc3a8 0xc8205cc3d0 0xc8205cc3e8] [0xc8205cc3b0 0xc8205cc3c8 0xc8205cc3e0] [0xafa5c0 0xafa720 0xafa720] 0xc8212239e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/backlog Higher priority than priority/awaiting-more-evidence. area/test-infra labels Dec 1, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-master/168/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-820lp] []  0xc8216502a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8216508e0 exit status 1 <nil> true [0xc820d18708 0xc820d18738 0xc820d18758] [0xc820d18708 0xc820d18738 0xc820d18758] [0xc820d18710 0xc820d18730 0xc820d18748] [0xafa5c0 0xafa720 0xafa720] 0xc8215f8240}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-820lp] []  0xc8216502a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8216508e0 exit status 1 <nil> true [0xc820d18708 0xc820d18738 0xc820d18758] [0xc820d18708 0xc820d18738 0xc820d18758] [0xc820d18710 0xc820d18730 0xc820d18748] [0xafa5c0 0xafa720 0xafa720] 0xc8215f8240}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820184a90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-l9c2x] []  0xc820be4a40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820be52a0 exit status 1 <nil> true [0xc820e3a530 0xc820e3a580 0xc820e3a5a0] [0xc820e3a530 0xc820e3a580 0xc820e3a5a0] [0xc820e3a540 0xc820e3a570 0xc820e3a590] [0xafa5c0 0xafa720 0xafa720] 0xc820951d40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-l9c2x] []  0xc820be4a40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820be52a0 exit status 1 <nil> true [0xc820e3a530 0xc820e3a580 0xc820e3a5a0] [0xc820e3a530 0xc820e3a580 0xc820e3a5a0] [0xc820e3a540 0xc820e3a570 0xc820e3a590] [0xafa5c0 0xafa720 0xafa720] 0xc820951d40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-n0d11] []  0xc820ec7ac0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821608180 exit status 1 <nil> true [0xc821012c80 0xc821012ca8 0xc821012cb8] [0xc821012c80 0xc821012ca8 0xc821012cb8] [0xc821012c88 0xc821012ca0 0xc821012cb0] [0xafa5c0 0xafa720 0xafa720] 0xc820f133e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-n0d11] []  0xc820ec7ac0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821608180 exit status 1 <nil> true [0xc821012c80 0xc821012ca8 0xc821012cb8] [0xc821012c80 0xc821012ca8 0xc821012cb8] [0xc821012c88 0xc821012ca0 0xc821012cb0] [0xafa5c0 0xafa720 0xafa720] 0xc820f133e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-4b8k9] []  0xc820f870a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820f87d40 exit status 1 <nil> true [0xc820e3a1a0 0xc820e3a200 0xc820e3a248] [0xc820e3a1a0 0xc820e3a200 0xc820e3a248] [0xc820e3a1b0 0xc820e3a1e8 0xc820e3a210] [0xafa5c0 0xafa720 0xafa720] 0xc820950300}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-4b8k9] []  0xc820f870a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820f87d40 exit status 1 <nil> true [0xc820e3a1a0 0xc820e3a200 0xc820e3a248] [0xc820e3a1a0 0xc820e3a200 0xc820e3a248] [0xc820e3a1b0 0xc820e3a1e8 0xc820e3a210] [0xafa5c0 0xafa720 0xafa720] 0xc820950300}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820184a90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-9v887 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"47ad12ed-b709-11e6-81cf-42010af0002a\", \"resourceVersion\":\"25336\", \"creationTimestamp\":\"2016-11-30T14:28:36Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-9v887\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-9v887/services/redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.253.16\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821475ce0 exit status 1 <nil> true [0xc820e3a288 0xc820e3a2a0 0xc820e3a2c0] [0xc820e3a288 0xc820e3a2a0 0xc820e3a2c0] [0xc820e3a298 0xc820e3a2b0] [0xafa720 0xafa720] 0xc821c728a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"47ad12ed-b709-11e6-81cf-42010af0002a\", \"resourceVersion\":\"25336\", \"creationTimestamp\":\"2016-11-30T14:28:36Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-9v887\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-9v887/services/redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.253.16\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-9v887 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"47ad12ed-b709-11e6-81cf-42010af0002a", "resourceVersion":"25336", "creationTimestamp":"2016-11-30T14:28:36Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-9v887", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-9v887/services/redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.253.16", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821475ce0 exit status 1 <nil> true [0xc820e3a288 0xc820e3a2a0 0xc820e3a2c0] [0xc820e3a288 0xc820e3a2a0 0xc820e3a2c0] [0xc820e3a298 0xc820e3a2b0] [0xafa720 0xafa720] 0xc821c728a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"47ad12ed-b709-11e6-81cf-42010af0002a", "resourceVersion":"25336", "creationTimestamp":"2016-11-30T14:28:36Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-9v887", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-9v887/services/redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.253.16", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
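
Unlike the deletion failures, this one is a data problem rather than a flag problem: the Service object dumped above is of type ClusterIP, so `.spec.ports[0].nodePort` simply does not exist, and the jsonpath lookup fails with "nodePort is not found". A small sketch of the same lookup against a ClusterIP-shaped object is below; it uses the jsonpath engine at its current client-go import path (the 1.5-era kubectl vendored an equivalent package), and the map literal is a trimmed stand-in for the dumped Service.

```go
// Minimal sketch: querying a missing field with kubectl's jsonpath engine
// produces the "<field> is not found" error seen in the log above.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/util/jsonpath"
)

func main() {
	// A ClusterIP service port, as in the object dump above: no nodePort key.
	svc := map[string]interface{}{
		"spec": map[string]interface{}{
			"ports": []interface{}{
				map[string]interface{}{"protocol": "TCP", "port": 6379, "targetPort": "redis-server"},
			},
		},
	}

	jp := jsonpath.New("nodePort")
	if err := jp.Parse("{.spec.ports[0].nodePort}"); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	if err := jp.Execute(os.Stdout, svc); err != nil {
		fmt.Println("execute error:", err) // nodePort is not found
	}
}
```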

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p1fp4] []  0xc8205742a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820574d60 exit status 1 <nil> true [0xc821012778 0xc8210127a0 0xc8210127b0] [0xc821012778 0xc8210127a0 0xc8210127b0] [0xc821012780 0xc821012798 0xc8210127a8] [0xafa5c0 0xafa720 0xafa720] 0xc820cefc20}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p1fp4] []  0xc8205742a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820574d60 exit status 1 <nil> true [0xc821012778 0xc8210127a0 0xc8210127b0] [0xc821012778 0xc8210127a0 0xc8210127b0] [0xc821012780 0xc821012798 0xc8210127a8] [0xafa5c0 0xafa720 0xafa720] 0xc820cefc20}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820184a90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-3z8kx] []  0xc8207dc740  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8207dcd80 exit status 1 <nil> true [0xc820180190 0xc820180220 0xc8201803b8] [0xc820180190 0xc820180220 0xc8201803b8] [0xc8201801b0 0xc8201801c8 0xc820180228] [0xafa5c0 0xafa720 0xafa720] 0xc820d9e6c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-3z8kx] []  0xc8207dc740  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8207dcd80 exit status 1 <nil> true [0xc820180190 0xc820180220 0xc8201803b8] [0xc820180190 0xc820180220 0xc8201803b8] [0xc8201801b0 0xc8201801c8 0xc820180228] [0xafa5c0 0xafa720 0xafa720] 0xc820d9e6c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-pkpt9] []  0xc820bb30a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820bb3880 exit status 1 <nil> true [0xc820d18558 0xc820d18580 0xc820d18590] [0xc820d18558 0xc820d18580 0xc820d18590] [0xc820d18560 0xc820d18578 0xc820d18588] [0xafa5c0 0xafa720 0xafa720] 0xc8207ecde0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-pkpt9] []  0xc820bb30a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820bb3880 exit status 1 <nil> true [0xc820d18558 0xc820d18580 0xc820d18590] [0xc820d18558 0xc820d18580 0xc820d18590] [0xc820d18560 0xc820d18578 0xc820d18588] [0xafa5c0 0xafa720 0xafa720] 0xc8207ecde0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-1rcsg] []  0xc820bce620  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820bceda0 exit status 1 <nil> true [0xc82133c0e8 0xc82133c110 0xc82133c120] [0xc82133c0e8 0xc82133c110 0xc82133c120] [0xc82133c0f0 0xc82133c108 0xc82133c118] [0xafa5c0 0xafa720 0xafa720] 0xc822242cc0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-1rcsg] []  0xc820bce620  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820bceda0 exit status 1 <nil> true [0xc82133c0e8 0xc82133c110 0xc82133c120] [0xc82133c0e8 0xc82133c110 0xc82133c120] [0xc82133c0f0 0xc82133c108 0xc82133c118] [0xafa5c0 0xafa720 0xafa720] 0xc822242cc0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-6rgdz] []  0xc8207e0d00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8207e1340 exit status 1 <nil> true [0xc820036498 0xc8200364e0 0xc8200364f8] [0xc820036498 0xc8200364e0 0xc8200364f8] [0xc8200364a0 0xc8200364d0 0xc8200364f0] [0xafa5c0 0xafa720 0xafa720] 0xc82099cc60}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-6rgdz] []  0xc8207e0d00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8207e1340 exit status 1 <nil> true [0xc820036498 0xc8200364e0 0xc8200364f8] [0xc820036498 0xc8200364e0 0xc8200364f8] [0xc8200364a0 0xc8200364d0 0xc8200364f0] [0xafa5c0 0xafa720 0xafa720] 0xc82099cc60}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-11zxz] []  0xc821b34320  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821b349a0 exit status 1 <nil> true [0xc820d18018 0xc820d18060 0xc820d18070] [0xc820d18018 0xc820d18060 0xc820d18070] [0xc820d18028 0xc820d18058 0xc820d18068] [0xafa5c0 0xafa720 0xafa720] 0xc8213cc480}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-11zxz] []  0xc821b34320  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821b349a0 exit status 1 <nil> true [0xc820d18018 0xc820d18060 0xc820d18070] [0xc820d18018 0xc820d18060 0xc820d18070] [0xc820d18028 0xc820d18058 0xc820d18068] [0xafa5c0 0xafa720 0xafa720] 0xc8213cc480}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-master/169/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-0nkt3] []  0xc8210b65a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8210b6c20 exit status 1 <nil> true [0xc8217bdae8 0xc8217bdb10 0xc8217bdb20] [0xc8217bdae8 0xc8217bdb10 0xc8217bdb20] [0xc8217bdaf0 0xc8217bdb08 0xc8217bdb18] [0xafa5c0 0xafa720 0xafa720] 0xc82083e780}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-0nkt3] []  0xc8210b65a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8210b6c20 exit status 1 <nil> true [0xc8217bdae8 0xc8217bdb10 0xc8217bdb20] [0xc8217bdae8 0xc8217bdb10 0xc8217bdb20] [0xc8217bdaf0 0xc8217bdb08 0xc8217bdb18] [0xafa5c0 0xafa720 0xafa720] 0xc82083e780}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9b97x] []  0xc8205d3f60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820bae720 exit status 1 <nil> true [0xc820d84590 0xc820d845b8 0xc820d845c8] [0xc820d84590 0xc820d845b8 0xc820d845c8] [0xc820d84598 0xc820d845b0 0xc820d845c0] [0xafa5c0 0xafa720 0xafa720] 0xc820b14ae0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9b97x] []  0xc8205d3f60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820bae720 exit status 1 <nil> true [0xc820d84590 0xc820d845b8 0xc820d845c8] [0xc820d84590 0xc820d845b8 0xc820d845c8] [0xc820d84598 0xc820d845b0 0xc820d845c0] [0xafa5c0 0xafa720 0xafa720] 0xc820b14ae0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820178ac0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-n7v8t] []  0xc8218bab20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8218bb1c0 exit status 1 <nil> true [0xc820d84850 0xc820d848a8 0xc820d848c8] [0xc820d84850 0xc820d848a8 0xc820d848c8] [0xc820d84860 0xc820d84898 0xc820d848c0] [0xafa5c0 0xafa720 0xafa720] 0xc820dd3680}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-n7v8t] []  0xc8218bab20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8218bb1c0 exit status 1 <nil> true [0xc820d84850 0xc820d848a8 0xc820d848c8] [0xc820d84850 0xc820d848a8 0xc820d848c8] [0xc820d84860 0xc820d84898 0xc820d848c0] [0xafa5c0 0xafa720 0xafa720] 0xc820dd3680}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-3qjc3] []  0xc8211fd360  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820ffa020 exit status 1 <nil> true [0xc8217bcbc0 0xc8217bcbe8 0xc8217bcbf8] [0xc8217bcbc0 0xc8217bcbe8 0xc8217bcbf8] [0xc8217bcbc8 0xc8217bcbe0 0xc8217bcbf0] [0xafa5c0 0xafa720 0xafa720] 0xc8216711a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-3qjc3] []  0xc8211fd360  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820ffa020 exit status 1 <nil> true [0xc8217bcbc0 0xc8217bcbe8 0xc8217bcbf8] [0xc8217bcbc0 0xc8217bcbe8 0xc8217bcbf8] [0xc8217bcbc8 0xc8217bcbe0 0xc8217bcbf0] [0xafa5c0 0xafa720 0xafa720] 0xc8216711a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc820f358c0>: {
        s: "failed to wait for pods responding: pod with UID 566e81a4-b737-11e6-8145-42010af0001f is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-kbfr3/pods 12699} [{{ } {my-hostname-delete-node-33sgr my-hostname-delete-node- e2e-tests-resize-nodes-kbfr3 /api/v1/namespaces/e2e-tests-resize-nodes-kbfr3/pods/my-hostname-delete-node-33sgr 89b0a261-b737-11e6-8145-42010af0001f 12547 0 2016-11-30 11:59:44 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-kbfr3\",\"name\":\"my-hostname-delete-node\",\"uid\":\"566c9fe1-b737-11e6-8145-42010af0001f\",\"apiVersion\":\"v1\",\"resourceVersion\":\"12483\"}}\n] [{v1 ReplicationController my-hostname-delete-node 566c9fe1-b737-11e6-8145-42010af0001f 0xc82119af17}] [] } {[{default-token-mtkb4 {<nil> <nil> <nil> <nil> <nil> 0xc821331230 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mtkb4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82119b010 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-d9f18bc1-85bz 0xc821ce6cc0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:59:44 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:59:44 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:59:44 -0800 PST  }]   10.240.0.4 10.124.2.132 2016-11-30 11:59:44 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8214039e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://736411025aa89d39342a74a36443710722ea76be61df63533c83cbea94771666}]}} {{ } {my-hostname-delete-node-3rjmk my-hostname-delete-node- e2e-tests-resize-nodes-kbfr3 /api/v1/namespaces/e2e-tests-resize-nodes-kbfr3/pods/my-hostname-delete-node-3rjmk 566eba1f-b737-11e6-8145-42010af0001f 12405 0 2016-11-30 11:58:18 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-kbfr3\",\"name\":\"my-hostname-delete-node\",\"uid\":\"566c9fe1-b737-11e6-8145-42010af0001f\",\"apiVersion\":\"v1\",\"resourceVersion\":\"12392\"}}\n] [{v1 ReplicationController my-hostname-delete-node 566c9fe1-b737-11e6-8145-42010af0001f 0xc82119b2a7}] [] } {[{default-token-mtkb4 {<nil> <nil> <nil> <nil> <nil> 0xc821331290 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mtkb4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82119b3a0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-d9f18bc1-jb3n 0xc821ce6d80 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:18 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2016-11-30 11:58:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:18 -0800 PST  }]   10.240.0.2 10.124.0.206 2016-11-30 11:58:18 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821403a00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://02701e8d5528cb14e36c475addfd2d09ee523190a68138546a0e8ebcfd819478}]}} {{ } {my-hostname-delete-node-5nrnn my-hostname-delete-node- e2e-tests-resize-nodes-kbfr3 /api/v1/namespaces/e2e-tests-resize-nodes-kbfr3/pods/my-hostname-delete-node-5nrnn 566ea437-b737-11e6-8145-42010af0001f 12407 0 2016-11-30 11:58:18 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-kbfr3\",\"name\":\"my-hostname-delete-node\",\"uid\":\"566c9fe1-b737-11e6-8145-42010af0001f\",\"apiVersion\":\"v1\",\"resourceVersion\":\"12392\"}}\n] [{v1 ReplicationController my-hostname-delete-node 566c9fe1-b737-11e6-8145-42010af0001f 0xc82119b637}] [] } {[{default-token-mtkb4 {<nil> <nil> <nil> <nil> <nil> 0xc8213312f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mtkb4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82119b730 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-d9f18bc1-jb3n 0xc821ce6e40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:18 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:18 -0800 PST  }]   10.240.0.2 10.124.0.205 2016-11-30 11:58:18 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821403a20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b4990662bb0290612cefb2b28600e71d22d90694ecb3557eb8df45ff3d6f9d31}]}}]}",
    }
    failed to wait for pods responding: pod with UID 566e81a4-b737-11e6-8145-42010af0001f is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-kbfr3/pods 12699} [{{ } {my-hostname-delete-node-33sgr my-hostname-delete-node- e2e-tests-resize-nodes-kbfr3 /api/v1/namespaces/e2e-tests-resize-nodes-kbfr3/pods/my-hostname-delete-node-33sgr 89b0a261-b737-11e6-8145-42010af0001f 12547 0 2016-11-30 11:59:44 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-kbfr3","name":"my-hostname-delete-node","uid":"566c9fe1-b737-11e6-8145-42010af0001f","apiVersion":"v1","resourceVersion":"12483"}}
    ] [{v1 ReplicationController my-hostname-delete-node 566c9fe1-b737-11e6-8145-42010af0001f 0xc82119af17}] [] } {[{default-token-mtkb4 {<nil> <nil> <nil> <nil> <nil> 0xc821331230 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mtkb4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82119b010 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-d9f18bc1-85bz 0xc821ce6cc0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:59:44 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:59:44 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:59:44 -0800 PST  }]   10.240.0.4 10.124.2.132 2016-11-30 11:59:44 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8214039e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://736411025aa89d39342a74a36443710722ea76be61df63533c83cbea94771666}]}} {{ } {my-hostname-delete-node-3rjmk my-hostname-delete-node- e2e-tests-resize-nodes-kbfr3 /api/v1/namespaces/e2e-tests-resize-nodes-kbfr3/pods/my-hostname-delete-node-3rjmk 566eba1f-b737-11e6-8145-42010af0001f 12405 0 2016-11-30 11:58:18 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-kbfr3","name":"my-hostname-delete-node","uid":"566c9fe1-b737-11e6-8145-42010af0001f","apiVersion":"v1","resourceVersion":"12392"}}
    ] [{v1 ReplicationController my-hostname-delete-node 566c9fe1-b737-11e6-8145-42010af0001f 0xc82119b2a7}] [] } {[{default-token-mtkb4 {<nil> <nil> <nil> <nil> <nil> 0xc821331290 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mtkb4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82119b3a0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-d9f18bc1-jb3n 0xc821ce6d80 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:18 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:18 -0800 PST  }]   10.240.0.2 10.124.0.206 2016-11-30 11:58:18 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821403a00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://02701e8d5528cb14e36c475addfd2d09ee523190a68138546a0e8ebcfd819478}]}} {{ } {my-hostname-delete-node-5nrnn my-hostname-delete-node- e2e-tests-resize-nodes-kbfr3 /api/v1/namespaces/e2e-tests-resize-nodes-kbfr3/pods/my-hostname-delete-node-5nrnn 566ea437-b737-11e6-8145-42010af0001f 12407 0 2016-11-30 11:58:18 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-kbfr3","name":"my-hostname-delete-node","uid":"566c9fe1-b737-11e6-8145-42010af0001f","apiVersion":"v1","resourceVersion":"12392"}}
    ] [{v1 ReplicationController my-hostname-delete-node 566c9fe1-b737-11e6-8145-42010af0001f 0xc82119b637}] [] } {[{default-token-mtkb4 {<nil> <nil> <nil> <nil> <nil> 0xc8213312f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mtkb4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82119b730 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-d9f18bc1-jb3n 0xc821ce6e40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:18 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 11:58:18 -0800 PST  }]   10.240.0.2 10.124.0.205 2016-11-30 11:58:18 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821403a20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b4990662bb0290612cefb2b28600e71d22d90694ecb3557eb8df45ff3d6f9d31}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-l3k9l -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-l3k9l\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-l3k9l/services/redis-master\", \"uid\":\"65f2212e-b75a-11e6-8145-42010af0001f\", \"resourceVersion\":\"43858\", \"creationTimestamp\":\"2016-12-01T00:09:16Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.245.49\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82020b180 exit status 1 <nil> true [0xc8200b6328 0xc8200b6390 0xc8200b63f8] [0xc8200b6328 0xc8200b6390 0xc8200b63f8] [0xc8200b6368 0xc8200b63e8] [0xafa720 0xafa720] 0xc821af8d80}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-l3k9l\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-l3k9l/services/redis-master\", \"uid\":\"65f2212e-b75a-11e6-8145-42010af0001f\", \"resourceVersion\":\"43858\", \"creationTimestamp\":\"2016-12-01T00:09:16Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.245.49\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-l3k9l -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-l3k9l", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-l3k9l/services/redis-master", "uid":"65f2212e-b75a-11e6-8145-42010af0001f", "resourceVersion":"43858", "creationTimestamp":"2016-12-01T00:09:16Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}, "spec":map[string]interface {}{"clusterIP":"10.127.245.49", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82020b180 exit status 1 <nil> true [0xc8200b6328 0xc8200b6390 0xc8200b63f8] [0xc8200b6328 0xc8200b6390 0xc8200b63f8] [0xc8200b6368 0xc8200b63e8] [0xafa720 0xafa720] 0xc821af8d80}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-l3k9l", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-l3k9l/services/redis-master", "uid":"65f2212e-b75a-11e6-8145-42010af0001f", "resourceVersion":"43858", "creationTimestamp":"2016-12-01T00:09:16Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}, "spec":map[string]interface {}{"clusterIP":"10.127.245.49", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820178ac0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820178ac0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177
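
The "timed out waiting for the condition" text here is the generic timeout error returned by the e2e polling helpers; it only says that whatever state job.go:197 polls for never appeared within the test's timeout, not why. When triaging such a run by hand, the Job's status and its pods are usually the first things to inspect. A minimal sketch (the job name and namespace below are hypothetical placeholders; the log does not name them):

    # <job-name> and <e2e-test-namespace> are placeholders, not values from this log.
    kubectl describe job <job-name> --namespace=<e2e-test-namespace>
    kubectl get pods --namespace=<e2e-test-namespace> -o wide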

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bzx22] []  0xc821094320  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821094940 exit status 1 <nil> true [0xc821124090 0xc8211240b8 0xc8211240c8] [0xc821124090 0xc8211240b8 0xc8211240c8] [0xc821124098 0xc8211240b0 0xc8211240c0] [0xafa5c0 0xafa720 0xafa720] 0xc821f0a180}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bzx22] []  0xc821094320  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821094940 exit status 1 <nil> true [0xc821124090 0xc8211240b8 0xc8211240c8] [0xc821124090 0xc8211240b8 0xc8211240c8] [0xc821124098 0xc8211240b0 0xc8211240c0] [0xafa5c0 0xafa720 0xafa720] 0xc821f0a180}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576
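
Every "exit status 1" kubectl delete in this run fails for the same reason, and the stderr states it outright: the newer (skewed) kubectl client refuses --grace-period=0 unless --force is also passed. A minimal sketch of the invocation the cleanup step would need, with the flag combination taken directly from the error message (the namespace is just the one from this block):

    # the manifest is fed on stdin, exactly as the e2e framework does it
    kubectl --kubeconfig=/workspace/.kube/config --server=https://104.154.161.120 \
        delete -f - --namespace=e2e-tests-kubectl-bzx22 --grace-period=0 --force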

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-g5n60] []  0xc820bafaa0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820ab4360 exit status 1 <nil> true [0xc8200b6a68 0xc8200b6b08 0xc8200b6b28] [0xc8200b6a68 0xc8200b6b08 0xc8200b6b28] [0xc8200b6aa8 0xc8200b6af8 0xc8200b6b18] [0xafa5c0 0xafa720 0xafa720] 0xc821208960}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-g5n60] []  0xc820bafaa0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820ab4360 exit status 1 <nil> true [0xc8200b6a68 0xc8200b6b08 0xc8200b6b28] [0xc8200b6a68 0xc8200b6b08 0xc8200b6b28] [0xc8200b6aa8 0xc8200b6af8 0xc8200b6b18] [0xafa5c0 0xafa720 0xafa720] 0xc821208960}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-vjzsl] []  0xc8213bcb00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8213bd180 exit status 1 <nil> true [0xc82017aca0 0xc82017ad08 0xc82017ad20] [0xc82017aca0 0xc82017ad08 0xc82017ad20] [0xc82017aca8 0xc82017ad00 0xc82017ad18] [0xafa5c0 0xafa720 0xafa720] 0xc820a70960}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-vjzsl] []  0xc8213bcb00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8213bd180 exit status 1 <nil> true [0xc82017aca0 0xc82017ad08 0xc82017ad20] [0xc82017aca0 0xc82017ad08 0xc82017ad20] [0xc82017aca8 0xc82017ad00 0xc82017ad18] [0xafa5c0 0xafa720 0xafa720] 0xc820a70960}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-7mz6w] []  0xc821822340  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821822a60 exit status 1 <nil> true [0xc82050a638 0xc82050a6a0 0xc82050a6d8] [0xc82050a638 0xc82050a6a0 0xc82050a6d8] [0xc82050a650 0xc82050a690 0xc82050a6a8] [0xafa5c0 0xafa720 0xafa720] 0xc821e97c20}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-7mz6w] []  0xc821822340  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821822a60 exit status 1 <nil> true [0xc82050a638 0xc82050a6a0 0xc82050a6d8] [0xc82050a638 0xc82050a6a0 0xc82050a6d8] [0xc82050a650 0xc82050a690 0xc82050a6a8] [0xafa5c0 0xafa720 0xafa720] 0xc821e97c20}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-kbxqx] []  0xc820d87340  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820d87bc0 exit status 1 <nil> true [0xc82183e530 0xc82183e558 0xc82183e568] [0xc82183e530 0xc82183e558 0xc82183e568] [0xc82183e538 0xc82183e550 0xc82183e560] [0xafa5c0 0xafa720 0xafa720] 0xc82114ca80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-kbxqx] []  0xc820d87340  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820d87bc0 exit status 1 <nil> true [0xc82183e530 0xc82183e558 0xc82183e568] [0xc82183e530 0xc82183e558 0xc82183e568] [0xc82183e538 0xc82183e550 0xc82183e560] [0xafa5c0 0xafa720 0xafa720] 0xc82114ca80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bfvjn] []  0xc820991440  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820991e20 exit status 1 <nil> true [0xc82050aff0 0xc82050b018 0xc82050b028] [0xc82050aff0 0xc82050b018 0xc82050b028] [0xc82050aff8 0xc82050b010 0xc82050b020] [0xafa5c0 0xafa720 0xafa720] 0xc820f7d980}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.161.120 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bfvjn] []  0xc820991440  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820991e20 exit status 1 <nil> true [0xc82050aff0 0xc82050b018 0xc82050b028] [0xc82050aff0 0xc82050b018 0xc82050b028] [0xc82050aff8 0xc82050b010 0xc82050b020] [0xafa5c0 0xafa720 0xafa720] 0xc820f7d980}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-master/171/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8201a8a00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8201a8a00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.208.177 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-jjgmv -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.240.59\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"33285\", \"creationTimestamp\":\"2016-12-01T06:01:00Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-jjgmv\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-jjgmv/services/redis-master\", \"uid\":\"89089b0f-b78b-11e6-9851-42010af0002c\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820947600 exit status 1 <nil> true [0xc820036a58 0xc820036aa0 0xc820036ac0] [0xc820036a58 0xc820036aa0 0xc820036ac0] [0xc820036a90 0xc820036ab0] [0xafa720 0xafa720] 0xc821c8c8a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.240.59\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"33285\", \"creationTimestamp\":\"2016-12-01T06:01:00Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-jjgmv\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-jjgmv/services/redis-master\", \"uid\":\"89089b0f-b78b-11e6-9851-42010af0002c\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.208.177 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-jjgmv -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.240.59", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"33285", "creationTimestamp":"2016-12-01T06:01:00Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-jjgmv", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-jjgmv/services/redis-master", "uid":"89089b0f-b78b-11e6-9851-42010af0002c"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820947600 exit status 1 <nil> true [0xc820036a58 0xc820036aa0 0xc820036ac0] [0xc820036a58 0xc820036aa0 0xc820036ac0] [0xc820036a90 0xc820036ab0] [0xafa720 0xafa720] 0xc821c8c8a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.240.59", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"33285", "creationTimestamp":"2016-12-01T06:01:00Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-jjgmv", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-jjgmv/services/redis-master", "uid":"89089b0f-b78b-11e6-9851-42010af0002c"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201a8a00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-master/172/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.180.153 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-q10n4 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-q10n4\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-q10n4/services/redis-master\", \"uid\":\"901a226f-b7bd-11e6-a9a9-42010af00020\", \"resourceVersion\":\"21760\", \"creationTimestamp\":\"2016-12-01T11:59:07Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.253.181\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820fa0880 exit status 1 <nil> true [0xc82048c5b0 0xc82048cd48 0xc82048c0b0] [0xc82048c5b0 0xc82048cd48 0xc82048c0b0] [0xc82048cd38 0xc82048c0a8] [0xafa720 0xafa720] 0xc820b421e0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-q10n4\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-q10n4/services/redis-master\", \"uid\":\"901a226f-b7bd-11e6-a9a9-42010af00020\", \"resourceVersion\":\"21760\", \"creationTimestamp\":\"2016-12-01T11:59:07Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.253.181\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.180.153 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-q10n4 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-q10n4", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-q10n4/services/redis-master", "uid":"901a226f-b7bd-11e6-a9a9-42010af00020", "resourceVersion":"21760", "creationTimestamp":"2016-12-01T11:59:07Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.253.181", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820fa0880 exit status 1 <nil> true [0xc82048c5b0 0xc82048cd48 0xc82048c0b0] [0xc82048c5b0 0xc82048cd48 0xc82048c0b0] [0xc82048cd38 0xc82048c0a8] [0xafa720 0xafa720] 0xc820b421e0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-q10n4", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-q10n4/services/redis-master", "uid":"901a226f-b7bd-11e6-a9a9-42010af00020", "resourceVersion":"21760", "creationTimestamp":"2016-12-01T11:59:07Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.253.181", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820101610>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820101610>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820101610>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-master/173/

Multiple broken tests:

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc82007fa70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.180.153 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-hrfnc -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"5c118460-b815-11e6-aceb-42010af0000b\", \"resourceVersion\":\"47533\", \"creationTimestamp\":\"2016-12-01T22:27:35Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-hrfnc\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-hrfnc/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.255.213\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8209006e0 exit status 1 <nil> true [0xc821984000 0xc821984018 0xc821984038] [0xc821984000 0xc821984018 0xc821984038] [0xc821984010 0xc821984030] [0xafa720 0xafa720] 0xc820eb44e0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"5c118460-b815-11e6-aceb-42010af0000b\", \"resourceVersion\":\"47533\", \"creationTimestamp\":\"2016-12-01T22:27:35Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-hrfnc\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-hrfnc/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.255.213\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.180.153 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-hrfnc -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"5c118460-b815-11e6-aceb-42010af0000b", "resourceVersion":"47533", "creationTimestamp":"2016-12-01T22:27:35Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-hrfnc", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-hrfnc/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.127.255.213", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8209006e0 exit status 1 <nil> true [0xc821984000 0xc821984018 0xc821984038] [0xc821984000 0xc821984018 0xc821984038] [0xc821984010 0xc821984030] [0xafa720 0xafa720] 0xc820eb44e0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"5c118460-b815-11e6-aceb-42010af0000b", "resourceVersion":"47533", "creationTimestamp":"2016-12-01T22:27:35Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-hrfnc", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-hrfnc/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.127.255.213", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007fa70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007fa70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427
