
e2e flake: Daemon set should run and stop complex daemon #16623

Closed
@a-robinson

Description

This has been flaking pretty often lately: 5 of the last 11 runs of kubernetes-e2e-gce-flaky failed on it. I've copied the output of one failing run below.

Sorry if there's already an issue tracking this; I couldn't find anything other than #14072, which is for a different test.

@mikedanese

14:35:47 Daemon set 
14:35:47   should run and stop complex daemon
14:35:47   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:181
14:35:47 [BeforeEach] Daemon set
14:35:47   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
14:35:47 STEP: Creating a kubernetes client
14:35:47 >>> testContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-e2e-gce-flaky/workspace/.kube/config
14:35:47 STEP: Building a namespace api object
14:35:47 Oct 30 14:35:47.266: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-daemonsets-40nfz
14:35:47 Oct 30 14:35:47.268: INFO: Get service account default in ns e2e-tests-daemonsets-40nfz failed, ignoring for 2s: serviceaccounts "default" not found
14:35:49 Oct 30 14:35:49.271: INFO: Service account default in ns e2e-tests-daemonsets-40nfz with secrets found. (2.005191042s)
14:35:49 STEP: Waiting for a default service account to be provisioned in namespace
14:35:49 Oct 30 14:35:49.271: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-daemonsets-40nfz
14:35:49 Oct 30 14:35:49.273: INFO: Service account default in ns e2e-tests-daemonsets-40nfz with secrets found. (1.930327ms)
14:35:49 [BeforeEach] Daemon set
14:35:49   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:71
14:35:55 [It] should run and stop complex daemon
14:35:55   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:181
14:35:55 Oct 30 14:35:55.287: INFO: Creating daemon with a node selector daemon-set
14:35:55 STEP: Initially, daemon pods should not be running on any nodes.
14:35:57 Oct 30 14:35:57.297: INFO: nodesToPodCount: map[string]int{}
14:35:57 STEP: Change label of node, check that daemon pod is launched.
14:36:01 Oct 30 14:36:01.318: INFO: nodesToPodCount: map[string]int{}
14:36:03 Oct 30 14:36:03.347: INFO: nodesToPodCount: map[string]int{}
14:36:05 Oct 30 14:36:05.319: INFO: nodesToPodCount: map[string]int{}
14:36:07 Oct 30 14:36:07.319: INFO: nodesToPodCount: map[string]int{}
14:36:09 Oct 30 14:36:09.318: INFO: nodesToPodCount: map[string]int{}
14:36:11 Oct 30 14:36:11.318: INFO: nodesToPodCount: map[string]int{}
14:36:13 Oct 30 14:36:13.350: INFO: nodesToPodCount: map[string]int{}
14:36:15 Oct 30 14:36:15.318: INFO: nodesToPodCount: map[string]int{}
14:36:17 Oct 30 14:36:17.318: INFO: nodesToPodCount: map[string]int{}
14:36:19 Oct 30 14:36:19.318: INFO: nodesToPodCount: map[string]int{}
14:36:21 Oct 30 14:36:21.318: INFO: nodesToPodCount: map[string]int{}
14:36:23 Oct 30 14:36:23.347: INFO: nodesToPodCount: map[string]int{}
14:36:25 Oct 30 14:36:25.319: INFO: nodesToPodCount: map[string]int{}
14:36:27 Oct 30 14:36:27.318: INFO: nodesToPodCount: map[string]int{}
14:36:29 Oct 30 14:36:29.318: INFO: nodesToPodCount: map[string]int{}
14:36:31 Oct 30 14:36:31.350: INFO: nodesToPodCount: map[string]int{}
14:36:33 Oct 30 14:36:33.352: INFO: nodesToPodCount: map[string]int{}
14:36:35 Oct 30 14:36:35.318: INFO: nodesToPodCount: map[string]int{}
14:36:37 Oct 30 14:36:37.318: INFO: nodesToPodCount: map[string]int{}
14:36:39 Oct 30 14:36:39.318: INFO: nodesToPodCount: map[string]int{}
14:36:41 Oct 30 14:36:41.319: INFO: nodesToPodCount: map[string]int{}
14:36:43 Oct 30 14:36:43.318: INFO: nodesToPodCount: map[string]int{}
14:36:45 Oct 30 14:36:45.318: INFO: nodesToPodCount: map[string]int{}
14:36:47 Oct 30 14:36:47.319: INFO: nodesToPodCount: map[string]int{}
14:36:49 Oct 30 14:36:49.318: INFO: nodesToPodCount: map[string]int{}
14:36:51 Oct 30 14:36:51.318: INFO: nodesToPodCount: map[string]int{}
14:36:53 Oct 30 14:36:53.318: INFO: nodesToPodCount: map[string]int{}
14:36:55 Oct 30 14:36:55.318: INFO: nodesToPodCount: map[string]int{}
14:36:57 Oct 30 14:36:57.318: INFO: nodesToPodCount: map[string]int{}
14:36:59 Oct 30 14:36:59.319: INFO: nodesToPodCount: map[string]int{}
14:36:59 Oct 30 14:36:59.321: INFO: nodesToPodCount: map[string]int{}
14:36:59 [AfterEach] Daemon set
14:36:59   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:56
14:37:05 [AfterEach] Daemon set
14:37:05   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
14:37:05 STEP: Collecting events from namespace "e2e-tests-daemonsets-40nfz".
14:37:05 Oct 30 14:37:05.380: INFO: POD                                          NODE                   PHASE    GRACE  CONDITIONS
14:37:05 Oct 30 14:37:05.380: INFO: elasticsearch-logging-v1-jmu9g               e2e-flaky-minion-lojz  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:34:21 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: elasticsearch-logging-v1-z9x37               e2e-flaky-minion-ray5  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:28:54 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: fluentd-elasticsearch-e2e-flaky-minion-7zwb  e2e-flaky-minion-7zwb  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:35:48 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: fluentd-elasticsearch-e2e-flaky-minion-lojz  e2e-flaky-minion-lojz  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:28:14 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: fluentd-elasticsearch-e2e-flaky-minion-ray5  e2e-flaky-minion-ray5  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:28:19 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: heapster-v10-crc4j                           e2e-flaky-minion-lojz  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:34:03 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: kibana-logging-v1-e6e7p                      e2e-flaky-minion-ray5  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:34:19 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: kube-dns-v9-dzbwq                            e2e-flaky-minion-ray5  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:28:54 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: kube-ui-v3-hx16m                             e2e-flaky-minion-lojz  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:28:21 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: l7-lb-controller-nvr5o                       e2e-flaky-minion-ray5  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:28:53 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: monitoring-influxdb-grafana-v2-76koa         e2e-flaky-minion-lojz  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-30 14:28:42 -0700 PDT  }]
14:37:05 Oct 30 14:37:05.380: INFO: 
14:37:05 Oct 30 14:37:05.380: INFO: Waiting up to 1m0s for all nodes to be ready
14:37:05 Oct 30 14:37:05.384: INFO: Node e2e-flaky-minion-7zwb condition 1/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-30 14:35:08 -0700 PDT
14:37:05 Oct 30 14:37:05.384: INFO: Successfully found node e2e-flaky-minion-7zwb readiness to be true
14:37:05 Oct 30 14:37:05.384: INFO: Node e2e-flaky-minion-lojz condition 1/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-30 14:27:50 -0700 PDT
14:37:05 Oct 30 14:37:05.384: INFO: Successfully found node e2e-flaky-minion-lojz readiness to be true
14:37:05 Oct 30 14:37:05.384: INFO: Node e2e-flaky-minion-ray5 condition 1/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-30 14:27:54 -0700 PDT
14:37:05 Oct 30 14:37:05.384: INFO: Successfully found node e2e-flaky-minion-ray5 readiness to be true
14:37:05 STEP: Destroying namespace "e2e-tests-daemonsets-40nfz" for this suite.
14:37:10 
14:37:10 • Failure [83.142 seconds]
14:37:10 Daemon set
14:37:10 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:182
14:37:10   should run and stop complex daemon [It]
14:37:10   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:181
14:37:10 
14:37:10   error waiting for daemon pods to be running on new nodes
14:37:10   Expected error:
14:37:10       <*errors.errorString | 0xc20814d4d0>: {
14:37:10           s: "timed out waiting for the condition",
14:37:10       }
14:37:10       timed out waiting for the condition
14:37:10   not to have occurred
14:37:10 
14:37:10   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:170

Labels

kind/flake: Categorizes issue or PR as related to a flaky test.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.