
[k8s.io] Downward API volume should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite} #43335

Closed
k8s-github-robot opened this issue Mar 18, 2017 · 9 comments
Assignees
Labels
kind/flake Categorizes issue or PR as related to a flaky test. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/6958/
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:125
Timed out after 120.001s.
Expected
    <string>: content of file "/etc/labels": key1="value1"
    key2="value2"
    [the same two lines repeat for the remainder of the captured output, one pair per poll, until the timeout]
    
to contain substring
    <string>: key3="value3"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124
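
For context, the failure above is the test's poll output: the container keeps printing the mounted downward API file, and the test times out because the updated label never appears. A minimal, self-contained sketch of that polling pattern (not the actual e2e framework code; the path, values, interval, and timeout below just mirror the numbers in the log) looks like:

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// waitForSubstring polls a file until it contains want, or errors out after
// timeout. The 2s interval / 120s timeout mirror the failure log above.
func waitForSubstring(path, want string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), want) {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %s waiting for %q in %s", timeout, want, path)
}

func main() {
	// In the real test the file is the downward API volume mounted at
	// /etc/labels inside the pod; here the path is only illustrative.
	if err := waitForSubstring("/etc/labels", `key3="value3"`, 2*time.Second, 120*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("label update observed")
}
```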
@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 sig/node Categorizes an issue or PR as relevant to SIG Node. labels Mar 18, 2017
@calebamiles calebamiles modified the milestone: v1.6 Mar 18, 2017
@marun
Contributor

marun commented Mar 18, 2017

This seems pretty rare - ~20 test runs without issue since this occurrence.

@ncdc
Member

ncdc commented Mar 20, 2017

This is not a new failure. We've had flakes related to downward api (and config map and other volumes) for a while. Something is keeping the volume update from happening:

I0317 16:55:34.129] Mar 17 16:55:23.321: INFO: At 2017-03-17 16:53:21 -0700 PDT - event for labelsupdatee5335fca-0b6c-11e7-961f-0242ac110003: {kubelet bootstrap-e2e-minion-group-gtzm} FailedMount: MountVolume.SetUp failed for volume "kubernetes.io/downward-api/e533cae5-0b6c-11e7-8ca3-42010af00002-podinfo" (spec.Name: "podinfo") pod "e533cae5-0b6c-11e7-8ca3-42010af00002" (UID: "e533cae5-0b6c-11e7-8ca3-42010af00002") with: remove /var/lib/kubelet/pods/e533cae5-0b6c-11e7-8ca3-42010af00002/volumes/kubernetes.io~downward-api/podinfo/resolv.conf: device or resource busy

cc @aveshagarwal @derekwaynecarr @pmorie @sjenning
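
The "device or resource busy" in that event is EBUSY from the remove call, which on Linux usually means the path is still a mount point or pinned by another mount namespace. A hedged sketch of how that error surfaces from Go's standard library (illustrative path only, not kubelet code):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Illustrative path; the kubelet's failure was on the pod's
	// .../kubernetes.io~downward-api/podinfo/resolv.conf.
	path := "/var/lib/kubelet/pods/example-uid/volumes/kubernetes.io~downward-api/podinfo/resolv.conf"

	err := os.Remove(path)
	var errno syscall.Errno
	if errors.As(err, &errno) && errno == syscall.EBUSY {
		// On Linux, EBUSY from unlink/rmdir usually means the target is still
		// a mount point or is held by another mount namespace.
		fmt.Println("still busy:", err)
		return
	}
	if err != nil {
		fmt.Println("other error:", err)
	}
}
```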

@ncdc
Member

ncdc commented Mar 20, 2017

I don't think this is necessarily a 1.6 blocker

@timothysc
Member

> I don't think this is necessarily a 1.6 blocker

+1

The test looks to be doing the proper thing and looping at the right interval (2 seconds), but I don't know what guarantees we can make about the logging subsystem it depends on. Also, there is no check to determine that the node has received the update. I'm not intimately familiar with the process it follows to update pod labels, but IMO we're missing an event watch here that indicates it's been received.
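
For what it's worth, a hedged sketch of that kind of confirmation step, using recent client-go to watch the pod object so a test could log when the new label is visible server-side (namespace, pod name, and kubeconfig handling are illustrative, not the test's actual values):

```go
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a local kubeconfig (assumes KUBECONFIG is set).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch a single pod (hypothetical namespace and name) and report when the
	// updated label is visible on the API object.
	w, err := client.CoreV1().Pods("e2e-tests").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=labelsupdate-pod",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		if v, found := pod.Labels["key3"]; found {
			fmt.Printf("API object shows key3=%q; any remaining delay is on the kubelet/volume side\n", v)
			return
		}
	}
}
```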

@timothysc timothysc modified the milestones: next-candidate, v1.6 Mar 20, 2017
@ncdc
Member

ncdc commented Mar 20, 2017

@timothysc I actually think the problem has something to do with mounts leaking across containers, or files staying open when they shouldn't be, or trying to remove files it shouldn't be. I don't think we're missing a watch.
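
A quick way to check the leaked-mount theory on a node would be to look for lingering downward API entries in /proc/mounts after the pod's containers have exited. A small diagnostic sketch (illustrative only, not part of the kubelet or the test):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Print any mount entries that still reference a downward API volume dir.
	// Entries lingering after the pod's containers exit would support the
	// "mounts leaking across containers / files held open" theory.
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.Contains(line, "kubernetes.io~downward-api") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```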

@timothysc
Member

@ncdc but there is no check in the test to ensure we've reached that point...

@ncdc
Member

ncdc commented Mar 20, 2017

@timothysc fair enough. However, every time I've seen this flake and looked at the logs, there was a failure related to the pod's volumes, which makes me think the kubelet got the update just fine, and some other machinery was failing.

@spiffxp
Member

spiffxp commented Jun 19, 2017

/remove-priority P2
/priority backlog
(I'm not actually sure this is the right priority, just trying to remove the old priority/PN labels)

/assign
closing due to inactivity

@k8s-ci-robot k8s-ci-robot added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/P2 labels Jun 19, 2017
@spiffxp
Member

spiffxp commented Jun 19, 2017

/close
