Closed
Description
I0524 18:20:26.486302 2069 e2e_node_suite_test.go:67] Pre-pulling images so that they are cached for the tests.
W0524 18:20:41.679642 2069 container_list.go:56] Could not pre-pull image %s %v output: %sgcr.io/google_containers/pause-amd64:3.0exit status 1 [69 114 114 111 114 32 114 101 115 112 111 110 115 101 32 102 114 111 109 32 100 97 101 109 111 110 58 32 71 101 116 32 104 116 116 112 115 58 47 47 103 99 114 46 105 111 47 118 50 47 103 111 111 103 108 101 95 99 111 110 116 97 105 110 101 114 115 47 112 97 117 115 101 45 97 109 100 54 52 47 109 97 110 105 102 101 115 116 115 47 51 46 48 58 32 71 101 116 32 104 116 116 112 115 58 47 47 103 99 114 46 105 111 47 118 50 47 116 111 107 101 110 63 115 99 111 112 101 61 114 101 112 111 115 105 116 111 114 121 37 51 65 103 111 111 103 108 101 95 99 111 110 116 97 105 110 101 114 115 37 50 70 112 97 117 115 101 45 97 109 100 54 52 37 51 65 112 117 108 108 38 115 101 114 118 105 99 101 61 103 99 114 46 105 111 58 32 110 101 116 47 104 116 116 112 58 32 114 101 113 117 101 115 116 32 99 97 110 99 101 108 101 100 32 119 104 105 108 101 32 119 97 105 116 105 110 103 32 102 111 114 32 99 111 110 110 101 99 116 105 111 110 32 40 67 108 105 101 110 116 46 84 105 109 101 111 117 116 32 101 120 99 101 101 100 101 100 32 119 104 105 108 101 32 97 119 97 105 116 105 110 103 32 104 101 97 100 101 114 115 41 10]
@ncdc translated the string part:
Error response from daemon: Get https://gcr.io/v2/google_containers/pause-amd64/manifests/3.0: Get https://gcr.io/v2/token?scope=repository%3Agoogle_containers%2Fpause-amd64%3Apull&service=gcr.io: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Since the node e2e test runs the "docker pull" command directly, this may be a docker CLI operation timeout.
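As a side note, the warning above likely looks garbled because the format verbs and arguments appear to have been handed to a non-formatting glog call, so `%s`/`%v` are printed literally and the `[]byte` output is rendered as decimal byte values. A minimal sketch of that pattern and the intended fix, using a hypothetical helper (not verified against the actual container_list.go):

```go
package prepull

import "github.com/golang/glog"

// logPullFailure is a hypothetical reproduction of the garbled warning above.
func logPullFailure(image string, err error, output []byte) {
	// glog.Warning handles its arguments like fmt.Print, so the format verbs
	// are printed literally and the []byte shows up as decimal numbers:
	//   "Could not pre-pull image %s %v output: %s<image>exit status 1 [69 114 ...]"
	glog.Warning("Could not pre-pull image %s %v output: %s", image, err, output)

	// The intended call interpolates the verbs and converts the output to a string:
	glog.Warningf("Could not pre-pull image %s %v output: %s", image, err, string(output))
}
```

Decoding those byte values as ASCII gives the message @ncdc posted below.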
Activity
dchen1107 commented on May 24, 2016
Is this part of runtime conformance tests?
Random-Liu commented on May 24, 2016
@dchen1107 No. This happens during prepulling all node e2e images. Ref #25944.
dchen1107 commented on May 24, 2016
Ok, since it is a best-effort thing to help deflake test failures caused by slow pulling, can we either add retry logic or at least not fail the entire test suite?
cc/ @pwittrock
pwittrock commented on May 24, 2016
Retrying pre-pulling failed images SGTM. Maybe 5 times each?
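For illustration, a minimal sketch of what such retry logic could look like, assuming a hypothetical pullImage helper that shells out to `docker pull` (this is not the code that eventually merged in #26321):

```go
package prepull

import (
	"fmt"
	"os/exec"
	"time"
)

// pullImage is a hypothetical helper that shells out to `docker pull`
// and returns the combined stdout/stderr output.
func pullImage(image string) ([]byte, error) {
	return exec.Command("docker", "pull", image).CombinedOutput()
}

// pullImageWithRetry retries a failed pull up to maxAttempts times,
// sleeping between attempts so that transient registry timeouts like
// the one reported here have a chance to clear.
func pullImageWithRetry(image string, maxAttempts int) error {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		output, err := pullImage(image)
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v, output: %q", attempt, err, output)
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("could not pre-pull image %q after %d attempts: %v", image, maxAttempts, lastErr)
}
```

With maxAttempts = 5 as suggested above, a failure after all retries could also be logged as a warning instead of failing the entire suite, matching the best-effort intent.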
pwittrock commented on May 24, 2016
It is nice that the test failure reason is obvious :)
janetkuo commented on May 25, 2016
dial tcp 74.125.202.82:443: i/o timeout
when pulling images: https://storage.cloud.google.com/kubernetes-jenkins/pr-logs/pull/26312/node-pull-build-e2e-test/7927/build-log.txt
Merge pull request #26321 from vishh/retry-pre-pull