kubernetes-e2e-aws failing to start cluster #18037
Comments
Also related, there is a case in
Ping on this. Cluster up is succeeding now, but the e2e test driver is timing out waiting for kube-system pods to start running.
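For anyone reproducing this, a quick way to see which kube-system pods are stuck (assuming your kubeconfig already points at the e2e cluster) is something like:

```sh
# List kube-system pods; anything not Running/Ready is what the e2e
# driver is waiting on. Assumes kubeconfig points at the test cluster.
kubectl get pods --namespace=kube-system -o wide
```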
Any update on this?
It looks like both aws and aws-1.1 are running tests, with one failing consistently on aws and several failing on aws-1.1.
There is one test failing now, same place 3x in a row. @brendandburns you might be interested.
Assigning to @justinsb since I believe he is fixing this stuff. Ping back if not true.
Thank you. I'm pretty sure this is fixed (I'm running the e2e tests locally using a simulate-Jenkins hack), and they are currently all green. But once we get the pending PRs merged I will verify / investigate whether there is something wrong with Jenkins!
Jenkins ran AWS tests on 3/1 and 2/29 |
The cluster successfully comes up, but two tests fail:

On the release-1.1 branch there are lots of tests failing, but I don't think that's a priority.
@spxtr those are failing on 1.2? The default SSH username changed to "admin" if you're using jessie, which is now the default if you don't set KUBE_OS_DISTRIBUTION, so KUBE_SSH_USER needs to be changed from ubuntu to admin. I can file a PR for that. I think service up and down is a flake, but I've seen it come and go also. I would hope 1.1 should pass tests, but that shouldn't be a priority over 1.2.

I propose we close this and open two issues: the default SSH username change, and 1.1 e2e being unhappy. And if we see the service up/down failure again, we can open an issue for that too.
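For reference, a rough sketch of what a Jenkins job or local run might export for a jessie-based AWS e2e run; the exact values here are assumptions based on the comment above rather than confirmed defaults:

```sh
# Assumed settings for a jessie-based AWS e2e run (not confirmed defaults):
export KUBE_OS_DISTRIBUTION=jessie   # now the default if left unset
export KUBE_SSH_USER=admin           # jessie images use "admin", not "ubuntu"

# Typical e2e driver invocation with those settings in the environment.
go run hack/e2e.go -v --up --test --down
```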
SGTM
Opened those two issues; closing this one.
kubernetes-e2e-aws has been running daily for about a week and failing each time after trying to check for salt-master repeatedly:
Once this is green, should it be moved to critical builds?