Node goes away e2e test #3520
Comments
The follow up to this is what happens when the node reboots and tries to rejoin the cluster.
This could also be accomplished in an integration test.
As a variation of my serve_hostnames soak/reliability test I intend to make a "cauldron" version of it which uses a replication controller and which every so often kills or adds pods and checks to make sure the expected number of pods are up over a given window. Would that meet the requirements of the issue?
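A rough sketch of what such a cauldron-style check could look like, written against a modern client-go API rather than the test framework of the time; the namespace, the `run=serve-hostnames` label, the replica count, and the `churnAndVerify` helper are all illustrative assumptions, not the actual test code.

```go
// Hypothetical sketch: repeatedly kill a random pod managed by a replication
// controller and verify the expected replica count recovers within a window.
package cauldron

import (
	"context"
	"math/rand"
	"testing"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

const (
	namespace = "default"
	selector  = "run=serve-hostnames" // assumed label on the RC's pods
	replicas  = 5                     // assumed RC replica count
)

func churnAndVerify(t *testing.T, client kubernetes.Interface, rounds int) {
	ctx := context.Background()
	for i := 0; i < rounds; i++ {
		// Pick a random managed pod and delete it.
		pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			t.Fatalf("listing pods: %v", err)
		}
		victim := pods.Items[rand.Intn(len(pods.Items))]
		if err := client.CoreV1().Pods(namespace).Delete(ctx, victim.Name, metav1.DeleteOptions{}); err != nil {
			t.Fatalf("deleting pod %s: %v", victim.Name, err)
		}

		// Within the window, the RC should restore the expected number of running pods.
		err = wait.Poll(5*time.Second, 2*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			return running == replicas, nil
		})
		if err != nil {
			t.Fatalf("round %d: pod count never recovered: %v", i, err)
		}
	}
}
```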
/cc @gmarek
@satnam6502 That test would certainly be better than nothing. I'm of the opinion that focused, standalone e2e tests have a lot of value too, and if it were up to me, I'd write one of those before embedding the test into a test with a larger scope (reliability). Basically, I think each test should have a purpose that can be described in one sentence, without the use of conjunctions. But that's just my opinion.
Agreed, coherent focused e2e tests are of great value. I can take this one unless someone else is super keen to do it. Back-to-back meetings today in Seattle, but I expect I can have it done on Tuesday/Wednesday if that's not too late for you.
Not super keen.
Un-assigning temporarily while I look at issues with our network e2e test. No worries if someone else wants to pick this up before I can get back to it.
@jszczepkowski - do your restart tests cover this?
This exact test case is covered by Nodes.Resize and Nodes.Network. Closing.
Write an e2e test that has a pod, a replication controller, and multiple nodes. Delete the node that the pod is on, and see that the absence of the pod/node is detected by the replication controller, and that a replacement pod is created.
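A minimal sketch of the test described here, assuming a modern client-go style client rather than the e2e framework that existed at the time; the kubeconfig path, the namespace, and the `app=node-away-test` label are hypothetical placeholders.

```go
// Hypothetical sketch: delete the node a managed pod runs on and wait for the
// replication controller to bring up a replacement pod on another node.
package nodeaway

import (
	"context"
	"fmt"
	"testing"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const (
	namespace = "default"
	selector  = "app=node-away-test" // assumed label on the RC's pods
)

func TestNodeGoesAway(t *testing.T) {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		t.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		t.Fatal(err)
	}
	ctx := context.Background()

	// Precondition: a replication controller managing pods with the label
	// `selector` already exists and has at least one running pod.
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		t.Fatalf("expected at least one managed pod: %v", err)
	}
	victim := pods.Items[0]
	victimNode := victim.Spec.NodeName

	// Delete the node the pod is scheduled on, simulating the node going away.
	if err := client.CoreV1().Nodes().Delete(ctx, victimNode, metav1.DeleteOptions{}); err != nil {
		t.Fatal(err)
	}

	// Wait for the replication controller to notice the loss and bring up a
	// replacement pod running on a different node.
	err = wait.Poll(5*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Name != victim.Name && p.Spec.NodeName != victimNode && p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		t.Fatal(fmt.Errorf("no replacement pod appeared on another node: %w", err))
	}
}
```

The sketch polls the pod list rather than setting up a watch, which keeps the test short and tolerant of transient API errors during the window in which the controller reacts.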