
Fixed issue with the density test failing after a successful run because... #5387

Merged 1 commit into kubernetes:master on Mar 12, 2015

Conversation

rrati

@rrati rrati commented Mar 12, 2015

... of a failure to cleanup #5385

@googlebot

Thanks for your pull request.

It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA) at https://cla.developers.google.com/.

If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check the information on your CLA or see this help article on setting the email on your git commits.

Once you've done that, please reply here to let us know. If you signed the CLA as a corporation, please let us know the company's name.

@a-robinson a-robinson self-assigned this Mar 12, 2015
@a-robinson
Contributor

LGTM, waiting on CI to go green

// isn't 0. This means the controller wasn't cleaned up
// during the test so clean it up here
rc, err := c.ReplicationControllers(ns).Get(RCName)
if err == nil && rc.Spec.Replicas != 0 {
Contributor
This could leak RCs/pods if there's an error communicating with the apiserver -- is that something that could break the other tests?

Author

If there's a problem communicating with the apiserver then there would likely be an rc and its pods still in the system. That could definitely impact other tests. I'm not sure there's much we can do though. If the apiserver is unresponsive I'm not sure how we clean up after the test.

Contributor

Retries would be the typical answer, but I haven't checked how much we use retries in our e2e tests. If you give a request 3 tries to succeed instead of 1, flakes are less likely.

Author

A retry might work, but without a real-world case to test against, how many attempts to make, how long to wait between them, and even whether it would help at all are guesses. I can add retries, but I wouldn't be able to verify they actually solve a problem. I'm also a little reluctant to cover up a communication problem; the apiserver is going to have to be responsive under load.

I don't recall there being many retries for operations in the e2e tests. That's probably because few of them stress the system to the point of causing communication timeouts the way this test suite can.

ATM this test is disabled and won't run unless explicitly enabled, because of the nature of the test. It really belongs in a performance test suite.

@satnam6502 satnam6502 added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Mar 12, 2015
satnam6502 added a commit that referenced this pull request Mar 12, 2015
Fixed issue with the density test failing after a successful run because...
@satnam6502 satnam6502 merged commit 6a0bfd7 into kubernetes:master Mar 12, 2015