Change performance targets in roadmap #5923

Merged: wojtek-t merged 1 commit into kubernetes:master from wojtek-t:update_roadmap on Mar 27, 2015

Conversation

wojtek-t (Member)

Change performance targets in roadmap to be more customer-visible.

cc @thockin @brendanburns @davidopp @fgrzadkowski

@wojtek-t (Member, Author)

To give a bit more context on that:
I think that from the user's perspective, scheduling time is not what matters - what users pay attention to is e2e startup time. In other words, even if we scheduled all pods in milliseconds but starting them took, say, 1 minute, that wouldn't produce a great user experience.

Obviously, starting a pod involves loading an image, which we can't really control, but in my opinion the goal should be something like this:
"99%ile of e2e pod startup time with all its images preloaded is less than X seconds; linear time to number of nodes and pods"

@@ -58,7 +58,7 @@ clustered database or key-value store. We will target such workloads for our
- Status: in progress
3. Scale to 30-50 pods (1-2 containers each) per node (#4188)
- Status:
-4. Scheduling throughput: 99% of scheduling decisions made in less than 1s on 100 node, 3000 pod cluster; linear time to number of nodes and pods (#3954)
+4. Scheduling throughput: 99% of end-to-end pod startup time with prepulled images is less than 5s on 100 node, 3000 pod cluster; linear time to number of nodes and pods (#3952, #3954)
@wojtek-t (Member, Author)

BTW - I'm not 100% convinced that 5 seconds is a good target - please let me know if you have a better suggestion for it.

Contributor

Why is this the scheduling throughput if we include kubelet pulling the image and starting the containers?

@wojtek-t (Member, Author)

It shouldn't be - updated. Thanks for pointing this out.

@ghost commented Mar 25, 2015

I would suggest splitting into two separate performance targets (i.e. the original one, as well as the new proposed one).

@wojtek-t (Member, Author)

Thanks Quinton - I agree this makes sense - updated.

@davidopp (Member)

LGTM

@thockin added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Mar 27, 2015
@thockin (Member) commented Mar 27, 2015

LGTM - you guys can commit in your daytime hours :)

wojtek-t added a commit that referenced this pull request on Mar 27, 2015:
Change performance targets in roadmap
@wojtek-t merged commit f488c3b into kubernetes:master on Mar 27, 2015
@wojtek-t deleted the update_roadmap branch on March 27, 2015 at 10:23
Labels
lgtm "Looks good to me", indicates that a PR is ready to be merged.