
Only schedule to pods that are available. This turns on the node #5725

Merged
merged 1 commit into from
Mar 27, 2015

Conversation


@rrati rrati commented Mar 20, 2015

@googlebot

Thanks for your pull request.

It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA) at https://cla.developers.google.com/.

If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check the information on your CLA or see this help article on setting the email on your git commits.

Once you've done that, please reply here to let us know. If you signed the CLA as a corporation, please let us know the company's name.

@rrati rrati force-pushed the schedule-to-available-nodes-5545 branch 2 times, most recently from e127217 to 4c4ae9b Compare March 24, 2015 17:45
@rrati
Author

rrati commented Mar 25, 2015

@ddysher Any chance you can take a look?

@@ -77,7 +77,7 @@ func NewCMServer() *CMServer {
 	NodeMilliCPU:   1000,
 	NodeMemory:     resource.MustParse("3Gi"),
 	SyncNodeList:   true,
-	SyncNodeStatus: false,
+	SyncNodeStatus: true,
Contributor

Why change the default value? We are not syncing node status in the controller manager now.

Author

To prevent scheduling to nodes that cannot run pods, there needs to be constant monitoring and updating of every node's status. If a node goes down, gets rebooted, or is re-purposed, it isn't available to run pods. The only place in the current implementation where that gets noticed is the node controller pinging the node and updating the status for the scheduler.

A better approach would probably be for the apiserver to require that a status update be no older than some threshold for it to be considered valid.

Contributor

See #5399, which should address your concern. Enabling status sync here will mess up the cluster, since it would operate in two different modes (node controller pulls status vs. kubelet pushes status).

Author

Yep, that will do exactly what I was trying to accomplish. I'll re-push and remove the node-controller status-update change.

@rrati rrati force-pushed the schedule-to-available-nodes-5545 branch from 4c4ae9b to cbf0541 Compare March 25, 2015 19:24
@rrati rrati force-pushed the schedule-to-available-nodes-5545 branch from cbf0541 to c2938b2 Compare March 26, 2015 12:42
@lavalamp
Member

LGTM

@ddysher
Contributor

ddysher commented Mar 27, 2015

LGTM, thx

@ddysher ddysher added the lgtm and cla: yes labels and removed the cla: no label Mar 27, 2015
piosz added a commit that referenced this pull request Mar 27, 2015
Only schedule to pods that are available.  This turns on the node
@piosz piosz merged commit 8d94c43 into kubernetes:master Mar 27, 2015