Only schedule to pods that are available. This turns on the node #5725
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA) at https://cla.developers.google.com/. If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check the information on your CLA or see this help article on setting the email on your git commits. Once you've done that, please reply here to let us know. If you signed the CLA as a corporation, please let us know the company's name.
Force-pushed from e127217 to 4c4ae9b
@ddysher Any chance you can take a look?
@@ -77,7 +77,7 @@ func NewCMServer() *CMServer {
 		NodeMilliCPU:   1000,
 		NodeMemory:     resource.MustParse("3Gi"),
 		SyncNodeList:   true,
-		SyncNodeStatus: false,
+		SyncNodeStatus: true,
Why change the default value? We are not syncing node status in the controller manager now.
In order to prevent scheduling to nodes that cannot run pods, there needs to be constant monitoring and updating of every node's status. If a node goes down, gets rebooted, or is re-purposed, it isn't available to run pods. The only way I see this being noticed in the current implementation is the node controller pinging each node and updating its status for the scheduler.
A better approach would probably be for the apiserver to require that a node's status update be no older than some threshold for the node to be considered valid.
See #5399, which should address your concern. Enabling sync status will mess up the cluster, since it would then operate in two different modes at once (node controller pulling status vs. kubelet pushing status).
Yep, that will do exactly what I was trying to accomplish. I'll re-push and remove the node controller updating change.
Force-pushed from 4c4ae9b to cbf0541
Force-pushed from cbf0541 to c2938b2
LGTM
LGTM, thx