Add a feature to the scheduler to score fewer than all nodes in every scheduling cycle #66733
Conversation
[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process
allNodes := int32(g.cache.NodeTree().NumNodes)
numNodesToFind := g.numFeasibleNodesToFind(allNodes)
numNodesProcessed := int32(0)
for numNodesProcessed < allNodes {
Why not process all the nodes at once and stop once filteredLen >= numNodesToFind?
Once the work has been sent to Parallelize, you cannot stop it in the middle.
Adding a parameter to Parallelize to control this would be one solution, or we could define a new interface that implements this behavior.
Parallelize is part of the client-go library. We cannot change its parameters, but I agree that having another Parallelize function, for example ParallelizeUntil(..., condition), would be useful. That should be done as a separate PR, though. Do you think you can add one that can be used here?
Ok, when this PR is merged I will help implement it. :)
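For readers following the thread, a cancelable variant of Parallelize along the lines suggested above might look roughly like the sketch below. The name ParallelizeUntil, the stop-channel parameter, and the DoWorkPieceFunc alias are taken from the discussion here and are illustrative only; this is not the client-go API as it existed at the time of this conversation.

```go
package parallelize

import "sync"

// DoWorkPieceFunc mirrors the callback shape used by client-go's
// workqueue.Parallelize: it receives the index of the piece to process.
type DoWorkPieceFunc func(piece int)

// ParallelizeUntil is a sketch of a cancelable Parallelize: pieces are fanned
// out to `workers` goroutines, and workers stop picking up new pieces once
// stopCh is closed (for example, when enough feasible nodes have been found).
func ParallelizeUntil(workers, pieces int, doWorkPiece DoWorkPieceFunc, stopCh <-chan struct{}) {
	toProcess := make(chan int, pieces)
	for i := 0; i < pieces; i++ {
		toProcess <- i
	}
	close(toProcess)

	var wg sync.WaitGroup
	wg.Add(workers)
	for i := 0; i < workers; i++ {
		go func() {
			defer wg.Done()
			for piece := range toProcess {
				select {
				case <-stopCh:
					// The caller signalled that no more work is needed;
					// leave the remaining pieces unprocessed.
					return
				default:
					doWorkPiece(piece)
				}
			}
		}()
	}
	wg.Wait()
}
```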
@@ -45,6 +45,10 @@ import (
	"k8s.io/kubernetes/pkg/scheduler/volumebinder"
)

const (
	minFeasibleNodesToFind = 20
Why is it 20? I think we need a comment explaining this.
20 is an arbitrary value. I added a comment to explain.
@@ -336,6 +341,20 @@ func (g *genericScheduler) getLowerPriorityNominatedPods(pod *v1.Pod, nodeName s
	return lowerPriorityPods
}

// numFeasibleNodesToFind returns the number of feasible nodes that once found, the scheduler stops
// its search for more feasible nodes.
func (g *genericScheduler) numFeasibleNodesToFind(allNodes int32) int32 {
numAllNodes would be a better name.
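To make the discussion concrete, here is a hedged sketch of how such a helper could combine the percentage-of-nodes-to-score setting with the minFeasibleNodesToFind floor discussed above. The exact formula in the merged code may differ; percentageOfNodesToScore stands in for the value read from the scheduler's configuration.

```go
package scheduler

// minFeasibleNodesToFind is the floor discussed above: even in very large
// clusters the scheduler should always consider at least this many nodes.
const minFeasibleNodesToFind = 20

// numFeasibleNodesToFind sketches how the number of feasible nodes to search
// for could be derived. A small cluster, or a percentage that is unset or
// covers everything, means all nodes are checked.
func numFeasibleNodesToFind(numAllNodes, percentageOfNodesToScore int32) int32 {
	if numAllNodes < minFeasibleNodesToFind ||
		percentageOfNodesToScore <= 0 ||
		percentageOfNodesToScore >= 100 {
		return numAllNodes
	}
	numNodes := numAllNodes * percentageOfNodesToScore / 100
	if numNodes < minFeasibleNodesToFind {
		return minFeasibleNodesToFind
	}
	return numNodes
}
```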
@bsalamat does this need/have a PR against the 1.12 docs branch? Looks like a major change. Thanks!
@@ -68,6 +67,8 @@ func (o *DeprecatedOptions) AddFlags(fs *pflag.FlagSet, cfg *componentconfig.Kub
	fs.MarkDeprecated("hard-pod-affinity-symmetric-weight", "This option was moved to the policy configuration file")
	fs.StringVar(&cfg.FailureDomains, "failure-domains", cfg.FailureDomains, "Indicate the \"all topologies\" set for an empty topologyKey when it's used for PreferredDuringScheduling pod anti-affinity.")
	fs.MarkDeprecated("failure-domains", "Doesn't have any effect. Will be removed in future version.")
	fs.Int32Var(&cfg.PercentageOfNodesToScore, "percentage-of-nodes-to-score", cfg.PercentageOfNodesToScore,
Let's not add anything to the deprecated flags.
Since the componentconfig is still alpha, these options are not quite "deprecated"! 😉
Anyway, I removed it.
One comment about the flag. Otherwise sgtm.
@jimangel Yes, I will write/update docs.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: bsalamat, fejta, k82cn, sttts. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
New changes are detected. LGTM label has been removed.
/test all [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.
Automatic merge from submit-queue (batch tested with PRs 67555, 68196). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

Not split nodes when searching for nodes but doing it all at once

**What this PR does / why we need it**: Do not split the node list into chunks when searching for feasible nodes; check all of them in one pass.

**Special notes for your reviewer**: @bsalamat This is a follow-up PR to #66733. #66733 (comment)

**Release note**:
```release-note
Not split nodes when searching for nodes but doing it all at once.
```
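A rough sketch of what "doing it all at once" can look like, reusing the illustrative ParallelizeUntil from earlier in this thread. The fits() predicate, the worker count, and the function name are assumptions for the example, not the scheduler's actual API.

```go
package parallelize

import "sync"

// filterAllAtOnce sketches the follow-up approach: every node is handed to a
// single cancelable parallel pass, and the workers stop picking up new nodes
// once enough feasible ones have been found, instead of the scheduler feeding
// the workers in fixed-size chunks.
func filterAllAtOnce(nodeNames []string, numNodesToFind int32, fits func(string) bool) []string {
	var (
		lock     sync.Mutex
		filtered []string
		stopOnce sync.Once
	)
	stopCh := make(chan struct{})

	checkNode := func(i int) {
		if !fits(nodeNames[i]) {
			return
		}
		lock.Lock()
		filtered = append(filtered, nodeNames[i])
		enough := int32(len(filtered)) >= numNodesToFind
		lock.Unlock()
		if enough {
			// Signal the remaining workers that the search can stop.
			stopOnce.Do(func() { close(stopCh) })
		}
	}

	// 16 workers is illustrative; ParallelizeUntil is the sketch shown earlier.
	ParallelizeUntil(16, len(nodeNames), checkNode, stopCh)
	return filtered
}
```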
What this PR does / why we need it:
Today, the scheduler scores all the nodes in the cluster in every scheduling cycle (every time a pod is attempted). This feature implements a mechanism in the scheduler that allows scoring fewer than all of the nodes in the cluster. The scheduler stops searching for more nodes once the configured number of feasible nodes is found. This can help improve the scheduler's performance in large clusters (several hundred nodes and larger).
This PR also adds a new structure to the scheduler's cache, called NodeTree, that allows the scheduler to iterate over nodes across the different zones of a cluster. This is needed to avoid scoring the same set of nodes in every scheduling cycle.
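As a rough illustration of the NodeTree idea (not the actual scheduler cache type added by this PR), a structure that hands nodes out round-robin across zones could look like the sketch below; zoneTree and its fields are hypothetical names chosen for the example.

```go
package scheduler

import "sort"

// zoneTree is a simplified stand-in for the NodeTree described above: nodes
// are grouped by zone and handed out round-robin across zones, so successive
// scheduling cycles start from different nodes rather than always rescoring
// the same prefix of the node list. It assumes at least one zone with at
// least one node.
type zoneTree struct {
	zones   []string            // zone names in a fixed order
	nodes   map[string][]string // zone name -> node names in that zone
	zoneIdx int                 // which zone serves the next request
	nodeIdx map[string]int      // per-zone cursor into that zone's node list
}

func newZoneTree(nodesByZone map[string][]string) *zoneTree {
	t := &zoneTree{nodes: nodesByZone, nodeIdx: map[string]int{}}
	for zone := range nodesByZone {
		t.zones = append(t.zones, zone)
	}
	sort.Strings(t.zones) // deterministic order for the round-robin
	return t
}

// next returns the name of the next node, rotating across zones so that no
// single zone dominates the front of the search.
func (t *zoneTree) next() string {
	zone := t.zones[t.zoneIdx%len(t.zones)]
	t.zoneIdx++
	names := t.nodes[zone]
	name := names[t.nodeIdx[zone]%len(names)]
	t.nodeIdx[zone]++
	return name
}
```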
Which issue(s) this PR fixes: Fixes #66627
Special notes for your reviewer:
This is a large PR, but it is broken into a few logical commits. Reviewing will be easier if you go commit by commit.
Release note: