feat: update scheduling queue with options #81263
Conversation
/retest
Regards,
Draven
On Aug 11, 2019, 8:59 PM +0800, Kubernetes Prow Robot wrote:
@draveness: The following tests failed, say /retest to rerun them all:
- pull-kubernetes-integration (commit 4b19c39): rerun with /test pull-kubernetes-integration
- pull-kubernetes-e2e-gce (commit 4b19c39): rerun with /test pull-kubernetes-e2e-gce
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/lgtm
Just be more accurate with the PR description: "feat: add options to scheduling queue" reads better. Also please add a Release note, as this offers a new configuration option.
This is an internal package which does not require a release note.
Regards,
Draven
Not sure if I missed something.
Adding the pod max backoff duration to the scheduler API would trigger an API review; we could do that in a follow-up PR if the implementation here makes sense.
On Aug 13, 2019, 3:11 PM +0800, Yuquan Ren wrote:
@NickrenREN commented on this pull request.
In pkg/scheduler/internal/queue/scheduling_queue.go:
> @@ -160,12 +188,16 @@ func activeQComp(podInfo1, podInfo2 interface{}) bool {
}
// NewPriorityQueue creates a PriorityQueue object.
-func NewPriorityQueue(stop <-chan struct{}, fwk framework.Framework) *PriorityQueue {
- return NewPriorityQueueWithClock(stop, util.RealClock{}, fwk)
-}
+func NewPriorityQueue(
I mean, how can I set the max duration value? Is there a flag for that, or does it get added somewhere?
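For context, here is a minimal sketch of the options mechanism this discussion revolves around, assuming a variadic NewPriorityQueue as the truncated diff suggests; the helper name WithPodMaxBackoffDuration and the local Clock stand-in are illustrative, not necessarily the merged code.

```go
package queue

import "time"

// Clock is a minimal stand-in for the scheduler's clock abstraction
// (util.RealClock in the old constructor), kept local so the sketch compiles.
type Clock interface {
	Now() time.Time
}

// priorityQueueOptions collects the tunables for the scheduling queue.
type priorityQueueOptions struct {
	clock                 Clock
	podMaxBackoffDuration time.Duration
}

// Option mutates priorityQueueOptions before the queue is constructed.
type Option func(*priorityQueueOptions)

// WithPodMaxBackoffDuration overrides the maximum pod backoff duration.
func WithPodMaxBackoffDuration(d time.Duration) Option {
	return func(o *priorityQueueOptions) { o.podMaxBackoffDuration = d }
}

// The default mirrors the previously hard-coded NewPodBackoffMap(1s, 10s) cap;
// the clock is left nil here, but a real default would supply a real clock.
var defaultPriorityQueueOptions = priorityQueueOptions{
	podMaxBackoffDuration: 10 * time.Second,
}
```

Under that sketch a caller could write something like q := NewPriorityQueue(stop, fwk, WithPodMaxBackoffDuration(30*time.Second)); any command-line flag or component-config plumbing (the question above) would be layered on top of such an option separately.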
Why do we need a follow-up PR? Why not add that in this PR, since they are one fix?
As @draveness mentioned, that change requires an API review, possibly a KEP. It shouldn't take too long, though.
/assign @Huang-Wei
Friendly ping @Huang-Wei for review, thanks!
I still prefer combining them together, just like what @everpeace does in his PR. Otherwise, I do not know if this PR is meaningful.
> Otherwise, i do not know if this PR is meaningful.

Using the option pattern could remove initializers like `NewPriorityQueueWithXXX` (see the sketch after this comment).

> I still prefer combining them together just like @everpeace does in his PR.

I don't have a preference on this, and I'll update the API later in this PR.
Regards,
Draven
On Aug 21, 2019, 2:07 PM +0800, Yuquan Ren wrote:
@NickrenREN commented on this pull request.
In pkg/scheduler/internal/queue/scheduling_queue.go:
> stop: stop,
- podBackoff: NewPodBackoffMap(1*time.Second, 10*time.Second),
+ podBackoff: NewPodBackoffMap(1*time.Second, options.podMaxBackoffDuration),
If we just want to make the backoff policy flexible, maybe the way the backoff duration is calculated also needs to be configurable...
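Continuing the illustrative sketch from earlier in the thread: applying options inside a single constructor is what lets dedicated initializers such as NewPriorityQueueWithClock (or any other NewPriorityQueueWithXXX) go away, because each tunable becomes just another Option. The constructor body below is a simplified stand-in, not the actual PriorityQueue construction.

```go
// WithClock injects a clock, replacing the old NewPriorityQueueWithClock.
func WithClock(c Clock) Option {
	return func(o *priorityQueueOptions) { o.clock = c }
}

// newPriorityQueueSketch shows the pattern: start from defaults, apply options.
// The real constructor would build and return a *PriorityQueue; the resolved
// options are returned here only to keep the sketch compact.
func newPriorityQueueSketch(stop <-chan struct{}, opts ...Option) priorityQueueOptions {
	options := defaultPriorityQueueOptions
	for _, opt := range opts {
		opt(&options)
	}
	_ = stop // the real queue would wire this into its background goroutines
	return options
}
```

Read this way, a call such as NewPriorityQueueWithClock(stop, clock, fwk) would become NewPriorityQueue(stop, fwk, WithClock(clock)).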
I agree with this. As I mentioned in #81263 (comment), the initial backoff duration is one of the key parameters for tuning starvation in the scheduler. So I would be glad if you supported the initial backoff duration in this PR. If you don't mind, I can send a PR to your branch to support this.
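Purely to illustrate why both knobs matter for starvation tuning (this is not the actual PodBackoffMap logic): a per-pod backoff that doubles from the initial duration and is capped at the max looks roughly like this.

```go
package backoffsketch

import "time"

// backoffDuration doubles per failed scheduling attempt, starting at initial
// and capped at max. With initial=1s and max=10s the sequence is
// 1s, 2s, 4s, 8s, 10s, 10s, ...
func backoffDuration(attempts int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < attempts; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}
```

Exposing both the initial and the max duration, as suggested here, lets operators tune how soon an unschedulable pod retries and how long it can be deferred at most.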
Kindly ping @liggitt for approval.
(force-pushed from 00603d4 to 0a831cf)
/retest
/retest
On Oct 4, 2019, 9:54 PM +0800, Kubernetes Prow Robot wrote:
@draveness: The following test failed, say /retest to rerun them all:
- pull-kubernetes-e2e-gce-100-performance (commit 0a831cf): rerun with /test pull-kubernetes-e2e-gce-100-performance
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/approve
Will leave the final lgtm to scheduler reviewers after the rebase is done.
pkg/scheduler/apis/config/types.go
Outdated
// PodInitialBackoffSeconds is the initial backoff for unschedulable pods.
// If specified, it must be greater than 0. If this value is null, the default value (1s)
// will be used.
PodInitialBackoffSeconds *int64 `json:"podInitialBackoffSeconds"`
@liggitt The internal version shouldn't have the `json:"..."` tag, right?
(Not sure whether it's mandatory or optional, but I notice most internal types.go files don't have the tag.)
ah, true. good catch
thanks, this was removed
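For readers unfamiliar with the convention being discussed, a simplified illustration (the type names are made up; only the PodInitialBackoffSeconds field comes from the snippet above): the internal config type carries no JSON tags because it is never serialized directly, while the external, versioned type does.

```go
package configsketch

// Internal version, in the style of pkg/scheduler/apis/config/types.go: no json tags.
type SchedulerConfigInternal struct {
	// PodInitialBackoffSeconds is the initial backoff for unschedulable pods.
	// If nil, the default (1s) will be used.
	PodInitialBackoffSeconds *int64
}

// Versioned (external) type, e.g. a v1alpha1 API: json tags drive
// (de)serialization, and conversion functions map it to the internal type.
type SchedulerConfigV1Alpha1 struct {
	PodInitialBackoffSeconds *int64 `json:"podInitialBackoffSeconds"`
}
```

The usual rationale is that only the versioned types are read from or written to config files, so tags on the internal type are unnecessary.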
(force-pushed from 0a831cf to 4c38d90)
(force-pushed from 4c38d90 to f67b516)
Hi @ahg-g @Huang-Wei, all the comments have been addressed and the API review is complete; please take another look.
Thanks, please squash.
(force-pushed from f67b516 to 9646afb)
Done, PTAL.
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: ahg-g, draveness, liggitt
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
It still needs an LGTM :)
/lgtm
…eduling-queue-with-options feat: update scheduling queue with options
/kind feature
/sig scheduling
/assign @bsalamat @NickrenREN
What this PR does / why we need it:
Use priorityQueueOptions to refactor the priority queue, which could make configuration easier.
Which issue(s) this PR fixes:
ref: #81214
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: