Make DaemonSet respect critical pods annotation when scheduling #42028
Conversation
Added a few comments. Thanks for the PR!
@@ -741,6 +749,9 @@ func (dsc *DaemonSetsController) nodeShouldRunDaemonPod(node *v1.Node, ds *exten

	// TODO: Move it to the predicates
	for _, c := range node.Status.Conditions {
		if critical {
			break
		}
		if c.Type == v1.NodeOutOfDisk && c.Status == v1.ConditionTrue {
not for this PR: there are other conditions the DaemonSet should ideally respect too; MemoryPressure and DiskPressure, for example, are some of them.
Added TODO
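Not part of this change, but a rough sketch of what folding those conditions into one check could look like. The helper name is hypothetical, and it assumes the `v1.NodeMemoryPressure` and `v1.NodeDiskPressure` condition constants from the same core API package the controller already imports:

```go
// nodeUnderPressure is a hypothetical helper, not code from this PR: it
// reports whether any pressure/out-of-disk condition is true on the node,
// so non-critical daemon pods could skip such nodes the same way they
// skip out-of-disk nodes today.
func nodeUnderPressure(node *v1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Status != v1.ConditionTrue {
			continue
		}
		switch c.Type {
		case v1.NodeOutOfDisk, v1.NodeMemoryPressure, v1.NodeDiskPressure:
			return true
		}
	}
	return false
}
```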
			predicateFails = append(predicateFails, reasons...)
		}
	} else {
		fit, reasons, err = predicates.GeneralPredicates(pod, nil, nodeInfo)
You can also run a GeneralPredicates check and ignore all resource requirement failures.
Yeah, I thought about it, but it's easier to manage predicates this way than whitelisting failures in the DS controller.
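For reference, a sketch of that alternative as described above: run GeneralPredicates and whitelist resource failures for critical pods. The helper name is hypothetical, and the `*predicates.InsufficientResourceError` assertion is an assumption about how resource requirement failures are reported by the scheduler's predicates package:

```go
// criticalGeneralPredicates is a hypothetical helper, not the PR's
// approach: run the full GeneralPredicates check and drop resource
// requirement failures, so a critical pod still fits on a node that is
// merely short on resources.
func criticalGeneralPredicates(pod *v1.Pod, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
	fit, reasons, err := predicates.GeneralPredicates(pod, nil, nodeInfo)
	if err != nil {
		return false, nil, err
	}
	if fit {
		return true, nil, nil
	}
	kept := make([]algorithm.PredicateFailureReason, 0, len(reasons))
	for _, r := range reasons {
		// Assumption: resource failures surface as *InsufficientResourceError.
		if _, ok := r.(*predicates.InsufficientResourceError); ok {
			continue
		}
		kept = append(kept, r)
	}
	return len(kept) == 0, kept, nil
}
```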
}

// noncriticalPredicates are the predicates that only non-critical pods need
func noncriticalPredicates(pod *v1.Pod, meta interface{}, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
PodFitsHost is essential, and PodFitsHostPorts might be essential too for now. The former is a basic requirement to admit pods, and preemption will not work around it. The latter is a resource that kubelet does not support; I see it as a bug, since kubelet should ideally be preempting pods to free up host ports too. cc @dashpole
Moved them to essential and added TODO for PodFitsHostPorts
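A sketch of the resulting split described in this thread (assumed shape, not the exact merged code): PodFitsHost lives in the essential set that every pod, critical or not, must pass, with the host-ports check left as a TODO. It assumes the scheduler's `predicates.PodFitsHost` and the `algorithm`/`schedulercache` types already used in the diff:

```go
// EssentialPredicates, as sketched here, checks only what no pod can be
// admitted without. The merged implementation may differ in detail.
func EssentialPredicates(pod *v1.Pod, meta interface{}, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
	var failures []algorithm.PredicateFailureReason
	// PodFitsHost is a basic admission requirement; preemption cannot work around it.
	fit, reasons, err := predicates.PodFitsHost(pod, meta, nodeInfo)
	if err != nil {
		return false, nil, err
	}
	if !fit {
		failures = append(failures, reasons...)
	}
	// TODO: move PodFitsHostPorts here once kubelet can preempt pods to
	// free up host ports.
	return len(failures) == 0, failures, nil
}
```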
Force-pushed from 1634a0c to e0fcc78 (compare)
	// If the pod is marked as critical and support for critical pod annotations is enabled,
	// check predicates for critical pods only.
	fit, reasons, err = predicates.EssentialPredicates(pod, nil, nodeInfo)
	if err != nil {
nit: The error handling logic seems to be the same for both cases here. Why not combine them into one segment?
if critical {
	fit, reasons, err = predicates.EssentialPredicates(...)
} else {
	fit, reasons, err = predicates.GeneralPredicates(...)
}
// Handle errors.
Good idea! Done
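The combined segment presumably ends up along these lines. This is only a sketch wrapped in a hypothetical helper, not a quote of the merged code, and it reuses the predicate calls shown in the hunks above:

```go
// daemonPodPredicates is a hypothetical wrapper showing the combined shape:
// choose the predicate set by criticality, then handle errors and failure
// reasons in one place.
func daemonPodPredicates(pod *v1.Pod, nodeInfo *schedulercache.NodeInfo, critical bool) ([]algorithm.PredicateFailureReason, error) {
	var (
		fit     bool
		reasons []algorithm.PredicateFailureReason
		err     error
	)
	if critical {
		// Critical pods only need the essential predicates.
		fit, reasons, err = predicates.EssentialPredicates(pod, nil, nodeInfo)
	} else {
		fit, reasons, err = predicates.GeneralPredicates(pod, nil, nodeInfo)
	}
	if err != nil {
		return nil, err
	}
	var predicateFails []algorithm.PredicateFailureReason
	if !fit {
		predicateFails = append(predicateFails, reasons...)
	}
	return predicateFails, nil
}
```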
one more nit. otherwise lgtm
Force-pushed from e0fcc78 to 8ffeb49 (compare)
Fixed nit and squashed
/lgtm
@k8s-bot kops aws e2e test this
Sigh; this PR shows why the critical pods thing is such an awful hack. I'm looking forward to getting rid of it ASAP. BTW, in the future please try to get an approval from me or @timothysc for changes to the scheduler. We would have approved this change anyway, but it's easy to miss @ mentions, so that's unfortunately the only way to ensure that between him and me we know what's going on.
@davidopp apologies. I was trying to shepherd it through in time for the v1.6 code freeze.
No problem; that's why I said "in the future"
lgtm but I hope we are going to proceed with #42002 and remove all this madness from the DS controller.
Yes, that's the plan
@janetkuo thanks!
Force-pushed from 8ffeb49 to 4c88247 (compare)
[APPROVALNOTIFIER] This PR is NOT APPROVED
The following people have approved this PR: janetkuo, k8s-merge-robot, vishh
Needs approval from an approver in each of these OWNERS files. We suggest the following people:
Just rebased. Re-applying tag.
Automatic merge from submit-queue (batch tested with PRs 41234, 42186, 41615, 42028, 41788)
What this PR does / why we need it: #41612
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #41612
Special notes for your reviewer:
Release note:
cc @kubernetes/sig-apps-feature-requests @erictune @vishh @liggitt @Kargakis @lukaszo @piosz @davidopp