fixing array out of bound by checking initContainers instead of containers #58574
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Force-pushed from 07cd678 to 1a14fa4
@@ -609,9 +609,9 @@ func podRequest(pod *v1.Pod, resourceName v1.ResourceName) resource.Quantity {
	for i := range pod.Spec.InitContainers {
		switch resourceName {
		case v1.ResourceMemory:
			containerValue.Add(*pod.Spec.Containers[i].Resources.Requests.Memory())
I believe you should use initValue.Add and not containerValue.Add.
Seems the second commit didn't get pushed; fixed.
LGTM
pkg/kubelet/eviction/helpers.go (Outdated)
		case resourceDisk:
-			containerValue.Add(*pod.Spec.Containers[i].Resources.Requests.StorageEphemeral())
+			containerValue.Add(*pod.Spec.InitContainers[i].Resources.Requests.StorageEphemeral())
Same here
same
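For reference, a minimal sketch (assumed names, not the exact helpers.go code) of the shape podRequest takes once these two comments are addressed: each loop indexes its own slice, which is what removes the index-out-of-range panic from #58541 (a pod with more init containers than app containers walked off the end of pod.Spec.Containers), and the init-container loop accumulates into initValue instead of containerValue. The resourceDisk constant and the final combination of the two values are assumptions here; how the two values should be combined is exactly what the next review comment raises.

package eviction

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// resourceDisk is assumed to be the local alias helpers.go uses for the
// ephemeral-disk pseudo-resource.
const resourceDisk v1.ResourceName = "disk"

// podRequestSketch (hypothetical name) mirrors the corrected accounting:
// app containers and init containers are each iterated over their own slice.
func podRequestSketch(pod *v1.Pod, resourceName v1.ResourceName) resource.Quantity {
	containerValue := resource.Quantity{Format: resource.BinarySI}
	for i := range pod.Spec.Containers {
		switch resourceName {
		case v1.ResourceMemory:
			containerValue.Add(*pod.Spec.Containers[i].Resources.Requests.Memory())
		case resourceDisk:
			containerValue.Add(*pod.Spec.Containers[i].Resources.Requests.StorageEphemeral())
		}
	}
	initValue := resource.Quantity{Format: resource.BinarySI}
	for i := range pod.Spec.InitContainers {
		switch resourceName {
		case v1.ResourceMemory:
			// Index InitContainers with i (previously Containers[i], which panics
			// when len(InitContainers) > len(Containers)).
			initValue.Add(*pod.Spec.InitContainers[i].Resources.Requests.Memory())
		case resourceDisk:
			initValue.Add(*pod.Spec.InitContainers[i].Resources.Requests.StorageEphemeral())
		}
	}
	// How containerValue and initValue are combined (sum vs. max) is the subject of
	// the discussion below; summing them here is only a placeholder.
	containerValue.Add(initValue)
	return containerValue
}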
Force-pushed from 1a14fa4 to 8285b6d
Force-pushed from 8285b6d to 5110b6d
/ok-to-test
/assign @dashpole
/retest
pkg/kubelet/eviction/helpers.go (Outdated)
@@ -609,9 +609,9 @@ func podRequest(pod *v1.Pod, resourceName v1.ResourceName) resource.Quantity {
	for i := range pod.Spec.InitContainers {
		switch resourceName {
		case v1.ResourceMemory:
-			containerValue.Add(*pod.Spec.Containers[i].Resources.Requests.Memory())
+			initValue.Add(*pod.Spec.InitContainers[i].Resources.Requests.Memory())
Is this logic right? Since init containers run sequentially, I thought it was supposed to be max(max(initContainers), sum(containers)).
That is the approach taken at kubernetes/pkg/quota/evaluator/core/pods.go, lines 276 to 317 in c9c6901:
// PodUsageFunc returns the quota usage for a pod.
// A pod is charged for quota if the following are not true.
//  - pod has a terminal phase (failed or succeeded)
//  - pod has been marked for deletion and grace period has expired
func PodUsageFunc(obj runtime.Object, clock clock.Clock) (api.ResourceList, error) {
	pod, err := toInternalPodOrError(obj)
	if err != nil {
		return api.ResourceList{}, err
	}
	// always quota the object count (even if the pod is end of life)
	// object count quotas track all objects that are in storage.
	// where "pods" tracks all pods that have not reached a terminal state,
	// count/pods tracks all pods independent of state.
	result := api.ResourceList{
		podObjectCountName: *(resource.NewQuantity(1, resource.DecimalSI)),
	}
	// by convention, we do not quota compute resources that have reached end-of life
	// note: the "pods" resource is considered a compute resource since it is tied to life-cycle.
	if !QuotaPod(pod, clock) {
		return result, nil
	}
	requests := api.ResourceList{}
	limits := api.ResourceList{}
	// TODO: ideally, we have pod level requests and limits in the future.
	for i := range pod.Spec.Containers {
		requests = quota.Add(requests, pod.Spec.Containers[i].Resources.Requests)
		limits = quota.Add(limits, pod.Spec.Containers[i].Resources.Limits)
	}
	// InitContainers are run sequentially before other containers start, so the highest
	// init container resource is compared against the sum of app containers to determine
	// the effective usage for both requests and limits.
	for i := range pod.Spec.InitContainers {
		requests = quota.Max(requests, pod.Spec.InitContainers[i].Resources.Requests)
		limits = quota.Max(limits, pod.Spec.InitContainers[i].Resources.Limits)
	}
	result = quota.Add(result, podComputeUsageHelper(requests, limits))
	return result, nil
}
@liggitt - Indeed, since they're run sequentially, I think it might be the right logic.
This should look like https://github.com/kubernetes/kubernetes/blob/master/pkg/api/resource/helpers.go#L51. @liggitt is correct above
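To make that rule concrete, here is a minimal sketch (hypothetical package and helper names, standard resource names such as memory, not the exact helpers.go or quota code) of the effective-request computation both references describe: sum the app-container requests, then raise the result to the largest single init-container request if that is bigger, because init containers run one at a time before the app containers start.

package podresources

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// podEffectiveRequest (hypothetical name) computes
// max(sum over app containers, max over init containers) for a single resource,
// which is the rule the quoted quota evaluator applies.
func podEffectiveRequest(pod *v1.Pod, resourceName v1.ResourceName) resource.Quantity {
	// Sum the requests of the long-running (app) containers; they run concurrently.
	total := resource.Quantity{Format: resource.BinarySI}
	for i := range pod.Spec.Containers {
		if req, ok := pod.Spec.Containers[i].Resources.Requests[resourceName]; ok {
			total.Add(req)
		}
	}
	// Init containers run sequentially before the app containers, so only the
	// largest single init-container request can raise the pod's effective request.
	for i := range pod.Spec.InitContainers {
		if req, ok := pod.Spec.InitContainers[i].Resources.Requests[resourceName]; ok {
			if req.Cmp(total) > 0 {
				total = req
			}
		}
	}
	return total
}

For example, a pod whose app containers request 100Mi and 200Mi of memory and whose single init container requests 500Mi has an effective memory request of 500Mi rather than 800Mi.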
Force-pushed from 5110b6d to 3c54c42
Force-pushed from 9be6745 to f1b7a92
Force-pushed from f1b7a92 to ed8e75a
/lgtm
/assign @dchen1107
[MILESTONENOTIFIER] Milestone Pull Request Needs Approval
@dashpole @dchen1107 @yastij @kubernetes/sig-node-misc
Action required: This pull request must have the required Pull Request Labels applied.
This should be cherry-picked to 1.9.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: dashpole, dchen1107, yastij
Associated issue: #58541
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS files.
You can indicate your approval by writing /approve in a comment.
cc @mehdy, 1.9 branch manager, for the cherry-pick.
/test all
[submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.
Commit found in the "release-1.9" branch appears to be this PR. Removing the "cherrypick-candidate" label. If this is an error, find help to get your PR picked.
What this PR does / why we need it: Fixes an index-out-of-range panic in podRequest by indexing pod.Spec.InitContainers (rather than pod.Spec.Containers) inside the init-container loop, and accounts for init-container requests using the max-of-init versus sum-of-containers rule discussed above.
Which issue(s) this PR fixes: Fixes #58541
Special notes for your reviewer:
Release note: