Add local storage support in Quota #49610
Conversation
@NickrenREN: GitHub didn't allow me to request PR reviews from the following users: ddysher. Note that only kubernetes members can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @jingxu97
The change assumes EmptyDir.SizeLimit serves as both the request and the limit in terms of resource allocation. That is not how people would explicitly request storage.kubernetes.io/scratch or storage.kubernetes.io/overlay. Do we want to force the two resources to have the same request/limit as well?
pkg/quota/evaluator/core/pods.go
Outdated
@@ -115,10 +121,32 @@ func (p *podEvaluator) Constraints(required []api.ResourceName, item runtime.Obj
	requiredSet := quota.ToSet(required)
	missingSet := sets.NewString()
	for i := range pod.Spec.Containers {
		enforcePodContainerConstraints(&pod.Spec.Containers[i], requiredSet, missingSet)
		if len(pod.Spec.Containers[i].VolumeMounts) == 0 {
This looks a little verbose and duplicated between the container and init-container checks; why not have enforcePodContainerConstraints take care of checking volume mounts, since you already pass container[i] into it?
Yes, but we also need the pod's volume info, so I added the pod to the parameters.
Done, thanks
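The refactor agreed on above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: `Pod`, `Container`, and the `noMounts` bookkeeping are simplified stand-ins for the real Kubernetes API types, and the real helper also compares resource names against the required quota set.

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes API types used in the PR;
// the real api.Pod and api.Container carry many more fields.
type VolumeMount struct{ Name string }

type Container struct {
	Name         string
	VolumeMounts []VolumeMount
}

type Pod struct {
	Containers     []Container
	InitContainers []Container
}

// enforcePodContainerConstraints sketches the reviewer's suggestion:
// pass the pod along with each container so the volume-mount check
// lives inside the helper instead of being repeated at both call sites.
// Here it only records containers that mount no volumes.
func enforcePodContainerConstraints(pod *Pod, c *Container, noMounts map[string]bool) {
	_ = pod // pod-level volume info would be consulted here
	if len(c.VolumeMounts) == 0 {
		noMounts[c.Name] = true
	}
}

// checkAll shows the deduplicated call sites: the same loop body shape
// for both containers and init containers.
func checkAll(pod *Pod) map[string]bool {
	noMounts := map[string]bool{}
	for i := range pod.Containers {
		enforcePodContainerConstraints(pod, &pod.Containers[i], noMounts)
	}
	for i := range pod.InitContainers {
		enforcePodContainerConstraints(pod, &pod.InitContainers[i], noMounts)
	}
	return noMounts
}

func main() {
	pod := &Pod{
		Containers:     []Container{{Name: "app"}},
		InitContainers: []Container{{Name: "init", VolumeMounts: []VolumeMount{{Name: "scratch"}}}},
	}
	fmt.Println(checkAll(pod))
}
```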
pkg/quota/evaluator/core/pods.go
Outdated
	}
	for i := range pod.Spec.InitContainers {
		enforcePodContainerConstraints(&pod.Spec.InitContainers[i], requiredSet, missingSet)
		if len(pod.Spec.InitContainers[i].VolumeMounts) == 0 {
What is the effect of having volumes but no volume mounts in Pod?
This PR is confined to the container level.
pkg/quota/evaluator/core/pods.go
Outdated
requests := container.Resources.Requests
limits := container.Resources.Limits
containerUsage := podUsageHelper(requests, limits)
containerSet := quota.ToSet(quota.ResourceNames(containerUsage))
if !containerSet.Equal(requiredSet) {
	difference := requiredSet.Difference(containerSet)
	missingSet.Insert(difference.List()...)
	if podVolume == nil {
Can we simplify the logic to:
if podVolume == nil || !checkStorageScratchLimitedByEmptyDir(podVolume, difference)
pkg/quota/evaluator/core/pods.go
Outdated
// checkStorageScratchLimitedByEmptyDir checks whether the missing resource is storage scratch and if it is set
// by emptydir
func checkStorageScratchLimitedByEmptyDir(podVolume *api.Volume, missingSet sets.String) bool {
	//missingSet must be subset of {api.ResourceStorageScratch,api.ResourceRequestsStorageScratch,api.ResourceLimitsStorageScratch}
nit: add space after //
Thanks
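The helper under review, and the simplified guard suggested above, can be sketched like this. The resource-name constants, the `Volume` struct, and the set representation are hypothetical stand-ins for `api.ResourceName`, `api.Volume`, and `sets.String`; the sketch only illustrates the subset-plus-emptyDir logic, not the real implementation.

```go
package main

import "fmt"

// Hypothetical stand-ins for the alpha scratch resource names in the PR.
const (
	resourceStorageScratch         = "storage.kubernetes.io/scratch"
	resourceRequestsStorageScratch = "requests.storage.kubernetes.io/scratch"
	resourceLimitsStorageScratch   = "limits.storage.kubernetes.io/scratch"
)

// Volume stands in for api.Volume; IsEmptyDir stands in for
// checking Volume.VolumeSource.EmptyDir != nil.
type Volume struct {
	Name       string
	IsEmptyDir bool
}

var scratchResources = map[string]bool{
	resourceStorageScratch:         true,
	resourceRequestsStorageScratch: true,
	resourceLimitsStorageScratch:   true,
}

// checkStorageScratchLimitedByEmptyDir reports whether every missing
// resource is a scratch resource and the volume is an emptyDir, in
// which case its SizeLimit already bounds the scratch usage.
func checkStorageScratchLimitedByEmptyDir(v *Volume, missingSet map[string]bool) bool {
	if !v.IsEmptyDir {
		return false
	}
	for name := range missingSet {
		if !scratchResources[name] {
			return false
		}
	}
	return true
}

func main() {
	missing := map[string]bool{resourceLimitsStorageScratch: true}
	v := &Volume{Name: "scratch", IsEmptyDir: true}
	// The reviewer's simplified guard: record the missing resources
	// unless an emptyDir volume already limits the scratch usage.
	if v == nil || !checkStorageScratchLimitedByEmptyDir(v, missing) {
		fmt.Println("insert into missingSet")
	} else {
		fmt.Println("covered by emptyDir SizeLimit")
	}
}
```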
pkg/quota/evaluator/core/pods.go
Outdated
}
// InitContainers are run sequentially before other containers start, so the highest
// init container resource is compared against the sum of app containers to determine
// the effective usage for both requests and limits.
for i := range pod.Spec.InitContainers {
	requests = quota.Max(requests, pod.Spec.InitContainers[i].Resources.Requests)
	limits = quota.Max(limits, pod.Spec.InitContainers[i].Resources.Limits)
	for _, volumeMount := range pod.Spec.InitContainers[i].VolumeMounts {
Shouldn't you compute request and limit first, before computing quota.Max?
Thanks, yes, we need to
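The init-container rule the code comment describes can be reduced to a one-resource sketch. `effectiveRequest` is a hypothetical helper for illustration (the real `quota.Max`/`quota.Add` operate on full `api.ResourceList` maps): because init containers run one at a time, the pod's effective usage is the larger of the biggest single init-container value and the sum over app containers.

```go
package main

import "fmt"

// max64 mirrors the quota.Max semantics for a single scalar resource.
func max64(a, b int64) int64 {
	if a > b {
		return a
	}
	return b
}

// effectiveRequest computes the effective pod-level request for one
// resource: init containers run sequentially, so only the largest one
// counts; app containers run concurrently, so they are summed.
func effectiveRequest(initRequests, appRequests []int64) int64 {
	var maxInit, sumApp int64
	for _, r := range initRequests {
		maxInit = max64(maxInit, r)
	}
	for _, r := range appRequests {
		sumApp += r
	}
	return max64(maxInit, sumApp)
}

func main() {
	// One big init container (500) vs two app containers (100 + 200):
	// the init container dominates, so the effective request is 500.
	fmt.Println(effectiveRequest([]int64{500}, []int64{100, 200}))
}
```

This also shows why the review comment matters: the per-container request and limit must be accumulated first, and only then compared via the max, or the init-container peak is computed against stale values.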
pkg/quota/evaluator/core/pods.go
Outdated
@@ -245,13 +324,51 @@ func PodUsageFunc(obj runtime.Object) (api.ResourceList, error) {
for i := range pod.Spec.Containers {
	requests = quota.Add(requests, pod.Spec.Containers[i].Resources.Requests)
	limits = quota.Add(limits, pod.Spec.Containers[i].Resources.Limits)
	for _, volumeMount := range pod.Spec.Containers[i].VolumeMounts {
Use a helper function to remove the duplicated code for containers and init containers.
Force-pushed from 2771ba5 to b163ad7.
Hope that I have addressed all your comments. @ddysher Thanks
pkg/quota/evaluator/core/pods.go
Outdated
if podVolume.Name == volumeMount.Name {
	if checkStorageScratchLimitedByEmptyDir(&podVolume, difference) {
		scratchLimitedByEmpty = true
	}
Returning directly after scratchLimitedByEmpty = true would be better.
Yeah, thanks, done
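The early-return suggested above looks roughly like this sketch (simplified `volume` type and `scratchLimited` name are illustrative, not the PR's actual code): once a matching emptyDir volume is found there is nothing left to learn from the remaining volumes, so the function returns immediately instead of setting a flag and continuing the scan.

```go
package main

import "fmt"

// volume is a minimal stand-in for api.Volume.
type volume struct {
	name       string
	isEmptyDir bool
}

// scratchLimited reports whether the named mount is backed by an
// emptyDir volume. The early return replaces the
// scratchLimitedByEmpty = true flag-and-continue pattern.
func scratchLimited(volumes []volume, mountName string) bool {
	for _, v := range volumes {
		if v.name == mountName && v.isEmptyDir {
			return true // no need to scan the remaining volumes
		}
	}
	return false
}

func main() {
	vols := []volume{{name: "logs"}, {name: "scratch", isEmptyDir: true}}
	fmt.Println(scratchLimited(vols, "scratch"))
}
```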
Force-pushed from b163ad7 to 9ee401c.
/retest
pkg/api/types.go
Outdated
@@ -3693,10 +3693,22 @@ const (
ResourceRequestsMemory ResourceName = "requests.memory"
// Storage request, in bytes
ResourceRequestsStorage ResourceName = "requests.storage"
// Local Storage request for overlay filesystem, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024)
// The resource name for ResourceStorageOverlay is alpha and it can change across releases.
ResourceRequestsStorageOverlay ResourceName = "requests.storage.kubernetes.io/overlay"
Modify the comments to match the name
@NickrenREN Thank you for the PR. The code is ok, but we have to address the following issues before checking in the code.
I will open up an issue for discussion: #49818
Force-pushed from 9ee401c to a7dd0dc.
The issue for discussing this feature has been opened: #49818
Force-pushed from a7dd0dc to b35882a.
Feature gate added @jingxu97
Force-pushed from b35882a to 0ba804d.
/cc @derekwaynecarr
There was a meeting to potentially simplify local storage management down to a single resource. I would like to hold this PR until the design is actually closed.
Force-pushed from 0ba804d to 4f655ab.
Force-pushed from 4f655ab to 9b32b80.
/retest
api.ResourceLimitsCPU,
api.ResourceLimitsMemory,
api.ResourceLimitsEphemeralStorage,
Why are there two ResourceLimitsEphemeralStorage entries?
Sorry, my bad
Modified
Force-pushed from 9b32b80 to 0caa4da.
/lgtm
/approve
cc @dashpole
Cool, thanks @jingxu97
Force-pushed from 0caa4da to bc393e3.
This looks good to me now. /lgtm
Cool, thanks @derekwaynecarr
LGTM. I think I've requested changes before but do not know how to remove the request.
/approve no-issue
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: NickrenREN, derekwaynecarr, jingxu97, thockin Associated issue requirement bypassed by: derekwaynecarr, thockin The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files:
You can indicate your approval by writing
/retest
Automatic merge from submit-queue (batch tested with PRs 50932, 49610, 51312, 51415, 50705)
Add local storage(scratch, overlay) support in quota
Release note:
/cc @ddysher @jingxu97