
Add local storage support in Quota #49610

Merged (2 commits) Aug 29, 2017

Conversation

@NickrenREN (Contributor) commented Jul 26, 2017

Add local storage (scratch, overlay) support in quota

Release note:

Add local ephemeral storage support to Quota

/cc @ddysher @jingxu97

@k8s-ci-robot (Contributor)

@NickrenREN: GitHub didn't allow me to request PR reviews from the following users: ddysher.

Note that only kubernetes members can review this PR, and authors cannot review their own PRs.

In response to this:

Add local storage(scratch, overlay) support in quota

Release note:

Add local storage(scratch, overlay) support in Quota

/cc @ddysher @jingxu97

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot requested a review from jingxu97 July 26, 2017 04:03
@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jul 26, 2017
@k8s-github-robot k8s-github-robot added kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API size/L Denotes a PR that changes 100-499 lines, ignoring generated files. release-note Denotes a PR that will be considered when it comes time to generate release notes. labels Jul 26, 2017
@NickrenREN (Author)

/assign @jingxu97

@ddysher (Contributor) left a comment

The change assumes EmptyDir.SizeLimit serves as both the request and the limit in terms of resource allocation. This is not how people would explicitly request storage.kubernetes.io/scratch or storage.kubernetes.io/overlay; do we want to force the two resources to have the same request/limit as well?

@@ -115,10 +121,32 @@ func (p *podEvaluator) Constraints(required []api.ResourceName, item runtime.Obj
requiredSet := quota.ToSet(required)
missingSet := sets.NewString()
for i := range pod.Spec.Containers {
enforcePodContainerConstraints(&pod.Spec.Containers[i], requiredSet, missingSet)
if len(pod.Spec.Containers[i].VolumeMounts) == 0 {
Contributor:

This looks a little verbose and duplicated across the container and init container checks; why can't enforcePodContainerConstraints take care of checking volume mounts, since you already pass containers[i] into it?

Contributor (author):

Yes, but we also need the pod volume info, so I added the pod to the parameters.
Done, thanks

}
for i := range pod.Spec.InitContainers {
enforcePodContainerConstraints(&pod.Spec.InitContainers[i], requiredSet, missingSet)
if len(pod.Spec.InitContainers[i].VolumeMounts) == 0 {
Contributor:

What is the effect of having volumes but no volume mounts in Pod?

Contributor (author):

This PR is confined to the container level.

requests := container.Resources.Requests
limits := container.Resources.Limits
containerUsage := podUsageHelper(requests, limits)
containerSet := quota.ToSet(quota.ResourceNames(containerUsage))
if !containerSet.Equal(requiredSet) {
difference := requiredSet.Difference(containerSet)
missingSet.Insert(difference.List()...)
if podVolume == nil {
Contributor:

Can we simplify the logic to:

if podVolume == nil || !checkStorageScratchLimitedByEmptyDir(podVolume, difference)

// checkStorageScratchLimitedByEmptyDir checks whether the missing resource is storage scratch and if it is set
// by emptydir
func checkStorageScratchLimitedByEmptyDir(podVolume *api.Volume, missingSet sets.String) bool {
//missingSet must be subset of {api.ResourceStorageScratch,api.ResourceRequestsStorageScratch,api.ResourceLimitsStorageScratch}
Contributor:

nit: add space after //

Contributor (author):

Thanks

}
// InitContainers are run sequentially before other containers start, so the highest
// init container resource is compared against the sum of app containers to determine
// the effective usage for both requests and limits.
for i := range pod.Spec.InitContainers {
requests = quota.Max(requests, pod.Spec.InitContainers[i].Resources.Requests)
limits = quota.Max(limits, pod.Spec.InitContainers[i].Resources.Limits)
for _, volumeMount := range pod.Spec.InitContainers[i].VolumeMounts {
Contributor:

Shouldn't you compute request and limit first, before computing quota.Max?

Contributor (author):

Thanks, yes, we need to

@@ -245,13 +324,51 @@ func PodUsageFunc(obj runtime.Object) (api.ResourceList, error) {
for i := range pod.Spec.Containers {
requests = quota.Add(requests, pod.Spec.Containers[i].Resources.Requests)
limits = quota.Add(limits, pod.Spec.Containers[i].Resources.Limits)
for _, volumeMount := range pod.Spec.Containers[i].VolumeMounts {
Contributor:

Use a helper function to remove the duplicated code for containers and init containers.

@NickrenREN (Author) commented Jul 26, 2017

I hope I have addressed all your comments, @ddysher. Thanks.
Ready for another review.

if podVolume.Name == volumeMount.Name {
if checkStorageScratchLimitedByEmptyDir(&podVolume, difference) {
scratchLimitedByEmpty = true
}
Contributor:

Returning directly after scratchLimitedByEmpty = true would be better.

Contributor (author):

Yeah, thanks, done

@guangxuli (Contributor)

/retest

pkg/api/types.go Outdated
@@ -3693,10 +3693,22 @@ const (
ResourceRequestsMemory ResourceName = "requests.memory"
// Storage request, in bytes
ResourceRequestsStorage ResourceName = "requests.storage"
// Local Storage request for overlay filesystem, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024)
// The resource name for ResourceStorageOverlay is alpha and it can change across releases.
ResourceRequestsStorageOverlay ResourceName = "requests.storage.kubernetes.io/overlay"
Contributor:

Modify the comment to match the name.

@jingxu97 (Contributor) commented Jul 27, 2017

@NickrenREN Thank you for the PR. The code is ok, but we have to address the following issues before checking in the code.

  1. Since we are adding a new alpha API, we have to use a feature gate to check whether these APIs are enabled or not.
  2. For quota resources like CPU and memory, the mapping between resource and request/limit is one to one. But local storage is different: scratch space is a node-level resource that can be used by pod-level emptyDir volumes or by a container's overlay (depending on whether there is one root partition or a separate imagefs partition). So it is confusing when we set a quota on scratch space. Should it include only emptyDir volumes, or also a container's overlay? Do admins want fine-grained control over emptyDir and container overlay in Quota, or do they only care about total local storage usage and set one quota for it?

I will open up an issue for discussion #49818

@jingxu97 (Contributor)

The issue for discussing this feature has been opened: #49818

@saad-ali (Member) commented Aug 3, 2017

+@vishh

@NickrenREN (Author)

Feature gate added, @jingxu97.

@timothysc (Member)

/cc @derekwaynecarr

@derekwaynecarr (Member)

There was a meeting to potentially simplify local storage management down to a single resource.

I would like to hold this PR until the design is actually closed.

@derekwaynecarr derekwaynecarr self-assigned this Aug 15, 2017
@derekwaynecarr derekwaynecarr added the do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. label Aug 15, 2017
@k8s-github-robot k8s-github-robot removed the kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API label Aug 24, 2017
@NickrenREN (Author)

/retest

api.ResourceLimitsCPU,
api.ResourceLimitsMemory,
api.ResourceLimitsEphemeralStorage,
Contributor:

Why are there two ResourceLimitsEphemeralStorage entries?

Contributor (author):

Sorry, my bad

Contributor (author):

Modified

@jingxu97 (Contributor)

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 25, 2017
@jingxu97 (Contributor)

/approve

@jingxu97 jingxu97 requested a review from dashpole August 25, 2017 22:40
@jingxu97 (Contributor)

cc @dashpole

@NickrenREN (Author)

Cool, thanks @jingxu97.
In order to make the checks turn green, I added the ephemeral storage resource to the standard resource sets in this PR. This may conflict with the eviction PR.
When the eviction PR gets merged, I will rebase this and ping you again.

@k8s-github-robot k8s-github-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 26, 2017
@NickrenREN (Author)

Rebased. Kindly pinging @jingxu97 @dashpole for review.

@derekwaynecarr derekwaynecarr removed the do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. label Aug 26, 2017
@derekwaynecarr (Member)

This looks good to me now.

/lgtm
/approve no-issue

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 26, 2017
@derekwaynecarr derekwaynecarr added this to the v1.8 milestone Aug 26, 2017
@NickrenREN (Author)

Cool, thanks @derekwaynecarr

@ddysher (Contributor) commented Aug 28, 2017

LGTM. I think I've requested changes before but do not know how to remove the request.

@thockin (Member) commented Aug 29, 2017

/approve no-issue

@k8s-github-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: NickrenREN, derekwaynecarr, jingxu97, thockin

Associated issue requirement bypassed by: derekwaynecarr, thockin

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-github-robot k8s-github-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 29, 2017
@NickrenREN (Author)

/retest

@k8s-github-robot

Automatic merge from submit-queue (batch tested with PRs 50932, 49610, 51312, 51415, 50705)

@k8s-github-robot k8s-github-robot merged commit fe365b8 into kubernetes:master Aug 29, 2017
@NickrenREN NickrenREN deleted the local-isolation branch August 29, 2017 08:34