[Kubelet] Improving QOS in kubelet by introducing QoS level Cgroups - --cgroups-per-qos
#27853
Conversation
// QOSContainersInfo holds the names of the top-level containers per QoS class
type QOSContainersInfo struct {
Question: Why is this required?
I thought that this would be good to have. Once the top-level QoS containers are created they won't change, so we can store their names in the container manager and use them to get the absolute names of the pod containers.
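To make that concrete, here is a minimal sketch of what such a struct and lookup could look like. The field names, the CgroupName alias, and the absolutePodCgroupName helper are illustrative assumptions, not the PR's actual code:

```go
package cm

import "path"

// CgroupName is assumed here to be the absolute name of a cgroup,
// e.g. "/Burstable" (a hypothetical alias, for illustration only).
type CgroupName string

// QOSContainersInfo holds the names of the top-level QoS cgroups.
// They are created once at container-manager startup and never change,
// so they can be cached and reused for name resolution.
type QOSContainersInfo struct {
	Guaranteed CgroupName
	Burstable  CgroupName
	BestEffort CgroupName
}

// absolutePodCgroupName is a hypothetical helper showing how the stored
// names could be joined with a pod identifier to produce the absolute
// name of that pod's cgroup.
func absolutePodCgroupName(info QOSContainersInfo, qos string, podUID string) string {
	parent := info.Guaranteed
	switch qos {
	case "Burstable":
		parent = info.Burstable
	case "BestEffort":
		parent = info.BestEffort
	}
	return path.Join(string(parent), "pod-"+podUID)
}
```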
I added a few comments. Ping me back once those comments are addressed.
Force-pushed from 274b013 to 37093d9.
@vishh I added the e2e test, but I am not sure how to access the host node's cgroup fs information from inside the container. Is there a hack to achieve that? I have addressed all your other comments except changing the default of cgroup-root to "/".
You can mount it in.
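For anyone landing here later: the usual trick is a hostPath volume. Below is a hedged sketch of an e2e-style test pod that mounts the node's cgroup filesystem read-only; the image, mount paths, and names are illustrative and not necessarily what this PR's test uses:

```go
package e2e

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cgroupProbePod returns a pod that exposes the host's cgroup hierarchy
// at /rootfs/sys/fs/cgroup, so the test container can inspect the
// QoS-level cgroups created by the kubelet on the node.
func cgroupProbePod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cgroup-probe"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "probe",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /rootfs/sys/fs/cgroup && sleep 3600"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "host-cgroups",
					MountPath: "/rootfs/sys/fs/cgroup",
					ReadOnly:  true,
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "host-cgroups",
				VolumeSource: v1.VolumeSource{
					// Mount the node's cgroup filesystem into the container.
					HostPath: &v1.HostPathVolumeSource{Path: "/sys/fs/cgroup"},
				},
			}},
		},
	}
}
```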
// Enable QoS-based cgroup hierarchy: top-level cgroups for QoS classes,
// and all pods are brought up under a top-level QoS cgroup
// based on the QoS class they belong to.
EnableQosCgroups bool `json:"enablePodCgroups,omitempty"`
see mismatch between var name and json.
Ack! Thanks!
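One plausible fix for the flagged mismatch is simply aligning the json tag with the Go field name, shown here in a stand-in struct (the exact spelling actually chosen in the PR may differ):

```go
type KubeletConfiguration struct {
	// EnableQosCgroups enables the QoS-based cgroup hierarchy: top-level
	// cgroups for QoS classes, with every pod brought up under the cgroup
	// of the QoS class it belongs to.
	EnableQosCgroups bool `json:"enableQosCgroups,omitempty"`
}
```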
Force-pushed from 37093d9 to 7e6241f.
Force-pushed from db4b9c6 to 107f2f5.
Force-pushed from cb4c129 to 227f9b9.
Force-pushed from 227f9b9 to e7183f2.
Force-pushed from e7183f2 to d0ee9ca.
Force-pushed from d0ee9ca to 012385f.
@@ -1,300 +0,0 @@
// +build !ignore_autogenerated
Why is this file being deleted?
I think a PR went in that added the auto code generation to the build system, so this generated file no longer needs to be checked in.
Force-pushed from 012385f to ee329aa.
Force-pushed from ee329aa to 5000e74.
Adding LGTM after rebase.
GCE e2e build/test passed for commit 5000e74.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit 5000e74.
Automatic merge from submit-queue
This PR is tied to upstream issue #27204.
Please note that only the last commit is unique to this PR; the first two commits are from previous PRs.
It introduces a new flag in the kubelet that specifies whether the user wants to use the QoS cgroup hierarchy.
cc @kubernetes/sig-node
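As a rough sketch of how such a flag is typically wired up in the kubelet with pflag — the struct and field names below are assumptions; only the --cgroups-per-qos flag name comes from the PR title:

```go
package options

import "github.com/spf13/pflag"

// KubeletFlags is a stand-in for the kubelet's option struct.
type KubeletFlags struct {
	CgroupsPerQOS bool // assumed field name for the new setting
}

// AddFlags registers the new flag on the kubelet's flag set.
func (f *KubeletFlags) AddFlags(fs *pflag.FlagSet) {
	fs.BoolVar(&f.CgroupsPerQOS, "cgroups-per-qos", false,
		"Enable creation of QoS-level cgroups; pods are brought up under a top-level cgroup for their QoS class.")
}
```

With this wiring, running the kubelet with --cgroups-per-qos=true would opt in to the new hierarchy while leaving the default behavior unchanged.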