Introduce Allocable to API Node.Status #13984
Comments
cc/ @kubernetes/goog-node
Allocable values should be changeable during the node life cycle. In the Mesos case, the resources of a slave might change dynamically (technically, when the executor re-registers). It's enough if the values can be patched on the apiserver object from the executor.
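For context, here is a minimal sketch of what patching those values through the apiserver could look like. It assumes today's client-go API, which did not exist in this form when this issue was filed; the node name and quantities are placeholders.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (placeholder setup).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch against the node's status subresource,
	// updating only the allocatable values.
	patch := []byte(`{"status":{"allocatable":{"cpu":"3500m","memory":"14Gi"}}}`)
	node, err := cs.CoreV1().Nodes().Patch(
		context.TODO(), "slave-node-1", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
	fmt.Println("allocatable:", node.Status.Allocatable)
}
```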
As long as the config is settable for the initial launch of a new kubelet and updatable at runtime, I'm not sure the method is super important.
The primary operator goal here is that I should be able to eliminate the need to run a static pod for resource reservation; the kubelet should support a dynamic resource reservation model for incompressible resources like memory and disk. For things like CPU, I know we have issues where CPU usage spikes as the number of pods on the node increases, but I am less concerned about that in the near term. I need to take a deeper look tomorrow, but I recall there are open issues to resolve around how we re-parent system daemons when running in a systemd environment.

Open question: if/when we re-parent all containers into a common cgroup based on QoS tier, do you have any thoughts on differentiating allocable by QoS tier at all?
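As a rough illustration of the dynamic reservation model described above, the sketch below computes an allocatable view as capacity minus a per-resource reservation. The specific reservation values, and the idea of feeding them in via kubelet reservation flags, are assumptions for illustration rather than part of this proposal.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// subtractReservation returns capacity minus reserved for each resource,
// clamped at zero so a large reservation cannot yield a negative value.
func subtractReservation(capacity, reserved map[string]resource.Quantity) map[string]resource.Quantity {
	allocatable := map[string]resource.Quantity{}
	for name, c := range capacity {
		value := c.DeepCopy()
		if r, ok := reserved[name]; ok {
			value.Sub(r)
		}
		if value.Sign() < 0 {
			value = resource.MustParse("0")
		}
		allocatable[name] = value
	}
	return allocatable
}

func main() {
	capacity := map[string]resource.Quantity{
		"cpu":    resource.MustParse("4"),
		"memory": resource.MustParse("16Gi"),
	}
	// Hypothetical reservation for system daemons and the kubelet itself.
	reserved := map[string]resource.Quantity{
		"cpu":    resource.MustParse("500m"),
		"memory": resource.MustParse("2Gi"),
	}
	for name, q := range subtractReservation(capacity, reserved) {
		fmt.Printf("%s: %s\n", name, q.String())
	}
}
```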
I'm not sure I understand the question. Are you proposing having different reservations at different QoS tiers? I don't see how that would work, since the kubelet doesn't control what is running in the reserved portions.
Kubelet can auto-detect systemd deployments and avoid re-parenting system daemons.
Are you referring to a per-QoS-class quota? If the node exposes detailed usage information, the policy around how the resources are distributed across QoS classes can probably be managed in higher layers.
@vishh - makes sense.
I am closing this one. We are going to measure once the release is cut and decide the values for those flags. |
Currently Node.Status has Capacity, but no concept of Machine Allocable, which would serve several purposes:
I proposed
cc/ @bgrant0607 @davidopp @sttts @karlkfi @vishh
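A minimal sketch of the proposed shape of the field, using placeholder types for brevity; in the real API these types live in k8s.io/api/core/v1 and use resource.Quantity, and the field that eventually shipped is spelled "Allocatable".

```go
package api

// ResourceName identifies a resource such as "cpu" or "memory".
type ResourceName string

// Quantity is a stand-in for resource.Quantity in this sketch.
type Quantity string

// ResourceList maps resource names to amounts.
type ResourceList map[ResourceName]Quantity

// NodeStatus as proposed: total Capacity plus an Allocable view of it.
type NodeStatus struct {
	// Capacity is the total amount of each resource on the machine.
	Capacity ResourceList `json:"capacity,omitempty"`
	// Allocable is the portion of Capacity available for scheduling pods,
	// i.e. Capacity minus system and kubelet reservations.
	Allocable ResourceList `json:"allocable,omitempty"`
	// ... other existing fields (Conditions, NodeInfo, etc.)
}
```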