In-place rolling updates #9043
Comments
I agree we'll eventually want this, but... There are very few in-place, non-disruptive updates that we can actually do right now. For instance, Docker doesn't support resource changes, so we'll need to work around that using cgroup_parent. Resource changes will also complicate admission control in kubelet. Decoupling the lifetime of local storage from the pod is discussed in #7562 and #598, which would reduce the number of circumstances where in-place updates would be required.
This would be wildly valuable, specifically for Kubernetes + Elasticsearch, where data nodes really shouldn't be losing data.
It seems #6099 is also talking about this.
What's the difference between this proposal and "kubectl replace/patch"?
@kubernetes/huawei
From https://blog.docker.com/2016/02/docker-1-10/, on live-updating container resource constraints: "When setting limits on what resources containers can use (e.g. memory usage), you had to restart the container to change them. You can now update these resource constraints on the fly with the new docker update command."
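For reference, a live resize with that command looks roughly like this (container name and values are just examples):

```sh
# Raise the memory limit and CPU shares of a running container in place.
# If the container was started with a swap limit, --memory-swap usually has
# to be raised alongside --memory.
docker update --memory 1g --memory-swap 2g --cpu-shares 1024 my-container
```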
An /inplaceupdate subresource on Pod might be a good way to express in-place pod updates (resources as well as container image).
Mentioned in #13488 is an important use case for me: updating Secrets. Specifically, if we have a load balancer referencing certificates in a Secret, we want to refresh those certificates in a rolling fashion across all pods in the Deployment, etc.
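For context, when the Secret is consumed as a volume (not via subPath or env vars), refreshing the certificate is mostly a matter of updating the Secret object; the kubelet syncs the mounted files into running pods after its sync period, though the load balancer process still has to re-read them. A rough sketch, with placeholder names:

```sh
# Re-create the TLS Secret with the renewed certificate; copies mounted as
# volumes in running pods are refreshed by the kubelet without a restart
# (subPath mounts and env vars do not pick up the change).
kubectl create secret tls lb-certs \
  --cert=renewed.crt --key=renewed.key \
  --dry-run=client -o yaml | kubectl apply -f -
```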
@davidopp May I ask what benefit we get from an in-place rolling update? Is it about scheduling? IIRC, if we update pod.spec, the kubelet will sync the pod. So what we really need is just to update the spec of a running pod; the kubelet will watch the change and do all the remaining work. Then why do we need an /inplaceupdate subresource on Pod? Is it because /inplaceupdate will update the spec as well as the status, so we can wait for the kubelet to sync the new spec and report the latest status?
So, any update on this feature request?
@xiaods We have started a new, merged design KEP and I'm working on it. This has taken me longer than I had initially hoped because some other tasks took priority. I'll be updating the new KEP document with my suggested flow-control mechanism from that discussion and pushing it out for review in a day or two.
@vinaykul Yes, there are some very complicated concerns around how to implement in-place updates. I will look for the review docs. Thanks a lot.
@vinaykul Is there any quick "hack" to resize a particular container without killing it? Something like disabling the scheduler for a short while, performing "docker update", doing whatever I want to do (in my case, some benchmarks), and then turning the scheduler back on (I don't care if the scheduler kills it after I've finished my task). I need something like this for a specific resource-tuning project.
We have an implementation for v1.11 based on our original design. You may have to port it to the version you're interested in, and it may need bug fixes; we are no longer maintaining it, as the design currently being considered for upstream is significantly different. Hope this helps.
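For the quick-hack route you described, something along these lines works on a node running the Docker runtime (node and container names are placeholders; the kubelet and scheduler don't know about the change, so treat it as a temporary, best-effort tweak):

```sh
# Keep new pods off the node while experimenting.
kubectl cordon my-node

# Find the Docker container ID for the pod's container on that node, then
# bump its limits directly with Docker.
docker ps --filter "name=k8s_app_my-pod" --format '{{.ID}} {{.Names}}'
docker update --cpu-shares 2048 --memory 2g --memory-swap 2g <container-id>

# Re-enable scheduling onto the node when done.
kubectl uncordon my-node
```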
@vinaykul Very nice. Just two more questions: Does this in-place resizing preserve the QoS classes (Guaranteed, Burstable, BestEffort)? Also, is it compatible with the static CPU manager policy?
Resizing applies to the Guaranteed and Burstable QoS classes, and changing the QoS class of a running Pod is not allowed. That code does not check the CPU manager policy, so it would incorrectly apply non-integral requests on nodes using the static policy.
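A quick way to see which class a pod landed in before attempting a resize (pod name is a placeholder):

```sh
# Print the QoS class the API server assigned to the pod: requests == limits
# for every container => Guaranteed; requests below limits => Burstable.
kubectl get pod my-pod -o jsonpath='{.status.qosClass}'
```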
@bgrant0607 @davidopp @kow3ns Is this still in the plans?
Yes. @thockin reviewed and approved the KEP, and I'm working on resolving a couple of changes requested in the KubeCon API review session and fleshing out the test plans and GA criteria sections, in order to get the KEP into the implementable stage before the Jan 28 date for the 1.18 release.
@vinaykul What is the status of in-place Pod resource changes? We have requirements similar to yours. Hope you can give us some info.
We are revisiting the API changes that we previously thought were good. For details, please see kubernetes/enhancements#1883. I hope to find some time in the coming weeks to refactor PR #92127 as per the above discussion and check the implementation for robustness (I'm about halfway done, but I have some higher-priority work).
OpenKruise (https://openkruise.io/en-us/index.html) offers a well-documented solution through the use of CustomResourceDefinitions. It would be valuable to include these controllers in vanilla Kubernetes, since they seem to resolve the challenges mentioned above.
@vinaykul Hi, it seems Kruise handles updating containers in place, but not a container's requests & limits.
Yeah, OpenKruise v1.0 now supports in-place updates of images and env/command/args (via the Downward API) (https://openkruise.io/docs/core-concepts/inplace-update), but it cannot modify requests & limits, which would break the logic of the scheduler and kubelet.
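Worth noting that for a plain Pod object, the container image field is already mutable, and the kubelet restarts just that container in place rather than recreating the pod. For example (pod/container names are placeholders):

```sh
# Update a single container's image on a running pod; the kubelet restarts
# only that container in place, keeping the pod (and e.g. its emptyDir
# volumes) where it is.
kubectl patch pod my-pod \
  -p '{"spec":{"containers":[{"name":"app","image":"nginx:1.25"}]}}'
```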
FYI, PR #102884 tries to implement in-place Pod vertical scaling, which may allow changing a pod's requests & limits.
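If that lands, a resize would presumably be expressed as a patch to a running pod's container resources, something along these lines (illustrative sketch only; today the API server rejects such a change, and the exact surface, e.g. whether it goes through a dedicated subresource, depends on the final design):

```sh
# Sketch only: grow CPU for one container of a running pod without
# recreating it. Pod/container names and values are placeholders.
kubectl patch pod my-pod --patch '
spec:
  containers:
  - name: app
    resources:
      requests:
        cpu: 800m
      limits:
        cpu: "1"
'
```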
/assign vinaykul |
Is there any hack we can use to read updated ConfigMap values without restarting a pod?
Doesn't a ConfigMap volume already do that (if you're not using subPath)?
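To spell out the caveat: keys projected through a ConfigMap volume are refreshed in the running container after the kubelet's sync period, but values consumed via subPath mounts or as environment variables are not. A minimal example of the layout that does get live updates (names are placeholders):

```sh
# Pod that mounts the whole ConfigMap as a volume (no subPath), so edits to
# the ConfigMap show up under /etc/app-config after the kubelet's sync
# period, without restarting the pod.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: app-config
      mountPath: /etc/app-config
  volumes:
  - name: app-config
    configMap:
      name: app-config
EOF
```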
No, it's not working.
Our current rolling update scheme requires deleting a pod and creating a replacement. We should also have rolling "in-place" updates, of both containers and pods. There are two kinds of in-place update.
The motivation is that there's no reason to kill the pod if you don't need to. The user may have local data stored in it that they don't want blown away by an update.