Allow Users to change labels using deployment #14894
Comments
@bgrant0607 A clarification: the knob that decides what happens to an orphaned RC (left as-is or scaled down), does it live on the RC or on the deployment that orphaned it?
Will comment later. This won't go into 1.1.
Yes. Sorry, I should have mentioned that.
The knob to control the result after orphaning should be on the deployment, I think, because a) it's centralized and can be changed in one place (likely the user's intent) and b) if you want to control it on a per-RC basis, you can already do so in the proposal by clearing the ref from the RC to the deployment (at which point the deployment policy no longer applies).
One weird thing to me is that by changing the label, you get different deployment behavior: I've configured a deployment for rolling updates, and I need to change a label in the selector. The deployment for the label change won't be rolling and could violate my availability requirements (the new one is scaled up right away, the old one is GC'd somehow by another controller later). Maybe I'm missing the use cases for label changes. Would it be surprising that the deployment mechanics will be totally different when changing label values in the selector?
controllerRef proposal: #2210 (comment)
Overall controller pod management: #14961
I don't think we need a different controller to do deletion of orphaned pods.
@ironcladlou has a good point. The main use case is adding labels, and updating the selector to match, probably because the user wants to differentiate the deployment from another they plan to create. The "simple rolling update" implementation does this to add a deployment label, for instance. Adding a second shard is another possible scenario. What the user probably wants is for the new selector to be used for the new RC, but the old ones to be spun down as with any rolling update. If the template labels and selector are the only changes, we could eventually do an in-place update #9043, like in "simple rolling update".
Isn't it a different logical controller loop? We need to reconcile the set
See also #36897 |
What is described in the initial post is already implemented: once a Deployment no longer matches a set of ReplicaSets because it has a different selector, those ReplicaSets are released (owner refs are removed). If there is anything else we need to do on this issue, please reopen it.
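For readers landing here later, the release described above is visible on the ReplicaSet object itself. A minimal sketch of an adopted ReplicaSet follows; the names, UID, and image are placeholders, not taken from this issue:

```yaml
# Illustrative sketch only (placeholder names/UID). An adopted ReplicaSet carries an
# ownerReferences entry pointing at its Deployment; when the Deployment's selector no
# longer matches the ReplicaSet, that entry is removed and the ReplicaSet is released.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-rs
  labels:
    app: example
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
    uid: 00000000-0000-0000-0000-000000000000   # placeholder UID
    controller: true
    blockOwnerDeletion: true
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: example:1.0   # placeholder image
```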
Based on discussion with @bgrant0607
Problem:
Deployment.Spec.Selector is used to select old pods (which are scaled down) and is also used as the selector for the new RC (that is scaled up).
Hence, labels on both old and new pods must match Deployment.Spec.Selector. As a result, it is not possible to change the value of a label that is part of Deployment.Spec.Selector. It is, however, possible to add a label to the new pods.
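To make the coupling concrete, here is a minimal illustrative Deployment (all names and the image are made up): the pod template labels have to satisfy spec.selector, and the same selector is used to find the old pods, so the value of a selector label such as `track` cannot simply be changed in place.

```yaml
# Illustrative Deployment (made-up names): the template labels must satisfy
# spec.selector, and the same selector is used to find the old pods, so the value
# of a selector label like "track" cannot simply be changed in place.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
      track: stable        # changing this value is the case this issue is about
  template:
    metadata:
      labels:
        app: example
        track: stable      # must match spec.selector
    spec:
      containers:
      - name: example
        image: example:1.0 # placeholder image
```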
Possible solution:
Introduce ObjectRef to keep track of parent objects (RCs will hold a ref to their Deployment) and allow RCs to be orphaned. Users get to choose what to do with orphaned RCs: they can be left as-is or can be scaled down and deleted.
When a user wants to change a label value, they change Deployment.Spec.Selector to match the new pod labels, so that the deployment creates a new RC and scales it up. A different controller will handle orphaned RCs (an RC is orphaned if it points to a deployment via its ObjectRef, but the Deployment's selector does not target that RC).
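None of the fields below existed as proposed; this is a loose sketch of the shape the proposal implies, with the parent ref on the RC and the orphan-handling knob on the Deployment shown as comments and purely hypothetical, just to make the flow concrete:

```yaml
# Loose sketch of the proposed flow (hypothetical fields shown only as comments).
# The old RC no longer matches the Deployment's changed selector and becomes orphaned;
# the Deployment-side policy would decide whether it is kept or scaled down and deleted.
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-rc-old
  # objectRef:                     # hypothetical parent reference from the proposal
  #   kind: Deployment
  #   name: example-deployment
spec:
  replicas: 3
  selector:
    app: example
    track: stable                  # no longer matched after the selector change below
  template:
    metadata:
      labels:
        app: example
        track: stable
    spec:
      containers:
      - name: example
        image: example:1.0         # placeholder image
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example-deployment
spec:
  # orphanPolicy: ScaleDownAndDelete   # hypothetical knob for handling orphaned RCs
  selector:
    matchLabels:
      app: example
      track: canary                # changed value; a new RC is created for this selector
  template:
    metadata:
      labels:
        app: example
        track: canary
    spec:
      containers:
      - name: example
        image: example:1.0         # placeholder image
```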
@ghodss @ironcladlou @smarterclayton