Allow Users to change labels using deployment #14894

Closed
nikhiljindal opened this issue Oct 1, 2015 · 10 comments
Labels
area/workload-api/deployment area/workload-api/replicaset priority/backlog Higher priority than priority/awaiting-more-evidence. sig/apps Categorizes an issue or PR as relevant to SIG Apps.

Comments

@nikhiljindal
Contributor

Based on discussion with @bgrant0607

Problem:
Deployment.Spec.Selector is used to select old pods (which are scaled down) and is also used as the selector for the new RC (that is scaled up).
Hence labels on both old and new pods must match Deployment.Spec.Selector. As a result, it is not possible to change the value of a label that is part of Deployment.Spec.Selector. It is however possible to add a label to the new pod.
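
For illustration, a minimal manifest (written here in today's apps/v1 form; the names and the `app` label are placeholders) showing why: the selector must match both the old and the new pods, so the value of `app` is pinned.

```yaml
# Sketch only: "nginx" and the app label are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx        # selects both old and new pods
  template:
    metadata:
      labels:
        app: nginx      # must match the selector, so its value cannot change;
                        # additional labels can still be added here
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
```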

Possible solution:
Introduce ObjectRef to keep track of parent objects (RCs will reference their Deployments) and allow RCs to be orphaned. Users get to choose what to do with orphaned RCs - they can be left as is or can be scaled down and deleted.
When a user wants to change a label value, they change DeploymentSpec.Selector to match the new pod labels, so that the deployment creates a new RC and scales it up. A different controller will handle orphaned RCs (an RC is orphaned if it points to a deployment via its ObjectRef, but the Deployment's selector does not target that RC).
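
A sketch of what the parent ref could look like on an RC (written in the ownerReferences shape that controllerRef eventually took in the API; the names and UID are placeholders):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-3141592                           # placeholder name
  ownerReferences:                              # the proposed ref back to the parent
  - apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx
    uid: 00000000-0000-0000-0000-000000000000   # placeholder UID
    controller: true
```

Under this scheme, the RC above becomes orphaned when this ref still points at the Deployment but the Deployment's selector no longer targets the RC.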

@ghodss @ironcladlou @smarterclayton

@nikhiljindal
Contributor Author

@bgrant0607 A clarification: does the knob that decides what happens to an orphaned RC (left as is, or scaled down) live with the RC or with the deployment that orphaned it?

@bgrant0607 bgrant0607 added priority/backlog Higher priority than priority/awaiting-more-evidence. team/ux labels Oct 1, 2015
@bgrant0607
Member

Will comment later. This won't go into 1.1.

@nikhiljindal
Contributor Author

Yes. Sorry, I should have mentioned that.

@smarterclayton
Contributor

I think the knob to control the result after orphaning should be on the deployment, because (a) it's centralized and can be changed in one place (likely the user intent), and (b) if you want to control it on a per-RC basis, the proposal already lets you do so by clearing the ref from the RC to the deployment (at which point the deployment policy no longer applies).
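
As a sketch of what (a) and (b) could look like (the policy field name here is invented purely for illustration and was never a concrete API):

```yaml
# (a) Hypothetical centralized knob on the Deployment (field name invented
#     for illustration; not a real API field):
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  orphanedRCPolicy: ScaleDownAndDelete   # or: LeaveAsIs
---
# (b) Per-RC opt-out: the user clears the ref from the RC to the deployment,
#     after which the deployment-level policy no longer applies to this RC.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-3141592                    # placeholder name
  ownerReferences: []                    # ref removed by the user
```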

@ironcladlou
Contributor

One weird thing to me is that by changing the label, you get different deployment behavior: I've configured a deployment for rolling updates, and I need to change a label in the selector. The deployment for the label change won't be rolling and could violate my availability requirements (the new RC is scaled up right away, and the old one is GC'd somehow by another controller later).

Maybe I'm missing the use cases for label changes. Would it be surprising that the deployment mechanics will be totally different when changing label values in the selector?

@bgrant0607 bgrant0607 added kind/feature Categorizes issue or PR as related to a new feature. kind/enhancement and removed kind/feature Categorizes issue or PR as related to a new feature. labels Oct 1, 2015
@bgrant0607
Member

controllerRef proposal: #2210 (comment)

Overall controller pod management: #14961

I don't think we need a different controller to do deletion of orphaned pods.

@bgrant0607
Member

@ironcladlou has a good point.

The main use case is adding labels, and adding them to the selector to match, probably because the user wants to differentiate the deployment from another they plan to create. The "simple rolling update" implementation does this to add a deployment label, for instance. Adding a 2nd shard is another possible scenario.

What the user probably wants is the new selector to be used for the new RC, but the old ones to be spun down as with any rolling update.

If the template labels and selector are the only changes, we could eventually do an in-place update #9043, as in "simple rolling update".
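
Concretely, the intended flow might look like this (fragments only, with placeholder labels): the user changes the template labels and the selector together, and the deployment rolls to a new RC under the new selector as with any rolling update.

```yaml
# Before (fragment): selector and template agree on track=stable.
spec:
  selector:
    matchLabels:
      app: nginx
      track: stable
  template:
    metadata:
      labels:
        app: nginx
        track: stable
---
# After (fragment): both changed in one update; the desired behavior is a
# normal rolling update to a new RC selected by track=canary.
spec:
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary
```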

@smarterclayton
Contributor

Isn't it a different logical controller loop? We need to reconcile the set of RCs or Pods to the set of valid Deployments instead of vice versa?


@bgrant0607 bgrant0607 added sig/apps Categorizes an issue or PR as relevant to SIG Apps. and removed team/ux (deprecated - do not use) labels Feb 24, 2017
@bgrant0607
Member

See also #36897

@0xmichalis
Contributor

What is described in the initial post is already implemented: once a Deployment no longer matches a set of ReplicaSets because it has a different selector, those ReplicaSets are released (their owner refs are removed). If there is anything else we need to do on this issue, please reopen it.
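
For example, a released ReplicaSet (placeholder names; shown in the modern apps/v1 form) simply loses its back-reference to the Deployment:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-3141592     # placeholder name
  labels:
    app: nginx
    track: stable         # no longer matched by the Deployment's selector
  # ownerReferences: the entry pointing at the Deployment has been removed,
  # so this ReplicaSet is released and left as is
```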
