
Allow deployment to update pod labels #14215

Closed
wants to merge 1 commit

Conversation

nikhiljindal
Contributor

Ref #1743

Right now, we enforce that PodTemplateSpec.Labels match Deployment.Spec.Selector, i.e. the new pods must have the same labels as the old pods that the deployment selects.
This PR removes that restriction.
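The rule being lifted can be sketched in Go. This is a simplified, hypothetical illustration of the validation check described above (the field names follow the real API, but `validateTemplateLabels` is not the actual validation code in the repository):

```go
package main

import "fmt"

// validateTemplateLabels is a hypothetical sketch of the check this PR
// removes: every key/value pair in the Deployment's selector must also
// appear in PodTemplateSpec.Labels, so the Deployment always selects
// the pods its own template produces.
func validateTemplateLabels(selector, templateLabels map[string]string) error {
	for k, v := range selector {
		if templateLabels[k] != v {
			return fmt.Errorf("selector %s=%s does not match template labels", k, v)
		}
	}
	return nil
}

func main() {
	sel := map[string]string{"app": "web"}
	// Extra template labels are fine; only the selector's keys are checked.
	fmt.Println(validateTemplateLabels(sel, map[string]string{"app": "web", "tier": "ga"})) // <nil>
	// A mismatched value fails validation.
	fmt.Println(validateTemplateLabels(sel, map[string]string{"app": "api"})) // non-nil error
}
```

With the restriction removed, the second case would be accepted, letting a rollout change pod labels.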

@nikhiljindal
Contributor Author

cc @ghodss @ironcladlou

@k8s-github-robot k8s-github-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Sep 19, 2015
@k8s-github-robot

Labelling this PR as size/XS

@k8s-bot

k8s-bot commented Sep 19, 2015

GCE e2e build/test passed for commit 8e94a07.

@ghodss
Contributor

ghodss commented Sep 19, 2015

What's the use case for having them be different?

@nikhiljindal
Contributor Author

I sent this as a separate PR to get everyone's view on whether we should allow users to change labels while they are deploying a new RC.
For example: to change the label from tier:beta to tier:ga when they update the image from 0.9 to 1.0.

cc @bgrant0607 @smarterclayton

@ghodss
Contributor

ghodss commented Sep 21, 2015

Wouldn't that prevent the deployment object from being able to select the right pods to scale down the previous deployment?

Although, if that's the case, how would someone actually change the labels of an existing deployment object and its pods?

@nikhiljindal
Contributor Author

Deployment.Selector will determine the old pods that should be scaled down.
Deployment.Spec.PodTemplateSpec will determine the RC that should be scaled up.

Deployment.Selector does not need to be the same as Deployment.Spec.PodTemplateSpec.Labels.

Changing the labels of a deployment will have no effect on any other RC or pod.
With this change, we will allow users to create a deployment that updates the labels of a pod (which is the use case I am asking whether we should support).

@ghodss
Contributor

ghodss commented Sep 21, 2015

If Deployment.Selector is not the same as Deployment.Spec.PodTemplateSpec.Labels, does that leave open the possibility of a runaway deployment controller that creates infinite replication controllers because they never match the deployment object? Perhaps they don't need to be equal, but they must at least match?
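The "equal vs. match" distinction can be illustrated with a small sketch (a hypothetical helper, not Kubernetes library code): an equality-based selector matches any label set that contains it as a subset, so the pod labels may carry extra keys without breaking selection.

```go
package main

import "fmt"

// matches reports whether every key/value pair in the selector is
// present in the labels. This mirrors, in simplified form, how an
// equality-based Kubernetes label selector picks pods: the selector
// must be a subset of the labels, not equal to them.
func matches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"a": "b"}
	extra := map[string]string{"a": "b", "shard": "1"}
	other := map[string]string{"a": "c"}
	fmt.Println(matches(selector, extra)) // true: extra keys are fine
	fmt.Println(matches(selector, other)) // false: value differs
}
```

Under this reading, requiring the template labels to *match* (be a superset of) the selector would still prevent the controller from creating RCs it can never select, without requiring the two label sets to be equal.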

@k8s-bot

k8s-bot commented Sep 21, 2015

GCE e2e build/test passed for commit 8e94a07.

@smarterclayton
Contributor

I think updating the pod labels is a valid use case - we've actually had it come up several times. I have a deployment Foo that uses label a=b. I want to shard Foo. I edit and deploy Foo to use label a=b,shard=1, then create Bar, which uses label a=b,shard=2. So I see editing pod labels over the lifetime of a deployment as required, unless we want people to delete their existing deployments and recreate them (which seems bad).
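The sharding scenario above can be walked through with a short sketch (a hypothetical subset-match helper; "Foo" and "Bar" come from the comment, the code itself is only an illustration). After relabeling, the shard selectors partition the pods while the original a=b selector still covers all of them:

```go
package main

import "fmt"

// selects reports whether an equality-based selector picks a label set:
// the selector must be a subset of the labels.
func selects(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	fooPod := map[string]string{"a": "b", "shard": "1"} // Foo's pods after relabeling
	barPod := map[string]string{"a": "b", "shard": "2"} // Bar's pods, the new shard

	shard1 := map[string]string{"a": "b", "shard": "1"}
	all := map[string]string{"a": "b"}

	fmt.Println(selects(shard1, fooPod), selects(shard1, barPod)) // true false
	fmt.Println(selects(all, fooPod), selects(all, barPod))       // true true
}
```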

@bgrant0607
Member

I don't understand the logic behind this change. The selector MUST match the labels in the pod template. The pod template labels can either be changed in a way that does not alter that property, or the selector can be changed to match the new labels. In the latter case, the question then is what Deployment does about the "orphaned" RCs and pods, and how it finds them.

@bgrant0607
Member

Discussed offline: This is not the right way to solve this problem.

@bgrant0607 bgrant0607 closed this Sep 24, 2015
@nikhiljindal
Contributor Author

#14894 details a possible solution

7 participants