When a deployment duplicate is created containing a different label, it leads to an orphan ReplicaSet #45770
Comments
When you say "I had a deployment with label A and I went ahead and tried to replace it by introducing a change to its yaml (basically changing it to use label B)" what labels do you mean?
You probably want #36897
@kubernetes/sig-apps-misc
Number 3: pod template labels (d.spec.template.metadata.labels).
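(For reference, a Deployment manifest carries labels in three places; the answer above refers to the third. The following is an illustrative sketch only; the name my-app and the nginx image are placeholders, not taken from this issue.)

```yaml
# Illustrative Deployment sketch showing the three label locations
# (name and image are placeholders, not from this issue).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:            # 1. labels on the Deployment object itself
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:     # 2. the Deployment's selector
      app: my-app
  template:
    metadata:
      labels:        # 3. pod template labels (d.spec.template.metadata.labels)
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx
```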
Hm, I'm not sure if #36897 addresses this issue, as the actual problem is that, instead of relabelling the existing RS (ReplicaSet) once the deployment is relabelled, it actually creates 2 pods and 2 RSs, and when we delete the deployment it gets rid of all pods but deletes only one of the RSs.
You can't just change the pod template labels: once they mismatch the Deployment's selector, we return a validation error. Can you post a reproducer of what you are seeing, with the exact steps you followed? I just tried to reproduce the case where "when we delete the deployment it gets rid of all pods but it deletes only one of the RSs" without any luck; I see the old RS with its Pods untouched. This behavior is expected: a Deployment selects its ReplicaSets based on its selector, so if your selector changes in a non-overlapping way (i.e., the new selector is not a superset of the old selector) then you will end up orphaning all of your previous ReplicaSets. We still need to document that (kubernetes/website#1938).
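(To illustrate the non-overlapping selector case described above, here is a before/after sketch with only the relevant fields shown; the app: a and app: b values are placeholders. Once the selector is replaced with one that no longer matches the old ReplicaSet's labels, that ReplicaSet is no longer selected by the Deployment and is left behind.)

```yaml
# Before the change (sketch, relevant fields only)
spec:
  selector:
    matchLabels:
      app: a
  template:
    metadata:
      labels:
        app: a
---
# After the change (sketch): "app: b" does not overlap with "app: a",
# so the ReplicaSet created for "app: a" is orphaned rather than adopted.
spec:
  selector:
    matchLabels:
      app: b
  template:
    metadata:
      labels:
        app: b
```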
Right, so it all comes down to the selector. Thanks for clarifying.
BUG REPORT:
Kubernetes version:
Environment:
Kernel (e.g. uname -a):
What happened:
From #24137 (comment)
I had a deployment with label A, and I went ahead and tried to replace it by introducing a change to its yaml (basically changing it to use label B). The deployment object was replaced successfully, but the old pod and the old replica set were still lingering.
After I deleted the deployment, a replica set was still lingering there (no longer connected to a deployment, i.e., an orphan ReplicaSet).
What you expected to happen:
I was expecting all pods and replica sets to be deleted after deleting the deployment.
How to reproduce it (as minimally and precisely as possible):
Basically we need a deployment: any sample deployment yaml that you can create with "kubectl create -f" (which will naturally create its respective ReplicaSet and Pod(s)); a minimal sketch is given after these steps. The trick is to edit the yaml of the existing deployment and change its label. Once the label is modified, run "kubectl replace -f <yaml_file>" and then check the result: run "kubectl get pods" and you will see two pods in the list, a new pod with the new label while the pod with the old label lingers.
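For a concrete starting point, here is a minimal sketch of such a deployment yaml (the name label-demo, the label values label-a/label-b, and the nginx image are assumptions for the example, not taken from the original report). Note that with the apps/v1 API the selector is immutable after creation, so the replace step may be rejected there; this issue appears to predate that restriction.

```yaml
# Minimal sample deployment for the reproduction steps above
# (all names, labels, and the image are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: label-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: label-a   # change this to label-b ...
  template:
    metadata:
      labels:
        app: label-a # ... and this, then run "kubectl replace -f" on the file
    spec:
      containers:
      - name: label-demo
        image: nginx
```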
Note that we have two ReplicaSets here but only one deployment.
Even after deleting the deployment we can still see the old replica set there, i.e., an orphan ReplicaSet.
Anything else we need to know: