
When a deployment duplicate is created containing a different label, it leads to an orphan ReplicaSet #45770

Closed
themarcelor opened this issue May 13, 2017 · 5 comments
Labels
area/workload-api/deployment sig/apps Categorizes an issue or PR as relevant to SIG Apps.

Comments

@themarcelor

themarcelor commented May 13, 2017

BUG REPORT:

Kubernetes version:

$ kc version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:32:42Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS:

$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)

  • Kernel (e.g. uname -a):

$ uname -a
Linux ip-10-0-0-150.us-west-2.compute.internal 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools:
  • Others:

What happened:
From #24137 (comment)

I had a deployment with label A and tried to replace it by introducing a change to its yaml (basically changing it to use label B). The deployment object was replaced successfully, but the old pod and the old replica set were still lingering.

After I deleted the deployment, a replica set was still lingering there (no longer connected to any deployment, i.e., an orphan ReplicaSet).

What you expected to happen:

I was expecting all pods and replica sets to be deleted after deleting the deployment.

How to reproduce it (as minimally and precisely as possible):
Basically we need a deployment: any sample deployment yaml that you can create with kubectl create -f will do (which will naturally create its respective ReplicaSet and Pod(s)). The trick is to edit the yaml of the existing deployment and change its label. Once the label is modified, run "kubectl replace -f <yaml_file>" and then check the result as follows; a sample manifest is sketched below.
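For illustration, a minimal manifest along these lines should reproduce it (the filename, container name, and image are placeholders, not taken from the original report; the label values follow the pod output further down):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytest-console
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mytest-console   # "label A"
    spec:
      containers:
      - name: console         # placeholder container name
        image: nginx          # placeholder image

$ kubectl create -f mytest-console.yaml
# edit the file so the template label reads app: mytest-job ("label B"), then:
$ kubectl replace -f mytest-console.yaml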

Run "kubectl get pods", you will see two pods in the list, there will be a new pod with the new label and the one with the old label lingers:

$ kubectl get pod mytest-console-1569190182-b4xhs -o=json | grep -A1 -i label
        "labels": {`
            "app": "mytest-console",`
$ kubectl get pod mytest-console-3610439534-x6eaz -o=json | grep -A1 -i label
        "labels": {`
            "app": "mytest-job",`
$ kubectl get deployments | grep mytest
mytest-console              1         1         1            1           8m
$ kubectl get rs | grep mytest
mytest-console-1569190182              1         1         1         8m
mytest-console-3610439534              1         1         1         1m

Note that we have two ReplicaSets here but only one deployment.

$ kubectl delete deployment mytest-console
deployment "mytest-console" deleted
$ kubectl get rs | grep mytest
mytest-console-1569190182              1         1         1         9m

Even after deleting the deployment we can still see the replica set there, i.e., an orphan ReplicaSet.

Anything else we need to know:

@0xmichalis
Contributor

When you say "I had a deployment with label A and I went ahead and tried to replace it by introducing a change to its yaml (basically changing it to use label B)" what labels do you mean?

  1. Metadata labels (d.labels)
  2. Label selector (d.spec.selector)
  3. or pod template labels (d.spec.template.labels)?
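For reference, the three places in a Deployment manifest (field paths as listed above; the concrete values below are illustrative only):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytest-console
  labels:
    team: web                  # 1. metadata labels (d.labels)
spec:
  selector:
    matchLabels:
      app: mytest-console      # 2. label selector (d.spec.selector)
  template:
    metadata:
      labels:
        app: mytest-console    # 3. pod template labels (d.spec.template.labels)
    spec:
      containers:
      - name: console          # placeholder
        image: nginx           # placeholder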

You probably want #36897

@0xmichalis
Contributor

@kubernetes/sig-apps-misc

@k8s-ci-robot added the sig/apps label May 14, 2017
@themarcelor
Author

Number 3: pod template labels (d.spec.template.metadata.labels).

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytest-console
spec:
  template:
    metadata:
      labels:
        name: mytest-console
    spec:

Hm, I'm not sure if #36897 addresses this issue, as the actual problem is that, instead of relabelling the existing RS (ReplicaSet) once the deployment is relabelled, it actually creates 2 pods and 2 RSs, and when we delete the deployment it gets rid of all pods but deletes only one of the RSs.
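One way to check which ReplicaSet each label still matches (using the label values from the pod output above; the expectations in the comments assume the behavior described in this report):

$ kubectl get rs --show-labels | grep mytest
$ kubectl get rs -l app=mytest-console   # old label: the pre-replace ReplicaSet, the one left behind
$ kubectl get rs -l app=mytest-job       # new label: the ReplicaSet created by kubectl replace, deleted with the deployment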

@0xmichalis
Contributor

0xmichalis commented May 17, 2017

You can't just change the pod template labels; once they mismatch with the selector of the Deployment, we return a validation error. Can you post a reproducer of what you are seeing, with the exact steps you followed?
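For completeness, a sketch of the case that validation guards against, assuming the selector is set explicitly (the exact error text may differ between versions):

spec:
  selector:
    matchLabels:
      app: mytest-console      # selector left unchanged
  template:
    metadata:
      labels:
        app: mytest-job        # template label no longer matches the selector -> rejected at validation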

I just tried to reproduce the case where "when we delete the deployment it gets rid of all pods but it deletes only one of the RSs" without any luck: I see the old RS with its Pods untouched. This behavior is expected: a Deployment selects its ReplicaSets based on its selector, and if your selector changes in a non-overlapping way (i.e. the new selector is not a superset of the old selector) then you will end up orphaning all of your previous ReplicaSets. We still need to document that (kubernetes/website#1938).
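As an illustrative fragment (not taken from the original manifests), a non-overlapping selector change looks like this; the ReplicaSet still labelled app: mytest-console is no longer selected, so deleting the Deployment leaves it behind:

# selector before the relabel (matches the original ReplicaSet)
spec:
  selector:
    matchLabels:
      app: mytest-console

# selector after the relabel (no overlap with the old one)
spec:
  selector:
    matchLabels:
      app: mytest-job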

@themarcelor
Copy link
Author

Right, so it all comes down to the selector. Thanks for clarifying.
