
kubectl delete replication controller hangs for the same spec but with different name #8598

Closed
yifan-gu opened this issue May 21, 2015 · 7 comments

Comments

@yifan-gu
Contributor

$ cat t1.yaml
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: kube-nginx1
  namespace: default
spec:
  replicas: 1
  selector:
    k8s-app: kube-nginx1
  template:
    metadata:
      labels:
        k8s-app: kube-nginx1
    spec:
      containers:
      - name: nginx
        image: nginx

$ cat t2.yaml
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: kube-nginx2
  namespace: default
spec:
  replicas: 1
  selector:
    k8s-app: kube-nginx1
  template:
    metadata:
      labels:
        k8s-app: kube-nginx1
    spec:
      containers:
      - name: nginx
        image: nginx

$ kubectl create -f t1.yaml
replicationcontrollers/kube-nginx1
$ kubectl create -f t2.yaml
replicationcontrollers/kube-nginx2
$ kubectl delete -f t2.yaml (hangs for several minutes)

$ kubectl get rc
CONTROLLER    CONTAINER(S)   IMAGE(S)   SELECTOR              REPLICAS
kube-nginx1   nginx          nginx      k8s-app=kube-nginx1   1
kube-nginx2   nginx          nginx      k8s-app=kube-nginx1   0
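
Note that both RCs share the selector k8s-app=kube-nginx1, so the single running pod matches both of them. One way to see the overlap (assuming a standard kubectl setup) is to list pods by that label:

$ kubectl get pods -l k8s-app=kube-nginx1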
@yifan-gu changed the title from "kubectl delete replication controller hangs forever for the same spec but with different name" to "kubectl delete replication controller hangs for the same spec but with different name" on May 21, 2015
@yifan-gu
Contributor Author

Well, it's not hanging forever, but it takes an abnormally long time to return.

@yujuhong
Contributor

@yifan-gu, we don't check if the replication controllers have the same selector, but we should (#2210). It should not allow you to create the second replication controller.
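
Until #2210 lands, one way to spot overlapping selectors by hand (a sketch, assuming a kubectl version with jsonpath output support) is to dump each RC's name and selector and compare them:

$ kubectl get rc -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.selector}{"\n"}{end}'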

@yifan-gu
Contributor Author

@yujuhong I was expecting that, since it's confusing what happens if the two RCs specify different replica counts...

@derekwaynecarr
Member

What if you do --cascade=false for the delete command? I hope that actually completes.


@yifan-gu
Contributor Author

@derekwaynecarr --cascade=false works; it seems like this should be covered by #2210. Thanks!
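
For anyone hitting the same thing, the non-cascading workaround with the files above is just:

$ kubectl delete rc kube-nginx2 --cascade=false

or equivalently kubectl delete -f t2.yaml --cascade=false. This removes only the RC object; the pod left behind still matches kube-nginx1's selector, so that controller keeps managing it.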

@ye

ye commented Sep 20, 2016

@derekwaynecarr --cascade=false indeed worked for me as well. However, without cleaning up the resources in a cascading fashion, would it eventually clog up the k8s server? Who's going to do the garbage collection?

@derekwaynecarr
Member

Server-side garbage collection is alpha and on by default in 1.4.
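
For reference, the 1.4 garbage collector works from metadata.ownerReferences on dependent objects; a quick way to check whether a leftover pod is still owned by a controller (a sketch, with a hypothetical pod name) is:

$ kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[*].name}'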

