
Deleting a namespace sometimes leaves orphaned deployments and pods which can't be deleted #36891

Closed
wallrj opened this issue Nov 16, 2016 · 49 comments
Assignees
Labels
area/api Indicates an issue on api area. area/apiserver area/HA lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@wallrj
Contributor

wallrj commented Nov 16, 2016

I've got my Kubernetes cluster into a state where there are pods and a deployment that belong to a deleted namespace.

I deleted the namespace (by calling the REST API directly).

And now I can't delete the orphaned resources.
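For reference, deleting a namespace directly through the REST API amounts to a call like the following (server address, port, and token are placeholders, not taken from this report):

```shell
# DELETE /api/v1/namespaces/<name> marks the namespace Terminating; the
# namespace controller then deletes its contents asynchronously.
# $TOKEN and the server address below are placeholders.
curl -k -X DELETE \
  -H "Authorization: Bearer $TOKEN" \
  https://127.0.0.1:6443/api/v1/namespaces/my-test-namespace
```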

root@acceptance-test-richardw-l2ljj4wewubtc-0:~# kubectl get --show-all --all-namespaces pods
NAMESPACE                                                         NAME                                                               READY     STATUS    RESTARTS   AGE
kube-system                                                       dummy-2088944543-v5fg8                                             1/1       Running   1          1d
kube-system                                                       etcd-acceptance-test-richardw-l2ljj4wewubtc-0                      1/1       Running   1          1d
kube-system                                                       kube-apiserver-acceptance-test-richardw-l2ljj4wewubtc-0            1/1       Running   1          1d
kube-system                                                       kube-controller-manager-acceptance-test-richardw-l2ljj4wewubtc-0   1/1       Running   3          1d
kube-system                                                       kube-discovery-1150918428-f4k00                                    1/1       Running   1          1d
kube-system                                                       kube-dns-654381707-ywhvt                                           3/3       Running   0          1d
kube-system                                                       kube-proxy-5yycz                                                   1/1       Running   0          1d
kube-system                                                       kube-proxy-pbft1                                                   1/1       Running   0          1d
kube-system                                                       kube-scheduler-acceptance-test-richardw-l2ljj4wewubtc-0            1/1       Running   3          1d
kube-system                                                       weave-net-czyj0                                                    2/2       Running   0          1d
kube-system                                                       weave-net-obiaa                                                    2/2       Running   0          1d
t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385   nginx-deployment-4087004473-7hh33                                  1/1       Running   0          2h
t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385   nginx-deployment-4087004473-dfg1d                                  1/1       Running   0          2h
t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385   nginx-deployment-4087004473-fxmfr                                  1/1       Running   0          2h
root@acceptance-test-richardw-l2ljj4wewubtc-0:~# kubectl get --show-all namespaces                                                                                       
NAME          STATUS    AGE
default       Active    2h
kube-system   Active    2h
root@acceptance-test-richardw-l2ljj4wewubtc-0:~# kubectl delete --namespace t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385 pod nginx-deployment-4087004473-fxmfr
Error from server: namespaces "t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385" not found
root@acceptance-test-richardw-l2ljj4wewubtc-0:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:42:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
root@acceptance-test-richardw-l2ljj4wewubtc-0:~# cat /etc/os-release 
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=xenial
uname -a
Linux acceptance-test-richardw-l2ljj4wewubtc-0 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Cloud: Google Cloud Platform
Installed according to kubeadm documentation http://kubernetes.io/docs/getting-started-guides/kubeadm/

@liggitt
Member

liggitt commented Nov 16, 2016

cc @derekwaynecarr

Do you know which admission plugins you have enabled? In particular, is the NamespaceLifecycle plugin among them?

@rothgar
Member

rothgar commented Nov 17, 2016

I've seen this too in my test cluster. I'll have to verify what's enabled on my api server.

@liggitt
Member

liggitt commented Nov 17, 2016

Can you grab the YAML of the stuck objects? Do they have finalizers or ownerRefs set in their metadata?
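For anyone checking the same thing, the relevant metadata can be pulled with something like the following (pod and namespace names are taken from the report above):

```shell
# Print just the finalizers and ownerReferences of a stuck pod; an empty
# first line means no finalizers are set on the object.
kubectl get pod nginx-deployment-4087004473-dfg1d \
  -n t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385 \
  -o jsonpath='{.metadata.finalizers}{"\n"}{.metadata.ownerReferences}{"\n"}'
```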

@liggitt
Member

liggitt commented Nov 17, 2016

@caesarxuchao any idea if garbage collection could make the namespace controller think it had cleaned everything up, but still leave objects existing?

@liggitt liggitt added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. area/apiserver team/api and removed area/kubectl team/ux labels Nov 17, 2016
@liggitt liggitt added this to the v1.5 milestone Nov 17, 2016
@dims
Member

dims commented Nov 17, 2016

@liggitt : if this is a non-release-blocker, please add that label :)

@liggitt
Member

liggitt commented Nov 17, 2016

If it's reproducible, it's a release blocker.

@rothgar
Member

rothgar commented Nov 17, 2016

My apiserver (1.4.4) has the following plugins:
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota

I don't have a reproducible case, but I know I've seen it before.

@wallrj
Contributor Author

wallrj commented Nov 17, 2016

Mine's been configured by kubeadm. Here's a snippet of the `docker inspect` output:

# docker inspect k8s_kube-apiserver.67fed3c3_kube-apiserver-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_21cc3e1299bfcfd3c8fd311efa792984_b3c9db7f
...
            "Image": "gcr.io/google_containers/kube-apiserver-amd64:v1.4.4",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "kube-apiserver",
                "--v=2",
                "--insecure-bind-address=127.0.0.1",
                "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
                "--service-cluster-ip-range=10.96.0.0/12",
                "--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem",
                "--client-ca-file=/etc/kubernetes/pki/ca.pem",
                "--tls-cert-file=/etc/kubernetes/pki/apiserver.pem",
                "--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
                "--token-auth-file=/etc/kubernetes/pki/tokens.csv",
                "--secure-port=6443",
                "--allow-privileged",
                "--advertise-address=10.240.0.4",
                "--etcd-servers=http://127.0.0.1:2379"
            ],

@wallrj
Contributor Author

wallrj commented Nov 17, 2016

I've attempted to export my pods, deployments, and namespaces using the following command: `kubectl get --show-all --all-namespaces --output yaml --export ...`
kubernetes_resources.zip

@caesarxuchao
Member

@wallrj in the title you said deployments were left behind, but the pasted output only shows pods. Could you confirm whether the deployment objects were left behind?

@wallrj
Contributor Author

wallrj commented Nov 17, 2016

⬆️ See zip file attached above.

@caesarxuchao
Member

Thanks. The deployment that got left behind:

- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: 2016-11-16T11:42:48Z
    generation: 2
    labels:
      app: nginx
    name: nginx-deployment
    namespace: t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385
    resourceVersion: "106662"
    selfLink: /apis/extensions/v1beta1/namespaces/t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385/deployments/nginx-deployment
    uid: cc6d50b8-abf1-11e6-b9f6-42010af00004
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: nginx
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 1
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: nginx
      spec:
        containers:
        - image: nginx:1.7.9
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
          - containerPort: 80
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 3
    observedGeneration: 2
    replicas: 3
    updatedReplicas: 3
kind: List
metadata: {}

Looks like it's not caused by the garbage collector. @liggitt are there other suspects?

@caesarxuchao
Member

caesarxuchao commented Nov 17, 2016

The left-behind pod doesn't have a deletion timestamp:

- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubernetes.io/created-by: |
        {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385","name":"nginx-deployment-4087004473","uid":"cc6dd651-abf1-11e6-b9f6-42010af00004","apiVersion":"extensions","resourceVersion":"106622"}}
    creationTimestamp: 2016-11-16T11:42:48Z
    generateName: nginx-deployment-4087004473-
    labels:
      app: nginx
      pod-template-hash: "4087004473"
    name: nginx-deployment-4087004473-dfg1d
    namespace: t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385
    ownerReferences:
    - apiVersion: extensions/v1beta1
      controller: true
      kind: ReplicaSet
      name: nginx-deployment-4087004473
      uid: cc6dd651-abf1-11e6-b9f6-42010af00004
    resourceVersion: "106661"
    selfLink: /api/v1/namespaces/t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385/pods/nginx-deployment-4087004473-dfg1d
    uid: cc704cfe-abf1-11e6-b9f6-42010af00004

Could this be a race? Can a user create an object while the namespace is in the Terminating state? @derekwaynecarr

@wallrj
Contributor Author

wallrj commented Nov 17, 2016

> Could this be a race? Can user create an object when the namespace is in the terminating state?

Sounds plausible. My tests create a namespace in setUp, create a Deployment, then immediately delete the namespace in tearDown ... perhaps before the Deployment's pods had been created and launched.
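The suspected interleaving can be sketched with a toy, in-memory shell model (purely illustrative, not Kubernetes code): admission passes while the namespace still exists, the namespace controller then sees an empty namespace and deletes it, and the admitted write lands afterwards as an orphan.

```shell
# Toy model of the suspected race, using shell variables as a stand-in
# for etcd. Not Kubernetes code; purely illustrative.

NAMESPACES="test-ns"        # namespace registry
PODS=""                     # object store

# Admission: a create is only admitted while the namespace exists.
admit() { echo "$NAMESPACES" | grep -q "$1" || { echo "namespace not found"; exit 1; }; }

# Namespace controller: delete the namespace's currently visible contents,
# then drop the namespace itself.
delete_ns() {
  PODS=$(echo "$PODS" | grep -v "^$1/" || true)
  NAMESPACES=$(echo "$NAMESPACES" | grep -v "^$1\$" || true)
}

admit test-ns                      # t0: create admitted (namespace exists)
delete_ns test-ns                  # t1: namespace deleted; it looks empty
PODS="test-ns/nginx-1"             # t2: the admitted write lands -- an orphan

echo "namespaces: [$NAMESPACES]"
echo "pods:       [$PODS]"
```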

@liggitt
Member

liggitt commented Nov 17, 2016

Just to double-check: single etcd server, single apiserver?

@wallrj
Contributor Author

wallrj commented Nov 17, 2016

I think so...how can I check?

# docker ps
CONTAINER ID        IMAGE                                                           COMMAND                  CREATED             STATUS              PORTS               NAMES
11c6935eda92        gcr.io/google_containers/kube-controller-manager-amd64:v1.4.4   "kube-controller-mana"   31 hours ago        Up 31 hours                             k8s_kube-controller-manager.edfea2a0_kube-controller-manager-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_aa642bd88d7c4f8636a7f3d6df731565_9c49f5fb
60ff677342db        gcr.io/google_containers/kube-scheduler-amd64:v1.4.4            "kube-scheduler --v=2"   31 hours ago        Up 31 hours                             k8s_kube-scheduler.c164d8b1_kube-scheduler-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_87687297b5040798a1737ed4061b3dab_91c0ef4f
9bb059aeca61        nginx:1.7.9                                                     "nginx -g 'daemon off"   31 hours ago        Up 31 hours                             k8s_nginx.f5cb328e_nginx-deployment-4087004473-fxmfr_t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385_cc6fbcdb-abf1-11e6-b9f6-42010af00004_9b9d7a3c
3abda6bfaa3a        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 31 hours ago        Up 31 hours                             k8s_POD.d8dbe16c_nginx-deployment-4087004473-fxmfr_t-kubernetesplugin-kubernetesplugintests-test-create-pod-355385_cc6fbcdb-abf1-11e6-b9f6-42010af00004_c20d3bf1
0f89fd214e4a        gcr.io/google_containers/exechealthz-amd64:1.1                  "/exechealthz '-cmd=n"   2 days ago          Up 2 days                               k8s_healthz.13f3684f_kube-dns-654381707-ywhvt_kube-system_40be01fa-ab24-11e6-aab9-42010af00004_02c21998
c8b56bd5ccb6        gcr.io/google_containers/kube-dnsmasq-amd64:1.3                 "/usr/sbin/dnsmasq --"   2 days ago          Up 2 days                               k8s_dnsmasq.45cf67a6_kube-dns-654381707-ywhvt_kube-system_40be01fa-ab24-11e6-aab9-42010af00004_fa7b9b01
362cd29e0cba        gcr.io/google_containers/kubedns-amd64:1.7                      "/kube-dns --domain=c"   2 days ago          Up 2 days                               k8s_kube-dns.c35f8fb3_kube-dns-654381707-ywhvt_kube-system_40be01fa-ab24-11e6-aab9-42010af00004_66140880
02bd066f7068        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_POD.d8dbe16c_kube-dns-654381707-ywhvt_kube-system_40be01fa-ab24-11e6-aab9-42010af00004_551e2fb7
1ed78a9fee7c        weaveworks/weave-npc:latest                                     "/usr/bin/weave-npc"     2 days ago          Up 2 days                               k8s_weave-npc.a795942e_weave-net-obiaa_kube-system_41458481-ab24-11e6-aab9-42010af00004_097e3e7d
f54c67c389ad        weaveworks/weave-kube:latest                                    "/home/weave/launch.s"   2 days ago          Up 2 days                               k8s_weave.25a9abdf_weave-net-obiaa_kube-system_41458481-ab24-11e6-aab9-42010af00004_877f2b84
4a97839cc812        gcr.io/google_containers/kube-proxy-amd64:v1.4.4                "kube-proxy --v=2 --k"   2 days ago          Up 2 days                               k8s_kube-proxy.2e54b651_kube-proxy-5yycz_kube-system_40b9c82d-ab24-11e6-aab9-42010af00004_745c55bd
ab948e85588e        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_POD.d8dbe16c_kube-scheduler-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_87687297b5040798a1737ed4061b3dab_c9dd4912
382ed9c6acf0        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_POD.d8dbe16c_weave-net-obiaa_kube-system_41458481-ab24-11e6-aab9-42010af00004_d74c291e
387c2f64b674        gcr.io/google_containers/kube-apiserver-amd64:v1.4.4            "kube-apiserver --v=2"   2 days ago          Up 2 days                               k8s_kube-apiserver.67fed3c3_kube-apiserver-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_21cc3e1299bfcfd3c8fd311efa792984_b3c9db7f
c306725b1864        gcr.io/google_containers/kube-discovery-amd64:1.0               "/usr/local/bin/kube-"   2 days ago          Up 2 days                               k8s_kube-discovery.61afcdf5_kube-discovery-1150918428-f4k00_kube-system_3bf46152-ab24-11e6-aab9-42010af00004_802bc1d6
e49bd2eb1682        gcr.io/google_containers/etcd-amd64:2.2.5                       "etcd --listen-client"   2 days ago          Up 2 days                               k8s_etcd.5a0e984b_etcd-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_a81a848a0bf7f0d38bc5f9b6b3a7140a_60cab3d5
1a751075bd3b        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_dummy.80f41be0_dummy-2088944543-v5fg8_kube-system_3b5670e8-ab24-11e6-aab9-42010af00004_89b7b29a
2b939a931f5c        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_POD.d8dbe16c_kube-apiserver-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_21cc3e1299bfcfd3c8fd311efa792984_cf6e310a
cb12111fffaa        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_POD.d8dbe16c_kube-controller-manager-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_aa642bd88d7c4f8636a7f3d6df731565_9c573843
faea5f30a83e        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_POD.d8dbe16c_kube-discovery-1150918428-f4k00_kube-system_3bf46152-ab24-11e6-aab9-42010af00004_d638c5d1
6b54a0d40bc2        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_POD.d8dbe16c_etcd-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_a81a848a0bf7f0d38bc5f9b6b3a7140a_67e6b71b
bc79f52ed9fe        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_POD.d8dbe16c_dummy-2088944543-v5fg8_kube-system_3b5670e8-ab24-11e6-aab9-42010af00004_6aba46fc
9ea3ab6a6803        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_POD.d8dbe16c_kube-proxy-5yycz_kube-system_40b9c82d-ab24-11e6-aab9-42010af00004_15889443
root@acceptance-test-richardw-l2ljj4wewubtc-0:~# docker ps | grep etcd
e49bd2eb1682        gcr.io/google_containers/etcd-amd64:2.2.5                       "etcd --listen-client"   2 days ago          Up 2 days                               k8s_etcd.5a0e984b_etcd-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_a81a848a0bf7f0d38bc5f9b6b3a7140a_60cab3d5
6b54a0d40bc2        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 2 days ago          Up 2 days                               k8s_POD.d8dbe16c_etcd-acceptance-test-richardw-l2ljj4wewubtc-0_kube-system_a81a848a0bf7f0d38bc5f9b6b3a7140a_67e6b71b

@liggitt
Member

liggitt commented Nov 17, 2016

Looks like an etcd cluster of 3 and a single apiserver.

Edit: never mind, I missed that those were separate `docker ps` calls.

@rothgar
Member

rothgar commented Nov 17, 2016

My environment is 3 etcd and 3 apiservers

@grodrigues3
Contributor

@liggitt if it is indeed reproducible, please add the release-blocker label

@liggitt liggitt self-assigned this Nov 18, 2016
@derekwaynecarr
Member

I acknowledge that we will have this discussion again in 4 months, and we will discuss whether the presence of user API servers makes this better or worse ;-)

@smarterclayton
Contributor

smarterclayton commented Mar 17, 2017 via email

@bgrant0607 bgrant0607 added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/apps Categorizes an issue or PR as relevant to SIG Apps. and removed team/api (deprecated - do not use) labels Mar 21, 2017
@lavalamp
Member

In or out for 1.7?

@liggitt
Member

liggitt commented May 28, 2017

I don't really see any new options for 1.7

@deads2k
Contributor

deads2k commented Jun 2, 2017

> I'm not aware of reoccurrences since #37431 was merged,

Oh wow. That was you?!

@liggitt
Member

liggitt commented Jun 2, 2017

> Oh wow. That was you?!

You were there too, you've just blocked it out.

@enisoc enisoc removed the sig/apps Categorizes an issue or PR as relevant to SIG Apps. label Jun 9, 2017
@lavalamp lavalamp removed this from the v1.7 milestone Jun 14, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 27, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 26, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@maroda

maroda commented Jul 12, 2019

This is happening to me, but I don't recall deleting the istio-system namespace when it started, and that's apparently the namespace that got orphaned. The only reason I found it was that it started showing up in my log aggregation, e.g.:

{"timestamp":1562969399610,"log":{"level":"info","time":"2019-07-12T22:09:59.505595Z","instance":"accesslog.instance.istio-system","apiClaims":"","apiKey":"","clientTraceId":"","connection_security_policy":"none","destinationApp":"bacque","destinationIp":"192.168.30.204","destinationName":"bacque-v012-6f99d6f697-h52ls","destinationNamespace":"crq","destinationOwner":"kubernetes://apis/apps/v1/namespaces/crq/deployments/bacque-v012","destinationPrincipal":"","destinationServiceHost":"bacque.crq.svc.cluster.local","destinationWorkload":"bacque-v012","grpcMessage":"","grpcStatus":"","httpAuthority":"bacque:9999","latency":"64.368133ms","method":"GET","permissiveResponseCode":"none","permissiveResponsePolicyID":"none","protocol":"http","receivedBytes":613,"referer":"","reporter":"destination","requestId":"e96fdbc7-7b39-9a1e-8da5-3e79eaa1ef86","requestSize":0,"requestedServerName":"","responseCode":200,"responseFlags":"-","responseSize":65,"responseTimestamp":"2019-07-12T22:09:59.569854Z","sentBytes":209,"sourceApp":"craque","sourceIp":"192.168.197.18","sourceName":"craque-v012-6d6f84c774-9f9vt","sourceNamespace":"crq","sourceOwner":"kubernetes://apis/apps/v1/namespaces/crq/deployments/craque-v012","sourcePrincipal":"","sourceWorkload":"craque-v012","url":"/fetch","userAgent":"craquego","xForwardedFor":"0.0.0.0"},"stream":"stdout","docker":{"container_id":"0acbffc89a020e048c26caa10da869607ddfbf506e3cb9014b2d7b54f3b84a5d"},"kubernetes":{"container_name":"mixer","namespace_name":".orphaned","pod_name":"istio-telemetry-79c7f498fb-kbhr2","orphaned_namespace":"istio-system","namespace_id":"orphaned"}}

I'm fairly new to Kubernetes, so I'm not sure whether this is something to expect. There's very little documentation anywhere about "orphaned namespaces".

My functioning istio-system containers are 3 days old, and this orphaned traffic started within the last 24 hours and hasn't stopped, so the orphan must still be there somewhere. I don't know how to get rid of it, and it certainly hurts my log volume.

Maybe I'm hitting an edge condition, but how do I get rid of this? I can filter out the logging, no problem, but I'm concerned it's eating up resources in my cluster.

@logicalhan
Member

/reopen

@k8s-ci-robot
Contributor

@logicalhan: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Mar 12, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@alizdavoodi

Hey,
We're experiencing the exact issue mentioned here.
Cluster version: 1.21
We deleted a namespace, and the namespace itself appears to have been deleted successfully
(we had to remove the finalizer in order to delete the namespace):

> k get ns arangodb
Error from server (NotFound): namespaces "arangodb" not found

But the pods/PVCs inside this namespace are stuck in the Terminating phase:

 k -n arangodb get po -o wide
NAME                                                READY   STATUS        RESTARTS   AGE     IP              NODE                                             NOMINATED NODE   READINESS GATES
acc-arangodb-cluster-agnt-8ekz9d9y-a0e89b           0/1     Terminating   0              <none>           <none>
acc-arangodb-cluster-agnt-bwqwdnii-a0e89b           0/1     Terminating   0             <none>           <none>
acc-arangodb-cluster-agnt-dignjh4o-a0e89b           0/1     Terminating   0             <none>           <none>
acc-arangodb-cluster-prmr-ago5vxvf-a0e89b           0/1     Terminating   0             <none>           <none>
acc-arangodb-cluster-prmr-jrtfk9g5-a0e89b           0/1     Terminating   0             <none>           <none>
pre-upgrade-arangodb-cluster-agnt-fmfzytzb-ab3b5b   0/1     Terminating   0             <none>           <none>
pre-upgrade-arangodb-cluster-agnt-io4fsin7-ab3b5b   0/1     Terminating   0              <none>           <none>
pre-upgrade-arangodb-cluster-agnt-oq7itwew-ab3b5b   0/1     Terminating   0             <none>           <none>
pre-upgrade-arangodb-cluster-prmr-qng5l3xg-ab3b5b   0/1     Terminating   0              <none>           <none>
pre-upgrade-arangodb-cluster-prmr-rgh6fqq3-ab3b5b   0/1     Terminating   0             <none>           <none>
prod-arangodb-cluster-agnt-3t84z7a1-d1e132          0/1     Terminating   0             <none>           <none>
prod-arangodb-cluster-agnt-khjtyqje-d1e132          0/1     Terminating   0             <none>           <none>
prod-arangodb-cluster-agnt-ktx0uw8a-d1e132          0/1     Terminating   0             <none>           <none>
prod-arangodb-cluster-prmr-600o5sou-fb74cc          0/1     Terminating   0             <none>           <none>
prod-arangodb-cluster-prmr-8ntkvxhv-fb74cc          0/1     Terminating   0             <none>           <none>
prod-arangodb-cluster-prmr-nojlvuwv-fb74cc          0/1     Terminating   0             <none>           <none>
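For pods that remain stuck in Terminating after their namespace is gone, a force delete sometimes clears the API object. Use with care, since this skips the graceful-shutdown wait; the pod name below is taken from the output above.

```shell
# Force-remove a pod stuck in Terminating; --grace-period=0 --force deletes
# the API object immediately instead of waiting for graceful termination.
kubectl delete pod acc-arangodb-cluster-agnt-8ekz9d9y-a0e89b \
  -n arangodb --grace-period=0 --force
```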
