Deleting a namespace sometimes leaves orphaned deployments and pods which can't be deleted #36891
Comments
do you know what admission plugins you have enabled, and if you have the NamespaceLifecycle plugin enabled? |
I've seen this too in my test cluster. I'll have to verify what's enabled on my api server. |
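(Not from the thread, but a minimal sketch of how one might verify which admission plugins the apiserver was started with, assuming kube-apiserver runs as a local process or static pod on the master:)

```sh
# Assumption: kube-apiserver is running on this host. On 1.4-era clusters the flag
# was --admission-control; newer releases use --enable-admission-plugins.
# Look for NamespaceLifecycle in the resulting list.
ps -ef | grep '[k]ube-apiserver' | tr ' ' '\n' | grep -i admission
```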
can you grab the yaml of the stuck objects? do they have finalizers or ownerRefs set in their metadata? |
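(For reference, a quick way to answer that kind of question; the object names below are placeholders, not the reporter's actual commands:)

```sh
# Placeholder names; dump the stuck objects to look for finalizers/ownerReferences
kubectl get pod <pod-name> --namespace=<namespace> -o yaml
kubectl get deployment <deployment-name> --namespace=<namespace> -o yaml

# Or pull just the relevant metadata fields with jsonpath
kubectl get pod <pod-name> --namespace=<namespace> \
  -o jsonpath='{.metadata.finalizers}{"\n"}{.metadata.ownerReferences}{"\n"}'
```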
@caesarxuchao any idea if garbage collection could make the namespace controller think it had cleaned everything up, but still leave objects existing? |
@liggitt : if this is a non-release-blocker, please add that label :) |
if it's recreatable, it's a release blocker |
My apiserver (1.4.4) has the following plugins enabled. I don't have a reproducible case, but I know I have seen it before. |
Mine's been configured by
|
I've attempted to export my pods, deployments and namespaces using the following command |
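(The exact command wasn't captured in this copy of the thread; a hypothetical reconstruction of an export along those lines, with made-up file names, might look like:)

```sh
# Hypothetical reconstruction, not the reporter's actual command
kubectl get namespaces -o yaml > namespaces.yaml
kubectl get pods --all-namespaces -o yaml > pods.yaml
kubectl get deployments --all-namespaces -o yaml > deployments.yaml
```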
@wallrj in the title you said deployments were left behind, but the pasted output only showed pods. Could you confirm if the deployment objects were left behind? |
⬆️ See zip file attached above. |
Thanks. The deployment that got left behind:
Looks like it's not caused by the garbage collector. @liggitt are there other suspects? |
The left-behind pod doesn't have a deletion timestamp:
Could this be a race? Can user create an object when the namespace is in the terminating state? @derekwaynecarr |
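(One way to confirm the "no deletion timestamp" observation, with placeholder object names; an empty result means the field was never set:)

```sh
# Placeholder names; prints the pod's deletionTimestamp, or nothing if it was never set
kubectl get pod <pod-name> --namespace=<namespace> \
  -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'
```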
Sounds plausible. My tests originally created a namespace on |
just to double check, single etcd server, single apiserver? |
I think so...how can I check?
|
Looks like an etcd cluster of 3, and a single apiserver. (edit: nm, missed that it was three) |
My environment is 3 etcd and 3 apiservers |
@liggitt if it is indeed reproducible, please add the release-blocker label |
i acknowledge that we will have this discussion again in 4 mos, and we will discuss if the presence of user api servers makes this better/worse ;-) |
We'll be too busy fixing distributed quota bugs. Oh, and acl races on getting access to a namespace. |
In or out for 1.7? |
I don't really see any new options for 1.7 |
Oh wow. That was you?! |
You were there too, you've just blocked it out. |
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
This is happening to me, but I don't recall removing the istio-system namespace when it started, and that's apparently the namespace that got orphaned. The only reason I found it was because it started showing up in my log aggregation, e.g.:
I'm fairly new to Kubernetes, so I'm not sure whether this is expected. There's very little documentation anywhere about "orphaned namespaces". My functioning istio-system containers are 3 days old, and this orphaned traffic started within the last 24 hours and hasn't stopped, so whatever is producing it must still be there somewhere. I don't know how to get rid of it, and it's inflating my log volume. Maybe I'm hitting an edge case, but how do I get rid of this? I can filter out the logging, no problem, but I'm concerned it's eating up resources in my cluster. |
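(Not advice given in this thread, but the workarounds commonly suggested for objects stuck like this look roughly as follows; all names are placeholders, and clearing finalizers should be a last resort:)

```sh
# Placeholders throughout; generic workarounds, not steps from this issue.
# Force-delete a stuck pod (skips graceful termination):
kubectl delete pod <pod-name> --namespace=istio-system --grace-period=0 --force

# If an object is held open by a finalizer, clearing it lets deletion proceed (use with care):
kubectl patch deployment <deployment-name> --namespace=istio-system \
  --type=merge -p '{"metadata":{"finalizers":null}}'
```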
/reopen |
@logicalhan: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Hey,
But the pods/PVCs inside this namespace are stuck in the Terminating phase. |
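(A way to see which objects are still holding a terminating namespace open; <namespace> is a placeholder, and the first command assumes a kubectl new enough to have `api-resources`, roughly v1.11+:)

```sh
# <namespace> is a placeholder. List every namespaced resource type and check what
# still exists in the terminating namespace:
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --ignore-not-found --show-kind --namespace=<namespace>

# The namespace object itself also records which finalizers/conditions are still pending:
kubectl get namespace <namespace> -o yaml
```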
I've got my Kubernetes cluster into a state where there are pods and a deployment that belong to a deleted namespace.
I deleted the namespace (by calling the REST API directly).
And I can't delete the orphaned resources.
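(For context, a direct REST delete of a namespace looks roughly like this; the server address, token, and namespace name are placeholders, not the reporter's actual call:)

```sh
# Placeholders for server, credentials, and namespace name; illustrative only
curl -k -X DELETE \
  -H "Authorization: Bearer $TOKEN" \
  https://<apiserver-host>:6443/api/v1/namespaces/<namespace>
```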
Cloud: Google Cloud Platform
Installed according to the kubeadm documentation: http://kubernetes.io/docs/getting-started-guides/kubeadm/