Pod or Job lifecycle lacks a mechanism to define cleanup actions once you delete a pod #35183
Comments
@hectorj2f did you look at Termination of Pods, particularly step 5? |
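For reference, a minimal sketch of the existing preStop hook that step refers to (the image and the /bin/cleanup.sh script are placeholders): the hook runs as part of graceful termination, before the container is stopped, but it is best-effort and is not guaranteed to run on force deletion or node failure.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cleanup-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: busybox                      # placeholder image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "/bin/cleanup.sh"]   # hypothetical cleanup script
```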
We've hit this when looking at how to do image builds using a container. Ideally, we'd run a container in the pod to set the image state, then commit and push that image in a follow-on step. But that requires a more complex lifecycle than our current flow allows. We had concerns about adding post-containers or post-stop hooks when we discussed init containers. |
@smarterclayton What are those concerns? With the current lifecycle, the only option is to extend the container itself so it cleans up when the termination hook is called. What is blocking this new feature for the pod lifecycle? @smarterclayton I'd like to help if that is what is needed to push this issue forward. |
Lifecycle hooks for different objects (e.g. Deployments) have been discussed elsewhere. Custom strategies are also related to this - having the ability to customize the lifecycle of X means you can add your own hooks anywhere you want in the process. See #33545 (comment) for more information. |
@kubernetes/sig-apps |
It sounds like what you are looking for is analogous to an init container, except that it runs after the pod is shut down. Would a feature like this, let's call it a de-init container, be suitable for your requirements? |
I think having something similar to init containers that is launched when the pod is terminated could make sense. |
We had hesitations around adding finalizing containers when designing init containers. Allowing long strings of finalizers adds an additional state machine component to the pod that is new (and hard to make backwards compatible). There is also a serious desire to avoid anything close to a graph of tasks.
Use case wise, I've heard:
1. Release resources acquired during init (global locks, API registration)
2. Upload contents from shared volumes (job logs, build artifacts)
3. Commit one of the app containers and push it (image build)
4. Signal job finalization to arbitrary entities.
So "outputs" and "signal", primarily. What others are relevant? There's also the case that the caller might want more info from the pods. What guarantees are required for these use cases?
Slightly deeper thought: could we translate this into a container that receives a hook execution / signal when all other containers terminate?
Pod
  Spec
    Containers
    - OnShutdown:          # (LifecycleHook)
        Exec /bin/cleanup.sh
Adding a new post container could be unnecessary if a lifecycle hook was sufficient, although this does seem to fall afoul of the "no graph concepts of tasks" rule.
|
I understand that adding
On the other hand, if we add a hook when deleting a deployment, my main concern is being able to use containers that are different from the containers that run in the deployment's Pod, as an example. If so, I'd be happy with your approach, but I believe that with your solution we aren't able to run any logic outside of the container itself, as happens with the current termination hooks. Doing so would require to:
|
I can imagine some complications around that area. How are we going to solve this problem:
Second problem:
|
I have basically the same requirement, but for a DaemonSet: I work on Weave Net, which installs itself on every node via a DaemonSet. The install creates some side effects on each node - network virtual devices, etc. - that a user would like to clear down if they decide to uninstall. Users variously expect this to happen via

We can't do an uninstall on a simple pod stop or delete - this will happen when the software is being upgraded, and destroying the pod network is very bad UX for an upgrade. We really need to know that the entire DaemonSet is being deleted.

@pigmej makes some good points, but it seems to me that users would appreciate the code making some effort to clean up, rather than no effort at all as at present. |
@bboreham |
No. It is because the code to remove a node was removed in #42713, @cheburakshu |
@bboreham Will the master still schedule pods onto the node, since it still sees it? In my experience it did, and the pod was stuck in a Pending state. Is that the correct behaviour? |
@cheburakshu I dare say it will; however that is off-topic for this issue. Per #42713 the system administrator has to remove the node. Suggest you open an issue against kubeadm if this doesn't suit. |
I have the same requirement. CNI plugins typically install files (binaries and a config file) on each node. I'm doing the installing part using a DaemonSet. In case this is not guaranteed, a naive thought is to consider a preStop hook. |
Confirmed that this is not guaranteed. If I sleep in the |
Hi @bboreham, @tgraf, @cheburakshu, could you please take a look at this proposal and provide more feedback? It appears to me that we could achieve these use cases using deferContainers; this would not only allow us to isolate termination scripts from the main image but also provide stronger guarantees and more control than preStop hooks. |
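For context, a rough sketch of how a defer container might be declared, assuming the field name from that proposal (deferContainers is not an accepted Kubernetes API; images and commands below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: job-with-cleanup
spec:
  containers:
  - name: main
    image: busybox                           # placeholder workload image
    command: ["sh", "-c", "echo working; sleep 60"]
  deferContainers:                           # hypothetical field from the proposal
  - name: cleanup
    image: busybox                           # separate image holding only the cleanup logic
    command: ["sh", "-c", "/bin/cleanup.sh"] # would run after the app containers terminate
```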
@dhillipkumars I couldn't see how that would allow me to know that the entire DaemonSet is being deleted. Conversely I don't seem to need additional isolation. |
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
How about using a finalizer? In the case of a Pod, it would need permission to write to its own object, which is unorthodox. In the case of a DaemonSet, the pods could all add themselves as finalizers to the DaemonSet, but that gets out of hand at scale. Add another pod to coordinate shutdown (an operator, if you like) and it feels like it could work. |
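A minimal sketch of that finalizer idea, assuming a hypothetical cleanup controller that owns an example.com/cleanup key (names and images are illustrative): deleting the DaemonSet sets its deletionTimestamp, but the object is only removed once the controller finishes node cleanup and patches the key out of metadata.finalizers.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                     # illustrative name
  finalizers:
  - example.com/cleanup                # hypothetical key owned by a cleanup controller
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox                 # placeholder image
        command: ["sh", "-c", "sleep 3600"]
```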
Any update? I have a similar requirement now. I want a hook to do something when a Job is deleted (or a Pod is terminated). I tried to use the |
This proposal, kubernetes/enhancements#1995, seems to handle this use case; please review it and see whether it is going in the right direction for this issue. |
@krmayankk can you point more precisely to how kubernetes/enhancements#1995 would notify when a pod, deployment, daemonset, etc., is being deleted? I think it could be extended to cover the case described here, by defining system-triggered notifications, but all I can see in the proposal as it stands is user-triggered. |
/kind feature |
I don't think we're going to implement new "do this when a pod is deleted" hooks in the core system any time soon. kubernetes/enhancements#1995 is not what you want here; it's what bboreham described - user-triggered. If someone wants to think up a general event integration, it should at least start as an out-of-core implementation. |
Our user story for a FEATURE REQUEST is the following: we have been developing distributed applications with systemd for a long time. To coordinate them, we used fleet as the distributed system that managed them all across our infrastructure. However, several months ago we decided to move all our systems to Kubernetes.
Despite all the amazing features Kubernetes offers for defining applications, we are missing a good one that we had when creating systemd units.
We searched through all the features and couldn't find any available mechanism to trigger delete-or-cleanup actions. As an example, I have a pod that does some setup (like adding a key to etcd or setting some iptables rules); when I delete the pod, Kubernetes doesn't provide a way to trigger a corresponding action (like removing the key from etcd or cleaning up the iptables rules). So we have to create additional pods to do those jobs, which is ugly from an app-lifecycle-definition point of view.
Therefore, I'd like to know whether there is any possibility of getting this functionality into Kubernetes. Also, if you know of any third-party project trying to achieve the same goal, that would be really helpful for us.
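For illustration, the kind of extra cleanup workload described above that has to be created today, assuming a hypothetical /bin/cleanup.sh that removes the etcd key and flushes the iptables rules; the Job must be created and tracked separately from the application, which is exactly the awkwardness being raised.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: app-cleanup
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: cleanup
        image: busybox                            # placeholder image
        command: ["sh", "-c", "/bin/cleanup.sh"]  # hypothetical: remove etcd key, clean iptables rules
```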