Feature request: A way to signal pods #24957
Comments
We can just use config map as the kube-event-bus. It's what I was going to do for petset. I need to notify petset peers about changes in cluster membership. The idea is people will write "on-change" hook handlers, the petset will create a config map and write to it anytime it creates a new pet. The kubelet runs a handler exec/http-probe style on every change to the config map. I'd rather have a hook than an explicit signal because I don't need a proxy to listen for the signal and reload the pet. In fact, to do this reliably I need a handler, because I need to poll till the pet shows up in DNS. I vastly prefer this to running a pid 1 that is cluster-aware (autopilot strategy).
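For illustration, a rough YAML sketch of the hook shape being described; the `onChange` stanza is entirely hypothetical (nothing like it exists in the Pod API) and all names are invented:

```yaml
# Hypothetical sketch only; "onChange" is an imagined field modeled on the
# existing exec/http probe handlers, not a real Kubernetes API.
apiVersion: v1
kind: Pod
metadata:
  name: pet-0
spec:
  containers:
  - name: pet
    image: example/db:latest        # placeholder image
    onChange:                       # imagined: fires on every ConfigMap change
      configMapRef:
        name: cluster-membership    # ConfigMap the petset controller writes to
      exec:
        command: ["/on-change.sh"]  # e.g. polls DNS until the new pet resolves
```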
I opened this as a separate PR because I have been asked several times for …
You won't need a sidecar, you'll need a shell script that sends …
You're missing the point. There are other things people want to do that …
That's not part of what I want. I explicitly don't want a long-running process aware of the config map.
I think this is trivial. Not bash, sh, or ash. Or http. The same things you need for a probe, a post-start hook, or a pre-stop hook: concepts we already have. In short, I want a way to tell the container about a reconfigure event, and I don't want to teach all databases to handle a signal properly.
And I'm trying not to end up with two overlapping concepts when you can achieve one with the other. Somehow I don't think people will be against a probe-like thing instead of a straight signal.
It feels like a hack to me. I'd be fine with notifiers, and with notifiers …
Hmm, my feature request also doesn't require a config map; it's the common case. Maybe a notifier of type=probe,exec,signal?
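A hypothetical sketch of what such a notifier stanza could look like (none of these fields exist; the names are invented for illustration):

```yaml
# Imagined "notifiers" list, sketched from the type=probe,exec,signal idea.
spec:
  containers:
  - name: app
    notifiers:
    - name: reconfigure
      type: signal
      signal: SIGHUP                 # delivered to the container's pid 1
    - name: membership-change
      type: exec
      exec:
        command: ["/hooks/on-change.sh"]
```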
Regarding signals and restarts, is kill the right signal, or do users want to restart the pod itself? I feel like the granularity on signals is processes (of which container is a relatively good proxy), but the granularity on restart / exec actions is containers or pods.
I could see an argument for "restart" as a verb on pods (albeit with the …
Is "restart" a synthetic signal then? If we add exec it's a little
wierd because we would potentially have to secure exec differently
than signal (since it could be running with higher priviliges) - do
pods authors define signals the container supports (which is more like
invokable hooks), or are they arbitrary (can I find crazy posix signal
X that causes the machine to crash)?
|
I'm still kind of stuck at the point where we're special-casing a specific form of Linux IPC. I really want a cluster notification system, the last hop of which is some implementation of a node-local IPC, i.e., say I have a pod with:
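(The concrete spec in the original comment did not survive extraction; a hypothetical reconstruction of the shape being discussed, with all field names invented, might be:)

```yaml
# Hypothetical reconstruction: a probe-style notification hook plus a
# writable field and a generation counter. None of this is a real API.
spec:
  containers:
  - name: pet
    notificationHook:               # imagined, modeled on probe handlers
      exec:
        command: ["/reload.sh"]
status:
  notification:
    stdin: ""                       # what a client writes to the container
    generation: 0                   # bumped on each delivered notification
```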
and I write "foo" to stdin, the kubelet will deliver that at least once, to the one container with the hook. I then observe generation number and know that my container has received
or
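(Perhaps a second variant along these lines, equally hypothetical and also a reconstruction, where delivery is http-style rather than exec:)

```yaml
# Second imagined variant: the same notification channel, http delivery.
spec:
  containers:
  - name: pet
    notificationHook:
      httpGet:                      # imagined http-style delivery
        path: /reconfigure
        port: 8080
```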
And I still write "" to the stdin field, and poll on generation. Is that a different feature request?
I kind of agree signal is a subset of "pod notification". The security …
I would like to have a concrete way to force and guarantee a restart of a …
So all containers BUT pause need to restart? (Otherwise why not just delete the pod?)
It should be optional and probably support rollout restart; it's too dangerous to restart everything right away.
If it's too dangerous to restart containers in a pod at the same time, put them in different pods/services? Everything else in the system will treat the pod as a unit, so there's probably no avoiding this. Or are you asking for a dependency chain of restarts on containers in a pod?
Classic cases: …
In all of these cases, the invariant enforced by pod deletion is excessive.
@bprashanth: for example, we have a replication controller which has N replicas; if we are going to restart all of them at the same time (e.g. we roll out a new config change) we will affect customer traffic.
More use cases: some software can be told to increase logging or debugging verbosity with SIGUSR1 and SIGUSR2, to rotate logs, open diagnostic ports, etc.
I also view this as notification. I would prefer to expose an intentional API instead of something in terms of POSIX signals. I would take two actions to start with: …
The problems with an intentional API are: …
Also, for the record, it seems plausible that we would constrain the list of signals you could send in an API using …
I have no strong preference about the "intentional API vs POSIX" problem. The only proposition I'm opposed to is using any kind of exec or scripts as a workaround. Docker has … @pmorie sorry for my ignorance, but what exact non-Linux platforms do we have to care about in k8s? I had no luck with googling/grepping that information.
There is ongoing work to bring Kubernetes to Windows. We need to be aware of how we would address differences in those mappings.
For the record, we should be able to exec scripts from sidecars for notification delivery with shared PID namespaces, which is coming in Docker 1.12 apparently (#1615)
@smarterclayton OK, so in that case an intentional API sounds better. So, for now we're going to implement the …? Also, as far as I understand, this event bus is something which we have to implement? Sorry if I'm missing some already existing feature. If I understand the API part correctly, then I'm eager to implement this.
It is a useful way for us to control process life inside the application container in the same pod.
Came here because I was looking for a solution to have pods restart when dependent secrets change, without complicated scripts that will break in some not-so-distant future. I don't like the sidecar solution because in a microservice architecture this would require tons of resources (energy) wasted just to watch for config or secret updates.
The best solution for a microservice architecture may be to teach the service to watch for secrets and restart without delegating this responsibility to the infrastructure (Kubernetes), and/or to call a restart of another service via its API. Depending on process signals is kind of an old view of this task; use the service's API instead.
That's how it is already being done. But it's always a custom implementation. It would save time, resources, and potential bug-hunting sessions if this could be delivered as out-of-the-box functionality.
You could now use an ephemeral container for this, I think, or from a sidecar, if a Pod uses …
I think if I were making any change here, it'd be to allow a Pod to define which containers can share a process namespace. I might want to use an ephemeral container to HUP the log sidecar, but keep the main app container isolated.
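For reference, a minimal sketch of the sidecar variant described above, with placeholder image names; `shareProcessNamespace` is a real Pod field, and with it set a sidecar (or an ephemeral container) can see and signal the app's processes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: signal-demo
spec:
  shareProcessNamespace: true        # one PID namespace for all containers
  containers:
  - name: app
    image: nginx:1.25                # placeholder workload
  - name: signaler
    image: busybox:1.36              # placeholder sidecar
    command: ["sh", "-c", "sleep infinity"]
    securityContext:
      capabilities:
        add: ["SYS_PTRACE"]          # lets the sidecar inspect /proc of peers
    # From a shell in this sidecar one could run, for example:
    #   kill -HUP $(pidof nginx)     # signal the app container's process
```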
An ephemeral container for such a simple thing as sending a signal seems like overkill to me.
Friendly reminder about a key etcd feature
I think this is exactly the right way to handle this requirement.
kubernetes/enhancements#1977 is what I want. It's somewhat involved, and I would love to hear simplifications if anyone has them. Mostly it needs an owner.
@thockin - how about we close this and signpost anyone following this issue to look at kubernetes/enhancements#1977? We could even mention this issue in the KEP itself for background reading, or under “alternatives considered”.
SGTM - we don't really document the relationship between k/k issues and KEPs, but I can't see why we need both at the same time.
/close
@thockin: Closing this issue.
This is awful! If you read this thread you will realize that we asked for a way to automatically restart containers if config maps or secrets change. The idea is that the container running a specific workload is not aware of running in Kubernetes. Such a restart should happen without running an extra sidecar or operator. It's a simple requirement that would make day-to-day operations much easier. Not sure why this is so hard to understand!
This issue is specifically about sending a signal to pods from outside. Somewhere the idea of configmap changes got mixed in. I don't have a KEP in my pocket for that topic specifically (kubernetes/enhancements#948 is maybe related?), but if someone wants to discuss a proposal for it, I am open to the topic.
It would be quite an easy task to make a small service that watches k8s events for configmaps, secrets, etc. Whenever they change, it would just do a rollout restart of the services involved with that configmap/secret/whatever.
@artheus unless I am misunderstanding your proposal, there is already a popular project that does just that: https://github.com/stakater/Reloader
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label.
kubernetes/enhancements#1977 and kubernetes/enhancements#1995 propose a design for this but need an owner.
This has come up a bunch of times in conversations. The idea is that some external orchestration is being performed and the user needs to signal (usually SIGHUP, but arbitrary) pods. Sometimes this is related to ConfigMap changes (#22368) and sometimes not. This can also be used to "bounce" pods.
We can't currently signal across containers in a pod, so any sort of sidecar is out for now.
We can do it by `docker kill`, but something like `kubectl signal` is clumsy - it's not clearly an API operation (unless we revisit 'operation' constructs for managing imperative async actions).
Another issue is that a pod doesn't really exist - do we signal one container? Which one? Maybe all containers? Or do we nominate a signalee in the pod spec?
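Purely for illustration, nominating a signalee could look something like this (a hypothetical field, not a proposal):

```yaml
# Imagined "signalTarget" nomination; no such field exists in the Pod spec.
spec:
  signalTarget: app                  # container that receives cluster signals
  containers:
  - name: app
    image: example/app:latest        # placeholder image
  - name: logger
    image: example/logger:latest     # placeholder sidecar, never signaled
```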