KEP-1977: Add KEP for ContainerNotifier #1995
Conversation
CC @yuxiangqian
/assign @thockin @saad-ali @liggitt
/assign @sjenning @derekwaynecarr @dchen1107
```
// ContainerNotifier describes a command to run in the container.
type ContainerNotifierAction struct {
	// handler describes how this notifier is delivered to the container.
	Handler ContainerNotifierHandler
```
Nesting-wise this is a bit weird.
```
Pod {
  Spec: {
    Containers: [{
      Notifiers: [{
        Name: foo
        Action: {
          Handler: {
            Exec: {
              Command: bar
            }
            TimeoutSeconds: 10
          }
        }
      }]
    }]
  }
}
```
I think we can get rid of one level by moving timeout to ContainerNotifier:
```
Pod {
  Spec: {
    Containers: [{
      Notifiers: [{
        Name: foo
        TimeoutSeconds: 10
        Action: {
          Exec: {
            Command: bar
          }
        }
      }]
    }]
  }
}
```
I know this is trying to look like Probe, but that embeds Handler, which is sort of problematic (@apelisse re one-of) I think.
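To make that concrete, here is a minimal Go sketch of the flattened shape, assuming the existing core/v1 ExecAction type; all names here are illustrative, not the KEP's final API:
```go
import corev1 "k8s.io/api/core/v1"

// ContainerNotifier sketches the flattened shape: TimeoutSeconds moves up a
// level and the Handler wrapper disappears.
type ContainerNotifier struct {
	// Name identifies this notifier within the container.
	Name string
	// TimeoutSeconds bounds how long the action may run.
	TimeoutSeconds int32
	// Action holds the one-of handler fields directly, with no embedded Handler.
	Action ContainerNotifierAction
}

// ContainerNotifierAction holds the one-of fields without an embedded Handler.
type ContainerNotifierAction struct {
	// Exec specifies a command to run inside the container.
	// +optional
	Exec *corev1.ExecAction
}
```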
Sure.
NotificationStatus represents the current state of a Notification. It has a list of PodNotificationStatuses.

A PodNotificationStatus represents the current state of a Notification for a specific pod. A PodNotificationStatus contains ContainerNotificationStatuses of all the containers which have the corresponding ContainerNotifier specified. PodNotificationStatus is created when a Notifier starts to run in a pod.
This is going to be a problem. Imagine a notifier that selects 1000 pods. That's not implausible. Even 5000 is believable. That leaves you just a few hundred bytes per pod for status, before it starts failing (too big). Worse, it will fail in non-deterministic ways in practice.
I think we have a few choices here, and we should seek a pattern precedent (@liggitt for consideration):

- Notification.status is a "summary". The real status is stored in another new resource, NotificationResult, which is selected by something like "notification.kubernetes.io/uid=". Each NotificationResult holds the status for one single pod-notification. Need to define the lifecycle management of this so we don't leave them around for too long.
- Notification.status is a "summary". The real status is stored in each pod, e.g. in status.notifications[], which would be a list of "recent" notifications and the per-container statuses.
- Other ideas?
TL;DR: we can't have (practically) unbounded lists in a single resource.
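As a strawman for the first option, a NotificationResult might look something like this Go sketch; every type and field name here is an assumption, not a settled API:
```go
import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NotificationResult sketches option 1: one object per pod-notification pair.
// It would carry a label like "notification.kubernetes.io/uid" so all results
// for one Notification can be fetched with a single label-selector list.
type NotificationResult struct {
	metav1.TypeMeta
	metav1.ObjectMeta

	// PodRef identifies the single pod this result covers.
	PodRef corev1.ObjectReference
	// Containers holds the per-container outcomes for that pod, keeping each
	// object small and bounded no matter how many pods the selector matched.
	Containers []ContainerNotificationStatus
}
```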
Ok, I'll wait for more suggestions.
Are there any existing examples that I could follow?
EndpointSlice is similar - I am not sure there's a closer example.
NotificationStatus represents the current state of a Notification. It has a list of PodNotificationStatuses.
when we talked about this in doc form, I thought notification was scoped to a single pod in order to allow notifying in waves and avoid unbounded status accumulation ... did that approach get revised since then?
So that means we can't have a PodSelector in the NotificationSpec. It will just be a PodName instead.
@thockin would you consider having one to one mapping between a Notification and a Pod (having a PodName in NotificationSpec)? That would simplify status update.
Just read the discussion in the doc; I think notification-per-pod makes the most sense and seems safer. Also, the idea of building on top with a notification controller for safe rollout of notifications is even better.
If we go to 1:1, do we need a higher-level API that selects pods and expands a notification for each of them? Or should we add that to kubectl? Or should we just say "someone else's problem"?
If we go 1:1 does a subresource make more sense? It gives even finer RBAC control, but we need the result to be a new resource (for status). E.g. POST to /pods/foobar/notify -> creates a PodNotifier; kubelet watches PodNotifiers. Any precedent for that? Any need to get finer-grained than per-namespace?
I like the subresource idea, which I was hinting at here: https://github.com/kubernetes/enhancements/pull/1995/files#r493239279. I guess I don't follow why it should result in a new PodNotifier being created? For example, does a POST on the log or exec subresource result in a new object? The main reason I like this is that it means fewer new objects to manage and less strain on etcd, since the notifications are an array. It does make the RBAC more fine-grained. Thinking out loud, if I am not mistaken, you can say a service account can or cannot exec (subresource: pods/exec) into any pods in a namespace by specifying a RoleBinding referencing a Role, so something similar should be possible with this notify subresource.
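For illustration, RBAC for such a subresource could mirror how pods/exec is gated today. This Go sketch uses the real rbac/v1 types, but the "pods/notify" subresource itself is an assumption:
```go
import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// notifyRole grants "create" on a hypothetical pods/notify subresource,
// analogous to how pods/exec is commonly gated per namespace.
var notifyRole = rbacv1.Role{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-notifier", Namespace: "default"},
	Rules: []rbacv1.PolicyRule{{
		APIGroups: []string{""},
		Resources: []string{"pods/notify"}, // assumed subresource name
		Verbs:     []string{"create"},
	}},
}
```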
All containers in a selected pod which have a "ContainerNotifier" defined in their spec with the "ContainerNotifierName" will have their corresponding "ContainerNotifierAction" executed. Moreover, there is no guarantee of ordering on execution.

```
type Notification struct {
```
We need to define what happens in this case:
time 0: pods A, B, C are running, labelled foo=bar
time 1: create Notification selecting foo=bar
time 2: all notifiers in A, B, C are done
time 3: pod D is created, labelled foo=bar
*** Do we run the notifier on D? If so, for how long does that apply? If not, how do we prevent it?
We have said that Kubelet (container notifier controller) will not do retries. So if that is the case, Kubelet (container notifier controller) will only process the pods matching the label when the Notification object is first created? If so, Pod D will not be processed?
Is this a namespaced object? Once it's created, it should apply to everything until the API object is deleted.
Is this more of an imperative API, or do we want it to be declarative (I guess the latter)?
I am thinking about what the declaration means:
- notify all containers that match this selector at this time t
- let there be notification for all pods matching this selector as long as this API exists
- I want all containers of the pods matching a selector to be quiesced (this seems more natural)

It's a bit confusing to reason about, since the lifecycle hooks are tied to the lifecycle of the container/pod and are OK to be imperative, but this API is not tied to the lifecycle and is more of a command to the containers.
@krmayankk is on the issue.
This is a pseudo-imperative operation.
Kubelet (container notifier controller) will only process the pods matching the label when the Notification object is first created
That seems reasonable at first blush, but what is the mechanism for that? Is the rule "only applies to pods whose metadata.creationTimestamp is less than the Notification's metadata.creationTimestamp"? If so, let's document it. Also: less-than or less-than-or-equal?
Does this satisfy the higher-order controller's needs? Is it plausible that a quiesce notification could be created, a snapshot started, then a new pod shows up which is not quiesced? Even if Notifications applied to it, that would be async, so it still seems bad?
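A strawman of that rule in Go, assuming the strict less-than reading and a Notification type that embeds metav1.ObjectMeta (the helper name is hypothetical):
```go
import corev1 "k8s.io/api/core/v1"

// shouldNotify encodes the strawman rule: a Notification applies only to pods
// created strictly before it. Whether this should be Before (strictly less)
// or less-than-or-equal is exactly the open question above.
func shouldNotify(pod *corev1.Pod, n *Notification) bool {
	return pod.CreationTimestamp.Before(&n.CreationTimestamp)
}
```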
@thockin using CreationTimestamp can certainly be one option to rule out newly created pods, but it does not cover the case when a pod is evicted or dies during hook execution. From the quiescing requirements perspective, it's critical to have a point-in-time "fixed" scope for snapshotting, thus if a pod dies during the notification, we might want to fail the whole notification.
We have a couple of other potential options here:
1. Make notification-to-pod one-to-one by replacing podSelector with a pod definition. A higher-level controller like an application snapshot controller can then have a pre-selected list and create one notification for each desired pod.
   Advantages:
   - simpler API
   - ordering of notifications is possible and more convenient to achieve for higher-level controllers
   Disadvantages:
   - probably does not fit well with the signal case, where multiple pods need to be notified without a higher-level controller
2. Introduce a list of resolved pods in Notification.Spec which will be immutable once resolved. Pod selection is done prior to any real execution of the notification, and that list would be the source of truth for the scope of execution. This might cause issues when the resolved list is too big (~k).
WDYT?
This is an interesting problem. If a pod dies mid-snapshot, do you fail the whole snapshot? If a new pod starts mid-snapshot (scale-up?), does it need to quiesce? Maybe that just "doesn't happen"?
Your (1) happens to ALSO solve the status cardinality problem, so is attractive in that regard. It means the scale of things kubelet has to watch just got much bigger, though, unless we set labels to scope it (e.g. set the node-name as a label, let kubelet watch only that).
But it is ugly for humans.
Another approach would be to define Notifications as applying to all pods. In the example above it would apply to D. And E, F, G etc until the Notification is removed. All the controller (or kubelet) could guarantee would be that every Notification that selects a given pod would be attempted once, and if there were multiple notifications, they would be attempted in some deterministic order (e.g. creationTimestamp or RV).
So to expand the example:
t0: pods A, B, C are running, labelled foo=bar
t1: create Notification N1 selecting foo=bar
t2: N1 notifiers in A, B, C are done
t3: pod D is created, labelled foo=bar
t4: N1 notifier in D is done
t5: create Notification N2 selecting foo=bar
t6: N2 notifiers in A, B, C, D are done
t7: pod E is created, labelled foo=bar
t8: N1 notifier in E is done
t9: N2 notifier in E is done
t10: delete Notification N1
t11: pod F is created, labelled foo=bar
t12: N2 notifier in F is done ## note N1 was not executed
t13: delete Notification N2
This seems maybe workable, but I need you to look at it from your use-cases. Bear in mind that rapid sequences can be observed out of order by watchers, so if your plan would be to delete N1 then create N2 you need some time in between to know that all kubelets have observed the deletion (or ensure that it is "safe" to trigger N1 on new pods). Similarly, if you want to clean up N1 and N2 you must delete them in order with sufficient time between to ensure no out-of-order deletion problems.
We don't have a "serializing barrier" operation.
Regarding "ugly for humans", maybe we need both? Individual pod-notifications (and status) which controllers can orchestrate PLUS a selector-notification which a built-in controller would expand into pod-notifications? This represents a broader API change, but is the more flexible model.
Thoughts?
Regarding your (2) - does that satisfy requirements? Let me see if I get it right:
t0: pods A, B, C are running, labelled foo=bar
t1: create Notification N1 selecting foo=bar
t2: a controller evaluates foo=bar one time and notes the result (A, B, C)
t3: N1 notifiers in A, B, C are done
t4: pod D is created, labelled foo=bar
t5: N1 notifier in D is NOT executed
Is that right? How does that map to the use-cases?
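If the "applies to all pods" model were adopted, the deterministic ordering could be as simple as this sketch, assuming Notification embeds metav1.ObjectMeta (creationTimestamp shown; resourceVersion is the other candidate mentioned):
```go
import "sort"

// orderNotifications attempts notifications in a deterministic order per pod,
// here oldest-first by creationTimestamp, so every kubelet/controller that
// observes the same set of Notifications agrees on the attempt order.
func orderNotifications(ns []Notification) {
	sort.SliceStable(ns, func(i, j int) bool {
		return ns[i].CreationTimestamp.Before(&ns[j].CreationTimestamp)
	})
}
```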
Applying to all pods for as long as the notification exists makes more sense from an API point of view. Also, including the pod definition would be messy.
@krmayankk is on the issue.
??
This is a pseudo-imperative operation.
Is that OK to have? Do we need to update any API convention guidelines on when a pseudo-imperative API makes more sense?
For the quiesce use case, a higher-level backup controller needs to decide what pods are included. When a backup is started, the backup controller needs to figure out all the pods that need to get the quiesce command. It can retrieve this information using a PodSelector. After that, it is a fixed list of Pods that need to be quiesced for this particular backup. Therefore, we can't have a PodSelector in Notification. It should be either an immutable List of Pods or a Pod name for a single pod, basically options 2 and 1 suggested by Xiangqian.
If we decide to have a PodList in NotificationSpec, the steps would be like this (sketched below):
1. An external backup controller determines what pods need to receive the notification by using a PodSelector (this depends on backup controller logic).
2. The backup controller creates a Notification object with a PodList of the selected pods.
This way we have a concrete list of Pods. If quiesce fails for any pod/container, it should be treated as a failure for the whole quiesce operation. Typically the backup software gives the user an option to complete a backup that is not quiesced successfully, or to fail it if any quiesce action fails. If the user chooses to do a backup even if quiesce fails, the backup controller will trigger an un-quiesce command to undo anything that is already quiesced, but skip doing quiesce for the remaining pods/containers. Then the backup controller will take a snapshot and mark that snapshot as not-quiesced. The user can decide what to do with a snapshot that is not quiesced.
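A hypothetical Go sketch of that NotificationSpec variant; the field names are assumptions, not the KEP's API:
```go
// NotificationSpec, in the PodList variant: the backup controller resolves
// the pod set up front (e.g. via its own selector) and the list is immutable
// once the Notification is created. No PodSelector field exists here.
type NotificationSpec struct {
	// PodNames is the fixed, pre-resolved set of pods to notify.
	PodNames []string
	// ContainerNotifierName selects which notifier to run in each container.
	ContainerNotifierName string
}
```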
we can’t have PodSelector in Notification
I'm not sure I buy that. This API needs to be general enough to be used by humans. We need to be clear about what happens if a new pod is created that satisfies the criteria. Don't over-focus on just quiesce.
```
// +optional
// define constants for signals?
// validate the signals are valid? windows?
Signal string
```
Do we need to start with signal in phase 1 and then add exec later, if that is necessary?
Support for signal requires changes to CRI. That means it can only happen in Kubelet. Since we are starting with a separate controller, we can't start with signal.
Also support for exec is an important use case (the quiesce use case for application consistent snapshot). So support for exec should be added in phase 1.
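To illustrate the phasing, a sketch of the one-of action with Signal deferred; the field names follow the diff above, but the exact shape is an assumption:
```go
import corev1 "k8s.io/api/core/v1"

// ContainerNotifierAction as a one-of: Exec ships in phase 1; Signal is
// deferred to phase 2 because delivering a signal requires CRI changes and
// can therefore only be implemented in the kubelet.
type ContainerNotifierAction struct {
	// Exec runs a command in the container (phase 1).
	// +optional
	Exec *corev1.ExecAction
	// Signal names a signal to send to the container's main process
	// (phase 2; requires CRI support).
	// +optional
	Signal string
}
```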
We should not go GA (and maybe not beta) without phase 2, but it is phased for a reason.
### Kubelet Impact in Phase 2 and Beyond

When moving the logic from the container notifier controller to Kubelet, there will be impact on Kubelet.
It's not clear from this proposal why this is starting with a separate controller (and also not clear whether it's separate or part of one of the existing controllers) and then later moving to kubelet.
This will start as a separate controller in a new repo, and later on the logic will be moved to kubelet.
This is because there are concerns from sig-node on potential impact on kubelet.
An abundance of caution
### Phase 1

Phase 1 API definition will be added to Kubernetes. Implementation in this phase will not happen in Kubelet. Instead, it will be implemented in a single *trusted* controller which acts on Notifications via exec (with the goal being to move that into Kubelet in the next steps). In this phase, only exec will be included and signal will not be included, as that involves changes to CRI.
So, a separate external controller running per node (DaemonSet)? In order to run the exec commands, will it exec into the pods?
Yes.
not needed per-node, just per-cluster
CC @andrewsykim
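For concreteness, a per-cluster controller could deliver an exec notifier with client-go's remotecommand package, essentially performing a pods/exec call; this is a sketch of the mechanism under those assumptions, not the KEP's implementation:
```go
import (
	"context"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// runNotifier delivers one exec-based notifier by streaming a pods/exec call,
// which is the only node access the trusted phase-1 controller needs.
func runNotifier(ctx context.Context, cfg *rest.Config, cs kubernetes.Interface,
	namespace, pod, container string, command []string) error {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(namespace).Name(pod).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   command,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return err
	}
	// Discard output here; a real controller would capture it into status.
	return exec.StreamWithContext(ctx, remotecommand.StreamOptions{
		Stdout: io.Discard,
		Stderr: io.Discard,
	})
}
```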
```
}

// ContainerNotifier describes a command to run in the container.
type ContainerNotifierAction struct {
```
I was proposing that we get rid of this layer of nesting:
```
Pod {
  Spec: {
    Containers: [{
      Notifiers: [{
        Name: foo
        TimeoutSeconds: 10
        Action: {  # note no "handler"
          Exec: {
            Command: bar
          }
        }
      }]
    }]
  }
}
```
yes, agreed
+1
ping
```
// Names are label-style, with an optional prefix. Names without the prefix
// such as `quiesce` are reserved for Kubernetes-defined "well-known" names.
// Names with a prefix such as `example.com/label-style` are custom names.
ContainerNotifierName string
```
minor: This is a needlessly long name. Just Notifier or NotifierName perhaps?
It would be good to think about who can send notifications, and what permissions they need on a pod. Since the controller is doing exec, does that mean that as long as exec is open for the controller, anyone can create and run commands using the controller as a proxy?
Another idea: should we make notifier a subresource?
RBAC governs who can create Notifications in a namespace - isn't that enough?
```
# The milestone at which this feature was, or is targeted to be, at each stage.
milestone:
  alpha: "v1.20"
  beta: "v1.21"
```
I think this is too aggressive given the amount of work.
These milestones need to be updated.
Corner cases are so much fun
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
BTW, kubernetes/kubernetes#24957 has some relevant, related discussion.
That's a good idea - @xing-yang any objections?
@thockin Sure, let's try that. Thanks.
Can you make the proposed change and drop the PRR file? Then we can merge as provisional.
1. add ttl, PodUid, ContainerID 2. clarification on InitContainers/EphemeralContainers in API 3. clarification on PodPhase 4. clarification on Container State
Hi @thockin, I've dropped the PRR file and changed the status to "provisional".
Thanks!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: ruiwen-zhao, thockin, xing-yang. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@dchen1107 can you approve this as "provisional"? Nobody is working on it but I don't want to lose state.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
I'm just going to close this, and if anyone ever gets time, we can reopen.
Enhancement issue: #1977