Proposal: versioned DaemonSet #42917
Comments
I don't think the Deployment->ReplicaSet abstraction applies to what you are asking for, because Deployments manage upgrades for ReplicaSets whereas DaemonSets manage their own upgrades. This issue sounds like an A/B deployment that cares about topology. I don't think a new controller type is needed; we may be able to specify the mapping between a specific DS version and a node version in the existing DaemonSetSpec. And of course DS history is a prerequisite.
Yeah, this sounds like what we called step 2 of DaemonSet updates. As kargakis notes, that will be part of DaemonSets itself, and the hope was to address it in 1.7.
Related: #6086
@piosz My GitHub notifications go to /dev/null, so please notify me outside of GitHub if you'd like me to look at something.

@smarterclayton Is "step 2" documented somewhere?

Right now, changing the node selector or other scheduling constraints in the pod template affects all pods of the DaemonSet. If we only wanted the new constraints to affect updated pods, I believe that would be a change in semantics. That may be fine, but we'd need to think it through. If we do that, I believe the original passive update policy would enable DaemonSet updates to be driven either by destructive node updates (delete and add the node) or by in-place node updates that also update node labels.
Just to be clear, every update to the pod template will affect all pods.
I don't think this is any different from enabling A/B Deployments, but I haven't yet thought through the implications of a node selector that changes between versions.
This is possible today because the DS is reconciled on any of these events.
Step 2 was in the DaemonSet upgrade proposal. We punted the bulk of the issue to 1.7 and need to amend the proposal to cover these issues.
@bgrant0607 ack
I think that for OnDelete to be a truly passive strategy ("spin up the same pod that got killed on a node"), we need this issue fixed.
I can imagine this being a mapping of ControllerRevisions to NodeSelectors.
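A purely hypothetical sketch of what that could look like in the spec (the revisionNodeSelectors field and the node-version label are invented for illustration; no such field exists in the DaemonSet API):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  updateStrategy:
    type: OnDelete
  # Hypothetical field: run the pod template recorded in a given
  # ControllerRevision on the nodes matching each selector, instead
  # of rolling every node to the latest template.
  revisionNodeSelectors:
    - revision: 3              # older recorded ControllerRevision
      nodeSelector:
        node-version: "1.5"    # operator-applied node label (assumed)
    - revision: 4              # latest ControllerRevision
      nodeSelector:
        node-version: "1.6"
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: registry.example.com/fluentd:2.0   # placeholder image
```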
There are some examples in #23233 (add-on umbrella issue). Sometimes node-level orchestration, such as applying node labels and/or taints, is necessary in conjunction with this. The labels represent properties such as the kubelet release version, features enabled, OS-level initialization, removal of static pod manifests, etc. It might sometimes be necessary to drain nodes prior to update (which effectively happens in the case of destructive node upgrades). If a security patch or other critical fix were necessary, an administrator might need to roll out a new release of a daemon for a particular set of nodes. I do not want to create any controllers with multiple pod templates, however.
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale

At minimum, the pattern of using node selectors to coordinate DaemonSet upgrades with node upgrades should be documented.
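To make that concrete, a minimal sketch of the coordination point, assuming an operator-applied fluentd-version label on each node (the label name is invented; any label works):

```yaml
# Node-upgrade tooling flips this label after upgrading the node; the
# DaemonSet whose node selector no longer matches removes its pod, and
# the DaemonSet for the new version schedules a replacement.
apiVersion: v1
kind: Node
metadata:
  name: node-1
  labels:
    fluentd-version: "1"   # flip to "2" once this node is upgraded
```

Flipping the label (for example with `kubectl label node node-1 fluentd-version=2 --overwrite`) is what moves the node from one DaemonSet's node selector to another's; the matching DaemonSet manifests are sketched under the original proposal below.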
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
A pod running as a DaemonSet is usually responsible for some operations on its node. With new releases of node components, the environment for those operations can change. Since the DS is usually updated together with the master, while we are committed to supporting a few past node versions for backward compatibility, it would be great to have support for running different versions of the agent on different versions of the node.
For example, I'd like to run a different version of fluentd for each node version.

Currently this can be done by creating 3 DaemonSets separately, each targeting a different node version, as sketched below (though it's not obvious how to do it; see #42840). This also generates additional maintenance overhead and general confusion (why do I need a DS for version 1.5.x when I don't have any nodes at that version?).
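A sketch of that workaround, with invented version numbers and the same assumed operator-applied fluentd-version node label as above (only one of the parallel DaemonSets is shown; the others differ only in the label value and image):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-v1
spec:
  selector:
    matchLabels:
      app: fluentd
      variant: v1
  template:
    metadata:
      labels:
        app: fluentd
        variant: v1
    spec:
      nodeSelector:
        fluentd-version: "1"   # only nodes labeled for v1 run this pod
      containers:
        - name: fluentd
          image: registry.example.com/fluentd:1.0   # placeholder image
```

Each additional supported node version needs another near-identical manifest, which is exactly the maintenance overhead described above.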
The issue could be addressed by introducing an abstraction on top of DaemonSet that manages DaemonSets underneath it, in a manner similar to how a Deployment currently manages a set of ReplicaSets.
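A purely illustrative sketch of what such an umbrella object could look like (the kind name, API version, and variants field are all invented; this shows the shape of the idea, not a real API):

```yaml
apiVersion: apps/v1alpha1      # invented for the sketch
kind: VersionedDaemonSet       # invented kind
metadata:
  name: fluentd
spec:
  # Each variant would be materialized as a child DaemonSet, the way a
  # Deployment materializes ReplicaSets. A variant whose selector matches
  # no nodes would simply run zero pods instead of needing manual cleanup.
  variants:
    - name: v1
      nodeSelector:
        node-version: "1.5"    # operator-applied node label (assumed)
      template:
        spec:
          containers:
            - name: fluentd
              image: registry.example.com/fluentd:1.0
    - name: v2
      nodeSelector:
        node-version: "1.6"
      template:
        spec:
          containers:
            - name: fluentd
              image: registry.example.com/fluentd:2.0
```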
@bgrant0607 @erictune @janetkuo @kubernetes/sig-apps-feature-requests
cc @fgrzadkowski @crassirostris