kubectl: implement kubectl apply --prune #33075
Conversation
Awesome! Thanks, @mikedanese.
Force-pushed from 9745ed6 to a279a04.
Few nits on UX and readability...
cmd.MarkFlagRequired("filename")
cmd.Flags().Bool("overwrite", true, "Automatically resolve conflicts between the modified and live configuration by using values from the modified configuration")
cmd.Flags().BoolVar(&options.Prune, "prune", false, "Automatically delete missing objects from supplied config")
cmd.Flags().BoolVar(&options.Cascade, "cascade", true, "Only relevant during a prune. If true, cascade the deletion of the resources managed by pruned resources (e.g. Pods created by a ReplicationController).")
cmd.Flags().IntVar(&options.GracePeriod, "grace-period", -1, "Period of time in seconds given to pruned resources to terminate gracefully. Ignored if negative.")
Would it be more user-friendly to say "Wait forever if negative."? (Assuming I even understood this correctly.)
It's actually defaulted by the apiserver, not wait forever, but I can say something to that effect.
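For illustration only, a minimal sketch of how the flag reads on the command line (the directory and label are hypothetical):

# Give pruned resources 30 seconds to terminate gracefully.
kubectl apply -f ./addons --prune -l app=example --grace-period=30

# With the default of -1, the grace period is left to the apiserver's default.
kubectl apply -f ./addons --prune -l app=example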
cmd.MarkFlagRequired("filename")
cmd.Flags().Bool("overwrite", true, "Automatically resolve conflicts between the modified and live configuration by using values from the modified configuration")
cmd.Flags().BoolVar(&options.Prune, "prune", false, "Automatically delete missing objects from supplied config")
cmd.Flags().BoolVar(&options.Cascade, "cascade", true, "Only relevant during a prune. If true, cascade the deletion of the resources managed by pruned resources (e.g. Pods created by a ReplicationController).")
When might a user need to use this? Maybe --no-cascaded-prune would be a better name for this.
--cascade is already used in multiple other commands in kubectl. I think continuity of experience outweighs a nice descriptive flag.

"When might a user need to use this?"

This is the default behavior. When we prune a deployment, we should also prune its ReplicaSets, or they will become orphaned and unmanaged.
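To make the default concrete, a small sketch (the directory and label are illustrative, not taken from the PR):

# Default: cascade the deletion, so dependents of a pruned object
# (e.g. the ReplicaSets and Pods behind a pruned Deployment) are removed too.
kubectl apply -f ./addons --prune -l app=example

# Opting out leaves those dependents behind as orphaned, unmanaged objects.
kubectl apply -f ./addons --prune -l app=example --cascade=false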
@@ -538,6 +538,41 @@ func (b *Builder) visitorResult() *Result {
b.selector = labels.Everything()
}

// visit items specified by paths
Is this code simply moved from down below, or does it also contain some changes? It may be a good idea to make a separate PR for moving this first. Also, this looks like a ginormous func and some parts of it could be broken out.
return nil
}

selector, err := labels.Parse(options.Selector)
Something like return visitAndPruneIfNeeded(options.Selector) could be easier to read, wdyt?
👍 for separate function, but I'd rather have it return err and check it here, than what you propose.
return nil
}

type pruner struct {
To me it'd be perfectly reasonable to put this in a separate file; when I was trying to read apply.go last time, it seemed rather big and not quite structured.
@kubernetes/kubectl
@fabianofranz fyi
@@ -158,6 +188,12 @@ func RunApply(f *cmdutil.Factory, cmd *cobra.Command, out io.Writer, options *re
if err := createAndRefresh(info); err != nil {
	return cmdutil.AddSourceToErr("creating", info.Source, err)
}
if uid, err := info.Mapping.UID(info.Object); err != nil {
	glog.Infof("here")
remove
fixed
@@ -183,6 +219,12 @@ func RunApply(f *cmdutil.Factory, cmd *cobra.Command, out io.Writer, options *re
}
}

if uid, err := info.Mapping.UID(info.Object); err != nil {
	glog.Infof("here")
remove.
fixed
for _, obj := range objs {
	uid, err := mapping.UID(obj)
	if err != nil {
		glog.Infof("here")
+1
fixed

name, err := mapping.Name(obj)
if err != nil {
	glog.Infof("here")
+1
fixed
Force-pushed from 53498d9 to 05d68d2.
It seems like the
Or should the proper way to do it be to utilize the labels?
@MrHohn can you show me the config you used? You likely want a label selector that only matches the top-level objects you want to manage, e.g.
And add k8s-managed-addon=true to the labels of the top-level objects you want to manage, but not to the pods.
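For instance, a sketch of such an invocation (the addons path is hypothetical; the label is the one suggested above):

# Only top-level objects carry the label, so only they are managed and pruned;
# the Pods they create are left to their controllers.
kubectl apply --prune -l k8s-managed-addon=true -f /etc/kubernetes/addons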
Sure. Config as below:
I see now. But why not make the label mandatory for --prune? If the label is not set, it will prune innocent resources?
An empty label selector is valid (it matches everything), which is the default. I think we should add a flag
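A sketch of the contrast being discussed (paths and labels are illustrative; this reflects the behavior described here, before any safeguard flag exists):

# No selector: the empty selector matches everything, so any previously
# applied object in the namespace that is missing from the configs is pruned.
kubectl apply --prune -f ./addons

# Scoped: only objects carrying the label are considered for pruning.
kubectl apply --prune -l k8s-managed-addon=true -f ./addons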
if err := p.prune(api.NamespaceNone, m, shortOutput); err != nil {
	return fmt.Errorf("error pruning objects: %v", err)
}
}
Is using visitedRESTMappings here enough to cover all cases?
Given a YAML file which had a ReplicationController before, and now it is replaced with a Deployment: if we need to prune the RC, I think this logic will fail? Because the RC resource kind will not be visited this time -> RCs will not be listed by the pruner -> the existing RC fails to be pruned.
Good point. I think we might need to add a --types flag to explicitly override visitedRESTMappings.
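A sketch of the scenario in question (file names and the label are illustrative):

# Day 1: the directory contains kube-dns-rc.yaml, a ReplicationController.
kubectl apply --prune -l k8s-managed-addon=true -f ./addons

# Day 2: the RC manifest is replaced by kube-dns-deployment.yaml, a Deployment.
# Only the Deployment's REST mapping is visited on this run, so the stale RC
# is never listed by the pruner and is left behind.
kubectl apply --prune -l k8s-managed-addon=true -f ./addons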
General comment - this will result in the canonical result of apply no longer living in source control, since the label selectors define what will be deleted. This seems very error-prone - running apply on a subdirectory with prune will cause it to delete all sorts of things. At a minimum I think we need to print out what will be deleted and force the user to confirm.

Review status: 0 of 3 files reviewed at latest revision, 14 unresolved discussions, some commit checks failed.

pkg/kubectl/cmd/apply.go, line 94 at r4 (raw file):
Help message is a bit confusing - I read it as "it modifies the configs to remove objects that are not found in the server". Maybe "Automatically delete resource objects that do not appear in the configs."

pkg/kubectl/cmd/apply.go, line 95 at r4 (raw file):
Ug. Does the garbage collector handle this for us yet?

pkg/kubectl/cmd/apply.go, line 98 at r4 (raw file):
What does an empty value mean? Delete everything by default? We probably shouldn't permit using the default in that case.

pkg/kubectl/cmd/apply.go, line 318 at r4 (raw file):
Consider pulling this whole logic into its own function, e.g. 'Delete', and have it figure out how to delete the thing.

Comments from Reviewable
Review status: 0 of 3 files reviewed at latest revision, 15 unresolved discussions, some commit checks failed.

pkg/kubectl/resource/visitor.go, line 644 at r4 (raw file):
Add comments for all this stuff to say what it does.

Comments from Reviewable
@bgrant0607 @ghodss can you respond to @pwittrock's above comment?
Force-pushed from 05d68d2 to 8e8f5ff.
Jenkins unit/integration failed for commit a344d1d. Full PR test history. The magic incantation to run this job again is
But is there a way to do so if the user does want to prune those resources? Again, take the Addon Manager as a use case. After upgrade, the Addon Manager may try to use
Force-pushed from a344d1d to 2eebb30.
Force-pushed from 2eebb30 to 62960aa.
JSON and YAML formats are accepted.`)
JSON and YAML formats are accepted.

Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current state is. See https://issues.k8s.io/34274.`)
LGTM
We should add a test that makes sure that when --prune is not supplied, pruning doesn't actually happen. It would also be good to test that --all is actually required when no label selectors are used.
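A rough sketch of the kind of checks meant, in shell form (file names are hypothetical):

# Pruning must only happen when --prune is given: apply a.yaml and b.yaml,
# then re-apply a.yaml alone without --prune; b.yaml's objects must survive.
kubectl apply -f a.yaml -f b.yaml
kubectl apply -f a.yaml
kubectl get -f b.yaml                  # expected: still present

# --prune with no label selector should require an explicit --all.
kubectl apply --prune -f a.yaml        # expected: rejected
kubectl apply --prune --all -f a.yaml  # expected: allowed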
Reviewed 3 of 3 files at r6, 1 of 4 files at r9, 3 of 3 files at r10. Comments from Reviewable
Reviewed 1 of 3 files at r1, 1 of 2 files at r5, 3 of 3 files at r11. Comments from Reviewable
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue
There are a few outstanding review comments that need to be fixed in followups.
@MrHohn Deletion of resources that were never managed by kubectl apply is not the responsibility of kubectl apply. The addon manager will need to deal with that during the transition from the previous approach to the new approach.
@bgrant0607 Thanks for the reply. Yes, in theory I'm now thinking about faking the
Automatic merge from submit-queue

Upgrade addon-manager with kubectl apply

The first step of #33698. Use `kubectl apply` to replace addon-manager's previous logic.

The most important issue this PR is targeting is the upgrade from 1.4 to 1.5. Procedure as below:

1. Precondition: After the master is upgraded, the new addon-manager starts and all the old resources on nodes are running normally.
2. Annotate the old ReplicationController resources with kubectl.kubernetes.io/last-applied-configuration=""
3. Call `kubectl apply --prune=false` on the addons folder to create new addons, including the new Deployments.
4. Wait one minute for the new addons to be spun up.
5. Enter the periodic loop of `kubectl apply --prune=true`. The old RCs will be pruned at the first call.

Procedure of a normal startup:

1. Addon-manager starts and no addon resources are running.
2. Annotate nothing.
3. Call `kubectl apply --prune=false` to create all new addons.
4. No need to explain the rest.

Remaining issues:

- Need to add a `--type` flag to `kubectl apply --prune`, mentioned [here](#33075 (comment)).
- This addon manager is not working properly with the current Deployment heapster, which runs [addon-resizer](https://github.com/kubernetes/contrib/tree/master/addon-resizer) in the same pod and changes resource limit configuration through the apiserver. `kubectl apply` fights with the addon-resizers. Maybe we should remove the initial resource limit field in the configuration file for this specific Deployment, as we removed the replica count.

@mikedanese @thockin @bprashanth

---

Below are some logical things that may need to be clarified; feel free to OMIT them as they are too verbose:

- For the upgrade, the old RCs will not fight with the new Deployments during the overlap period, even if they use the same label in the template:
  - The Deployment will not recognize the old pods because it needs to match an additional "pod-template-hash" label.
  - The ReplicationController will not manage the new pods (created by the Deployment) because of the [`controllerRef`](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/controller-ref.md) feature.
- As we are moving all addons to Deployments, all old RCs would be removed. Attaching an empty annotation to the RCs is only for letting `kubectl apply --prune` recognize them; the content does not matter.
- We might need to also annotate other resource types if we plan to upgrade them in the 1.5 release:
  - They don't need this fake annotation if they keep the same name; `kubectl apply` can recognize them by name/type/namespace.
  - In the other case, attaching empty annotations to them will still work. As the plan is to use a label selector for annotate, some innocent old resources may also get empty annotations attached; they work as in the two cases below:
    - Resources that need to be bumped up to a newer version (mainly due to some significant update, i.e. changing disallowed fields, that could not be managed by the update feature of `kubectl apply`) are good to go with this fake annotation, as old resources will be deleted and new resources will be created. The content in the annotation does not matter.
    - Resources that need to stay under the management of `kubectl apply` are also good to go, as `kubectl apply` will [generate a 3-way merge patch](https://github.com/kubernetes/kubernetes/blob/master/pkg/util/strategicpatch/patch.go#L1202-L1226). This empty annotation is harmless enough.
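A condensed sketch of that upgrade sequence in shell (the paths, label, and interval are illustrative; this is not the actual addon-manager script):

# Step 2: mark the old RCs so kubectl apply --prune will recognize them.
kubectl annotate --overwrite rc -l kubernetes.io/cluster-service=true \
  kubectl.kubernetes.io/last-applied-configuration=""

# Step 3: create the new addons (including the new Deployments) without pruning.
kubectl apply -f /etc/kubernetes/addons --prune=false

# Step 4: give the new addons time to spin up.
sleep 60

# Step 5: periodic reconcile loop; the old RCs are pruned on the first pass.
while true; do
  kubectl apply -f /etc/kubernetes/addons --prune=true -l kubernetes.io/cluster-service=true
  sleep 60
done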
I don't understand this - apply --prune should always only delete objects with the last-applied-configuration annotation, right? So how would it delete things that it shouldn't?
Correct me if I am wrong on either of these:

Case 1: Applying a sub-directory accidentally. IIUC this will delete everything from the root config dir except the things in this sub dir:

cd /root/config/dir
kubectl apply -f ./
# Sometime later...
cd /root/config/dir/sub
kubectl apply -f .

Case 2: Applying 2 different directories in the same namespace. IIUC this will delete everything owned by team 1.

Am I misunderstanding the results of these actions?
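For concreteness, a sketch of the second case (directory names are purely illustrative; both teams prune into the same namespace):

# Team 1 applies and prunes from its own directory.
kubectl apply --prune -f /root/config/team1
# Later, team 2 prunes from a different directory in the same namespace.
kubectl apply --prune -f /root/config/team2
# With no label selector separating the two sets, team 1's previously
# applied objects are absent from team 2's configs and become prune candidates.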
@pwittrock I assume that you meant those commands to specify

You are not wrong. However:

Your first case is just user error. I'd expect different teams to use different namespaces. That was the original motivation for namespaces. There are many ways users can step on each other when multiple teams share the same namespace.

I'd recommend users use prune at the namespace level, without a label selector. However, I'd like all commands to support filtering via label selector, for additional control.

Additionally, this is not the only case of kubectl commands depending on command-line arguments. All of the generators do, as well: run, create secret, create configmap, etc. Users have asked for declarative (apply-compatible) ways of specifying such commands, despite the fact that it's easy to work around in most cases, such as using a script, make, nulecule, or some other tool. We'll need to decide whether to support that. I added a bullet in the roadmap about this a while ago. Perhaps the Kubefiles idea could be extended to support this if it looks compelling enough.
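A minimal sketch of that recommendation (namespaces and paths are illustrative; --all is the flag discussed above for pruning without a label selector):

# Each team prunes only within its own namespace, with no label selector.
kubectl apply --prune --all -f /root/config/team1 --namespace=team1
kubectl apply --prune --all -f /root/config/team2 --namespace=team2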
Yup I meant
The consequences of the first user error could be quite severe. IMHO it is plausible that a user executes an apply command thinking they are in one directory when they are in fact in another directory. I think it would be pretty simple to require a safeguard. Example workflow:

kubectl apply init /root/config/dir --config-annotation "project=myproject"
# Do some work
kubectl apply -f /root/config/dir
# All resources created with annotation "project=myproject"
# Delete some resources...
kubectl apply --prune -f /root/config/dir
# All resources with the "project=myproject" annotation are deleted.

No messy label selectors need to be specified.
Closes #19805
Depends on #32599
@errordeveloper @lukemarsden