Server side defaulting of values prevents diff patches from working #34292
I believe we have discussed this in issue #26202
Did you mean, "not easily differentiated"?
My suggested workflow is something like:
Then, at apply time, we can compute some series like this:
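(The concrete series from this comment did not survive this rendering; purely for orientation, here is a hypothetical 3-way setup in YAML that shows the inputs such a computation would work over — these are illustrative values, not the commenter's actual procedure.)

```yaml
# Hypothetical 3-way inputs for an apply:
last-applied:            # what the user sent on the previous apply
  spec:
    strategy:
      type: RollingUpdate
live:                    # what the server stores, including defaulted fields
  spec:
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1
desired:                 # the user's new config
  spec:
    strategy:
      type: Recreate
# A patch computed only from (desired - last-applied) leaves the
# server-defaulted rollingUpdate block on the live object, which is
# exactly the failure mode this issue describes:
patch:
  spec:
    strategy:
      type: Recreate
```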
You need to come up with a test suite to make sure the procedure you implement handles all the cases. I'm not convinced the above does.
@pwittrock @lavalamp Before we do anything about this, we need a design proposal PR. Just handling defaults is not sufficient. See #1178 and #17333, the apply design description in #15894, and https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md#late-initialization.

We should enumerate the problematic defaults. Label selector defaulting is obviously one. We may want to reconsider both the current defaulting approach (#14961) and allowing updates (#26202).

We've also discussed adding discriminators for all oneof/unions (e.g., in #6979). That would enable us to clear all other fields when the discriminator changed. For backward compatibility, we'd also set the discriminator value from the one set field (and continue to return an error in the case of ambiguity). Representing these unions using OpenAPI extensions would also facilitate better validation and diff'ing.

If there are other cases of multiple mutable fields set from the same value, we should consider what to do about them on a case-by-case basis. Some may have been introduced due to lack of fine-grain API versioning (#34508).

That said, we should support dry run in the apiserver, though I wasn't thinking of using it for this purpose (#12143 more generally). In the meantime,
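(To make the oneof/discriminator idea from this comment concrete, here is a hypothetical sketch, with `type` acting as the discriminator for the strategy union; the field values are made up.)

```yaml
# Hypothetical live object state after a user switches strategies.
# `type` is the union discriminator; because it changed, a
# discriminator-aware server could clear the stale rollingUpdate member
# automatically instead of rejecting the update as ambiguous.
spec:
  strategy:
    type: Recreate        # discriminator changed from RollingUpdate
    rollingUpdate:        # stale member left over from server defaulting
      maxUnavailable: 1
```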
I will draft a proposal that we use a similar approach to
I will draft another proposal for using discriminators. I guess it should be used in a version like
Thanks @ymqytw! This is really nice! I'll take a look ASAP (probably this weekend). Sorry, I was very busy these last couple of weeks.
Ref #24198
Also related: #25424
/sig api-machinery
Automatic merge from submit-queue (batch tested with PRs 50919, 51410, 50099, 51300, 50296)

Add `retainKeys` to patchStrategy for v1 Volumes and extensions/v1beta1 DeploymentStrategy.

With the new value in `patchStrategy`, the patch will include an optional directive that tells the apiserver to clear defaulted fields and update. This resolves issues like #34292 (comment) and similar issues caused by defaulting in volumes. The change is [backward compatible](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md#version-skew).

The proposal for this new patch strategy is in https://github.com/kubernetes/community/blob/master/contributors/design-proposals/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md. The implementation supporting the new patch strategy's logic is in #44597 and was merged in 1.7.

```release-note
Add `retainKeys` to patchStrategy for v1 Volumes and extensions/v1beta1 DeploymentStrategy.
```

/assign @apelisse
/assign @janetkuo for deployment change
/assign @saad-ali for volume change
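(For reference, a minimal sketch of what a strategic-merge patch using the new directive looks like, following the shape described in the linked proposal; the field values here are hypothetical.)

```yaml
# Patch for a field whose patchStrategy includes retainKeys: the
# $retainKeys directive lists every key that should survive the patch,
# so the apiserver clears siblings (e.g. a defaulted rollingUpdate)
# that are not listed.
spec:
  strategy:
    $retainKeys:
      - type
    type: Recreate
```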
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a `/lifecycle frozen` comment. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Another apply-related issue
/wg apply
Server side defaults are not easily differentiated from other values set within the kubernetes system (e.g. DeploymentStatus).
`kubectl apply` records the applied config as an annotation on the objects that it creates so that it can perform a 3-way diff to merge local changes into the live config without erasing other items such as the status.
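(For illustration, the recorded annotation on a hypothetical deployment might look like this; the annotation key is the real one `kubectl apply` uses, while the object and values are made up.)

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example
  annotations:
    # Stored by `kubectl apply` so later applies can 3-way diff
    # last-applied vs. new config vs. live object (JSON truncated here).
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"name":"example"},"spec":{"replicas":2}}
```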
Steps to reproduce:
1. Use `kubectl apply -f` to create a deployment using this config:
2. Update the config spec, adding
3. Apply the new config.
Result:
This happens because the server defaults `spec.strategy.rollingUpdate`, but the defaulting is invisible to the client and cannot be differentiated from other changes made by the server that the client should ignore (e.g. status changes).
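(Concretely, the mismatch looks like this; a hedged sketch, with the defaulted values shown being the extensions/v1beta1-era defaults as I understand them.)

```yaml
# What the client applied:
spec:
  strategy:
    type: RollingUpdate
---
# What the server stores after defaulting. The client never sent
# rollingUpdate, so a later apply that switches type to Recreate has no
# way to know these fields are safe to clear:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
```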
This can be seen for any field whose default is derived from another field, such as Deployment labels and selectors (derived from the PodTemplate labels).
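(For example, with a hypothetical manifest and the selector defaulting described above.)

```yaml
# The user omits spec.selector, so the server defaults it from the pod
# template labels. If the user later edits the template labels, apply
# cannot tell that the live selector was server-derived rather than a
# deliberate server-side change it should preserve.
spec:
  template:
    metadata:
      labels:
        app: web   # live object gets spec.selector.matchLabels {app: web}
```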
Solving this will require at least some changes on the server side. One possible solution would be to:
Thoughts / Comments / Suggestions?