Fall back from PATCH to PUT on apply and edit #1351
Comments
@thockin: This issue is currently awaiting triage. SIG CLI takes the lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@thockin What do you have in mind here? Would the apiserver be able to tell us that we need to retry with PUT? And what would that look like? /triage needs-information
In theory, we could add it to something like the OpenAPI, so that you could look at a patch and say "oh, this won't work, let's fall back on PUT", or even just throw an error like "apply uses PATCH and this operation modifies a field which doesn't like to be patched - consider …". Before we go designing that API machinery, I wanted to see if I was just being dumb and there's some reason we would not be able to do it.
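A minimal sketch of the first idea, assuming a hypothetical OpenAPI vendor extension (here called `x-kubernetes-patch-unsafe`, a made-up name) that marks fields whose merge keys make PATCH unreliable. The `fieldSchema` type below is a simplified stand-in, not the real published OpenAPI structures:

```go
// Sketch only: "x-kubernetes-patch-unsafe" is a made-up extension name, and
// fieldSchema is a simplified stand-in for whatever schema representation the
// client would get from the apiserver's published OpenAPI document.
package main

import "fmt"

// fieldSchema carries per-field vendor extensions keyed by JSON path.
type fieldSchema struct {
	Path       string
	Extensions map[string]interface{}
}

// patchIsSafe reports whether every field a patch touches is safe to PATCH,
// i.e. none of them is flagged by the hypothetical extension.
func patchIsSafe(touched []string, schemas map[string]fieldSchema) bool {
	for _, path := range touched {
		if s, ok := schemas[path]; ok {
			if unsafe, _ := s.Extensions["x-kubernetes-patch-unsafe"].(bool); unsafe {
				return false
			}
		}
	}
	return true
}

func main() {
	schemas := map[string]fieldSchema{
		"spec.ports": {
			Path:       "spec.ports",
			Extensions: map[string]interface{}{"x-kubernetes-patch-unsafe": true},
		},
	}
	if !patchIsSafe([]string{"spec.ports"}, schemas) {
		fmt.Println("patch touches a field that does not merge cleanly; fall back to PUT (or error out)")
	}
}
```

The same lookup could just as well drive the error-message variant ("consider …") instead of a silent fallback; the mechanism is identical, only the reaction differs.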
How will this interact with our (potential? confirmed?) plan to deprecate and remove client-side apply?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
Do we think that this is something that would cause issues for downstream users? I agree that this could be useful for people who interact directly with kubectl, but I'm thinking of automated systems that may expect a failure due to the current client-side behavior. I can imagine some solutions to get around this (kuberc), but I just want to gauge whether people think this would be a real problem or whether we could shove it directly into kubectl.
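One way that concern could be addressed is to make the behavior opt-in, sketched below with a hypothetical `--fallback-to-put` flag (the flag name and default are assumptions, not anything kubectl ships today). Automated callers would keep the current failure mode unless they ask for the fallback:

```go
// Sketch only: --fallback-to-put is a hypothetical flag; kubectl does not ship
// anything like it today. The point is that the new behavior could be opt-in,
// so automated callers keep the current failure mode unless they request it.
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	fallback := pflag.Bool("fallback-to-put", false,
		"retry a failed apply/edit PATCH as a full PUT (hypothetical, off by default)")
	pflag.Parse()

	if *fallback {
		fmt.Println("PATCH failures will be retried as a PUT")
	} else {
		fmt.Println("PATCH failures surface to the caller unchanged")
	}
}
```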
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
I think this is like introducing a …
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
There's an ancient issue with Service ports having an insufficient merge key for client-side apply. This causes a lot of pain for people who try to kubectl apply or kubectl edit a Service. If the apiserver gave you a clue like "this field doesn't like to be PATCHed" - could kubectl fall back on PUT?
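A rough sketch of what the fallback itself could look like with client-go's dynamic client. The trigger condition is an assumption: there is no apiserver hint today, so this keys off the PATCH error itself, and `applyWithFallback` is a made-up name for illustration, not kubectl's implementation:

```go
// Sketch only, not kubectl's implementation: one way a client could retry a
// rejected strategic-merge PATCH as a full PUT using client-go's dynamic client.
// The trigger condition is an assumption - there is no apiserver hint today, so
// this keys off the patch error - and applyWithFallback is a made-up name.
package applyfallback

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// applyWithFallback tries a strategic merge patch first and, if the server
// rejects it as invalid, re-reads the live object and replaces it wholesale.
func applyWithFallback(ctx context.Context, ri dynamic.ResourceInterface, name string,
	patch []byte, desired *unstructured.Unstructured) (*unstructured.Unstructured, error) {

	obj, err := ri.Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err == nil || !apierrors.IsInvalid(err) {
		return obj, err // success, or an error that a PUT would not fix
	}

	// Fall back to PUT: fetch the live object so the update carries its
	// resourceVersion, then send the full desired state as a replace.
	live, getErr := ri.Get(ctx, name, metav1.GetOptions{})
	if getErr != nil {
		return nil, getErr
	}
	desired.SetResourceVersion(live.GetResourceVersion())
	return ri.Update(ctx, desired, metav1.UpdateOptions{})
}
```

Whether an Invalid error is the right trigger is exactly the open question in this issue; a dedicated apiserver hint (or the OpenAPI extension discussed above) would be a less fragile signal than guessing from the error.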