Deployment Undo doesn't undo number of replicas #25236
Comments
Related? #22512
/cc @janetkuo
@aronchick is this a dupe of #25238?
This is working as intended. Scaling is orthogonal to deployment. What if the user had set up an HPA? Should the HPA's changes be undone? Scale is set/changed in at least 3 different ways above (create, scale, apply). In that scenario, there's no way we could infer user intent about whether they expected scale to be undone or not. Note that the Deployment API doesn't really know the difference between apply and other commands, and apply doesn't itself cause Deployment versioning to happen. It's changes to the pod template spec that do, and scaling doesn't change the pod template spec. @janetkuo is this documented in undo?
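A quick way to see this behavior, assuming a deployment named `nginx` (the name, image tag, and replica count here are illustrative):

```
# Scaling alone does not create a new rollout revision
kubectl scale deployment nginx --replicas=7
kubectl rollout history deployment nginx      # revision list is unchanged

# Changing the pod template (e.g. the image) does create a new revision
kubectl set image deployment/nginx nginx=nginx:1.9.1
kubectl rollout history deployment nginx      # a new revision appears

# Undo reverts the pod template to the previous revision,
# but .spec.replicas stays at its current value (7 here)
kubectl rollout undo deployment/nginx
```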
Yes, it is documented in http://kubernetes.io/docs/user-guide/deployments/#rolling-back-a-deployment.
I'll make it more obvious, considering most people would skim the doc.
Updated docs in kubernetes/website#484
Closing since the doc is updated.
I didn't intend to change both the number of replicas and the image; I overlooked the fact that the number of replicas was different when I rolled out the new image. I have a new template with a replica count in it, and when I applied that template it used that new number.

It still feels, for better or worse, like ANY changes I made should be "undone" when I roll back, given what I'm doing. Documentation won't address this: years of hitting cmd-Z on my laptop tell me that whatever I just did is now rolled back, and that's what this felt like.

An alternative might be: if a rollout changes multiple things, maybe we highlight that at apply time?

$ kubectl apply -f new_ds.yml

That doesn't solve my problem entirely, since when I undo/rollback I would still have expected 7 to go back to 5 and bar to go back to foo.
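Assuming the existing deployment ran 5 replicas of image "foo" and `new_ds.yml` sets 7 replicas of image "bar" (the numbers and names come from the comment above; the deployment name `my-app` is illustrative), the observed behavior is roughly:

```
# Before: the deployment runs 5 replicas of image "foo"
kubectl apply -f new_ds.yml            # now 7 replicas of image "bar"

kubectl rollout undo deployment/my-app
# After undo: the image is back to "foo" (the pod template change is versioned),
# but the deployment still runs 7 replicas (scale is not part of the revision)
```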
We could make it configurable at the very least. Something like |
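Purely as an illustration of that idea (the flag name below is hypothetical, not an existing or proposed kubectl option):

```
# Hypothetical flag, not implemented in kubectl:
# opt in to restoring .spec.replicas as part of the undo
kubectl rollout undo deployment/my-app --restore-replicas=true
```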
Issues go stale after 90d of inactivity.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Working as intended |
From the original report, execute the following:
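A sketch of such a sequence, assuming a deployment that starts with 10 replicas and is then re-applied with 4 replicas and a new image (the file names, image tags, and deployment name are illustrative):

```
# Create a deployment with 10 replicas (manifest sets replicas: 10, image: foo:1.0)
kubectl apply -f ds.yml

# Apply an updated manifest that changes both fields (replicas: 4, image: foo:2.0)
kubectl apply -f new_ds.yml

# Roll the deployment back
kubectl rollout undo deployment/my-app

# The image is foo:1.0 again, but the replica count remains 4
kubectl get deployment my-app
```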
This results in 4 replicas running, instead of the expected 10 (since the deployment was undone).