Kubectl reported a Deployment scaled whereas replicas are unavailable #55369
Comments
/sig node
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@arun-gupta Does this still need work? Please let me know; I'd like to try to resolve it.
It sounds like this issue could benefit from a clearer distinction in kubectl scale's output between a scale request being accepted and the new replicas actually becoming available.
Given that scaling is asynchronous, as discussed in #1899, the command may show "scaled" as soon as the scale request is processed, even if the actual pods aren't up yet. Introducing an optional flag to kubectl scale that waits for the replicas to become available could close that gap.
It also highlights the challenge of tracking the "ready" state in Kubernetes API calls, as also mentioned in #34363. Since scaling and availability are handled by separate controllers, it might be useful if kubectl reported the availability status, or the reason replicas are unavailable, after a scale operation.
It's pretty interesting how this issue touches on the core of Kubernetes' asynchronous nature. This may leave users high and dry, especially when they expect instant results. For now, using a combination of kubectl scale and kubectl rollout status can confirm whether the replicas actually came up; see the sketch below.
Building on this, it does feel like reporting "scaled" for a request that hasn't taken effect yet is misleading. Drawing inspiration from the discussions in #1899, a wait-for-availability option for scale operations seems worth exploring.
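A rough sketch of that workaround (the deployment name myapp is a placeholder):

```console
# Returns as soon as the API server accepts the new replica count.
kubectl scale deployment myapp --replicas=10

# Blocks until the Deployment's replicas are actually rolled out and available.
kubectl rollout status deployment myapp
```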
/kind bug
/sig api-machinery
What happened:
kubectl reported a Deployment scaled even when the replicas were unavailable.
What you expected to happen:
All replicas in the Deployment were unavailable, yet the command output still showed that the Deployment scaled.
How to reproduce it (as minimally and precisely as possible):
Create a Deployment using the configuration file:
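The original manifest isn't shown here; a minimal Deployment along these lines would reproduce the behavior (name, labels, and image are placeholders, and the current apps/v1 API is used rather than the pre-1.9 API groups):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment    # placeholder name
spec:
  replicas: 1               # start with a single replica
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13   # placeholder image
```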
Create a ResourceQuota using the configuration file:
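Likewise, a ResourceQuota that caps the pod count low enough to block most of the scaled replicas would do; the exact limit here is an assumption:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota    # placeholder name
spec:
  hard:
    pods: "1"        # with 1 pod allowed, scaling to 10 leaves 9 replicas unavailable
```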
Describe the quota:
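For example, using the placeholder quota name from the sketch above:

```console
kubectl describe quota pod-quota
```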
Scale the deployment:
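For example, using the placeholder names above (the output line reflects the kubectl 1.x-era format):

```console
$ kubectl scale deployment nginx-deployment --replicas=10
deployment "nginx-deployment" scaled
```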
The output indicates that the deployment was scaled.
Getting more details about Deployment shows that 9 replicas are unavailable:
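Illustrative output from kubectl describe, assuming the placeholder setup above (only the relevant line shown):

```console
$ kubectl describe deployment nginx-deployment
...
Replicas:  10 desired | 1 updated | 1 total | 1 available | 9 unavailable
...
```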
Is this the expected behavior?
Also, no reason is given as to why the replicas did not scale.
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version):

```console
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T21:07:53Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
```
Cloud provider or hardware configuration: AWS
Kernel (e.g. uname -a):