
Kubectl reported a Deployment as scaled even though replicas are unavailable #55369

Open
arun-gupta opened this issue Nov 9, 2017 · 15 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@arun-gupta
Contributor

/kind bug
/sig api-machinery

What happened:

kubectl reported the Deployment as scaled even though the replicas were unavailable.

What you expected to happen:

All of the new replicas in the Deployment were unavailable, yet the command output still showed that the Deployment had scaled; I expected the output to indicate that the requested replicas were not actually available.

How to reproduce it (as minimally and precisely as possible):

Create a Deployment using the configuration file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment 
spec:
  replicas: 3 
  template:
    metadata:
      labels:
        app: nginx 
    spec:
      containers:
      - name: nginx 
        image: nginx:1.12.1
        ports: 
        - containerPort: 80
        - containerPort: 443
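
A minimal way to create it (the file name nginx-deployment.yaml is an assumption; note that this manifest targets the old extensions/v1beta1 API, so on clusters v1.16 or newer it would need apiVersion: apps/v1 and an explicit spec.selector):

$ kubectl apply -f nginx-deployment.yaml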

Create a ResourceQuota using the configuration file:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "10"
    memory: 6Gi
    pods: "10"
    replicationcontrollers: "3"
    services: "5"
    configmaps: "5"
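
The quota can be created the same way (the file name quota.yaml is an assumption):

$ kubectl apply -f quota.yaml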

Describe the quota:

$ kubectl describe quota/quota
Name:                   quota
Namespace:              default
Resource                Used  Hard
--------                ----  ----
configmaps              0     5
cpu                     300m  10
memory                  0     6Gi
pods                    3     10
replicationcontrollers  0     3
services                1     5

Scale the deployment:

$ kubectl scale --replicas=12 deployment/nginx-deployment
deployment "nginx-deployment" scaled

The output indicates that the deployment was scaled.

Describing the Deployment shows that 9 replicas are unavailable:

$ kubectl describe deployment/nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Wed, 08 Nov 2017 15:25:03 -0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision=1
                        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replicas"...
Selector:               app=nginx
Replicas:               12 desired | 3 updated | 3 total | 3 available | 9 unavailable

Is this the expected behavior?

Also, no reason is given as to why the replicas did not scale.
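
For reference, the rejection is typically recorded as FailedCreate events on the Deployment's ReplicaSet rather than surfaced by kubectl scale itself. Commands along these lines should show the reason (the label selector is taken from the manifest above):

$ kubectl describe rs -l app=nginx
$ kubectl get events --sort-by=.metadata.creationTimestamp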

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T21:07:53Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration:

AWS

  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Nov 9, 2017
@mbohlool mbohlool added sig/cli Categorizes an issue or PR as relevant to SIG CLI. sig/apps Categorizes an issue or PR as relevant to SIG Apps. and removed sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Nov 9, 2017
@arun-gupta
Contributor Author

/sig node

@k8s-ci-robot k8s-ci-robot added the sig/node Categorizes an issue or PR as relevant to SIG Node. label Nov 9, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 7, 2018
@bgrant0607 bgrant0607 added kind/feature Categorizes issue or PR as related to a new feature. and removed sig/node Categorizes an issue or PR as relevant to SIG Node. kind/bug Categorizes issue or PR as related to a bug. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 8, 2018
@bgrant0607
Member

While perhaps confusing, this is working as intended. Everything in Kubernetes is asynchronous. I assume you're asking for a way to wait, synchronously, for the pods to become ready. Or would just changing the terminology (is scaling?) be sufficient?

See also:
#1899
#34363
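
For reference, one way to wait synchronously with existing tooling (a rough sketch): kubectl rollout status blocks until the rollout completes or, where a progress deadline is set, until it is exceeded.

$ kubectl rollout status deployment/nginx-deployment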

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 9, 2018
@bgrant0607 bgrant0607 added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 2, 2018
@mehabhalodiya
Contributor

@arun-gupta Does this still need to be worked on? Please let me know; I'd like to try to resolve it.
Thank you.

@github-project-automation github-project-automation bot moved this to Needs Triage in SIG Apps Sep 29, 2023
@helayoty helayoty added this to SIG CLI Oct 2, 2023
@github-project-automation github-project-automation bot moved this to Needs Triage in SIG CLI Oct 2, 2023
@pranav-pandey0804

It sounds like this issue could benefit from a clearer distinction in kubectl between scaling the Deployment object and verifying the availability of the replicas.

@pranav-pandey0804

Given that scaling is asynchronous, as discussed in #1899, the command may show "scaled" as soon as the scale request is processed, even if the actual pods aren't up yet.

@pranav-pandey0804

Introducing an optional flag to kubectl scale for synchronous behavior, similar to kubectl wait, could help users verify that scaling is actually complete.
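
For example, newer kubectl releases already provide kubectl wait, which can be pointed at the Deployment's Available condition (a sketch; the 120s timeout is an arbitrary choice):

$ kubectl wait --for=condition=Available --timeout=120s deployment/nginx-deployment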

@pranav-pandey0804

It also highlights the challenge of tracking the "ready" state in Kubernetes API calls, as mentioned in #34363.

@pranav-pandey0804

Since scaling and availability are handled by separate controllers, it might be useful if kubectl could show a "scaling in progress" status when not all replicas are ready.
Adding a --wait-for-availability flag or similar functionality might provide a better user experience, aligning with the goals discussed in #1899 for more automation-friendly output in kubectl.

@pranav-pandey0804

It's pretty interesting how this issue touches on the core of Kubernetes' asynchronous nature.
When kubectl says the deployment has scaled, it simply means the request was accepted—not necessarily that all pods are up and running.

@pranav-pandey0804

pranav-pandey0804 commented Nov 3, 2024

This may leave users high and dry, especially when they expect instant results.
A practical improvement could be for kubectl scale to offer more detailed status updates, such as highlighting which replicas are still pending.
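
As a rough sketch of what can be checked by hand today (label selector taken from the original manifest), comparing the desired count against what is actually ready:

$ kubectl get deployment nginx-deployment
$ kubectl get pods -l app=nginx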

@pranav-pandey0804

For now, using a combination of kubectl scale and kubectl wait as a workaround could help, but I agree that a built-in feature would make life a lot easier, echoing some ideas raised in #34363.
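
A minimal sketch of that combination (assuming a 120s timeout is acceptable):

$ kubectl scale --replicas=12 deployment/nginx-deployment && \
  kubectl wait --for=condition=Available --timeout=120s deployment/nginx-deployment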

@pranav-pandey0804

Building on this, it does feel like kubectl scale could really benefit from clearer communication about pod readiness. The current setup can be a bit misleading for folks who assume scaling immediately equals availability.

@pranav-pandey0804

Drawing inspiration from the discussions in #1899, maybe adding an option to kubectl scale that provides real-time updates on pod status would make the process more intuitive. It's all about making sure users have visibility into what's happening under the hood, without having to piece it together themselves.
