
Pods Not Scaling Up with HPA Despite CPU Utilization Exceeding Target #127526

Open
orenstartio opened this issue Sep 21, 2024 · 5 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling.

Comments

@orenstartio

What happened?

I'm using the HPA to scale my pods based on CPU usage. I've set the target CPU utilization to 50%, with a stabilization window of 0. However, my pods are not scaling up, even though current CPU usage consistently exceeds 50%. The issue has persisted for over 30 minutes with no scaling activity.
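For context, a minimal HPA spec matching that description would look something like this (resource names here are placeholders, not my actual config):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # placeholder workload
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # target CPU utilization
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0   # no stabilization window on scale-up
```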

What did you expect to happen?

I expected the HPA to trigger a scale-up immediately.

How can we reproduce it (as minimally and precisely as possible)?

Create an OKE cluster and follow this guide

Anything else we need to know?

No response

Kubernetes version

Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"b6943c3c67cd1e3b8a1269566e755e899ed25ce2", GitTreeState:"clean", BuildDate:"2023-06-23T15:16:54Z", GoVersion:"go1.20.4 X:boringcrypto", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.12", GitCommit:"19d5e4ee03daf5e8fb55a88b1a52d94332435e7e", GitTreeState:"clean", BuildDate:"2023-07-26T10:00:24Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}

Cloud provider

Oracle OKE

OS version

# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@orenstartio orenstartio added the kind/bug Categorizes issue or PR as related to a bug. label Sep 21, 2024
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Sep 21, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@orenstartio
Author

/sig autoscaling

@k8s-ci-robot k8s-ci-robot added sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 22, 2024
@orenstartio
Author

[screenshot attached]

@RRethy

RRethy commented Sep 27, 2024

You are probably within the tolerance value, so it won't scale; see --horizontal-pod-autoscaler-tolerance in https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/.
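For anyone else hitting this: the scaling decision can be sketched roughly as below (a simplified model, not the actual controller code; the real controller averages per-pod metrics and applies behavior policies, and 0.1 is the default tolerance):

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_utilization: float,
                         target_utilization: float,
                         tolerance: float = 0.1) -> int:
    """Simplified sketch of the HPA replica calculation."""
    ratio = current_utilization / target_utilization
    # If the usage/target ratio is within the tolerance band around 1.0,
    # the controller skips scaling entirely.
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas
    return math.ceil(current_replicas * ratio)

# e.g. 54% usage against a 50% target is within the default 10%
# tolerance, so no scale-up happens even though usage > target:
print(hpa_desired_replicas(2, 54, 50))  # -> 2
print(hpa_desired_replicas(2, 60, 50))  # -> 3
```

So CPU "consistently exceeding" the target by a small margin can still sit inside the tolerance band and produce no scaling at all.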

@saarthdeshpande

Can you upload a screenshot of the output of kubectl get hpa -w?


4 participants