
Using HPA in version autoscaling/v1 might return unexpected results in kubectl #23196

Closed

piosz opened this issue Mar 18, 2016 · 6 comments

Assignees: piosz
Labels: area/kubectl, priority/backlog, sig/autoscaling
Milestone: v1.3

piosz (Member) commented Mar 18, 2016

In Kubernetes 1.2, HPA was graduated to GA, and the new types are available under apiVersion: autoscaling/v1. The previous version, apiVersion: extensions/v1beta1, is still supported for backward compatibility, and all Kubernetes core components in 1.2 still use objects in extensions/v1beta1.

In extensions/v1beta1, if CPUUtilization in horizontalPodAutoscalerSpec was not specified, it was set to 80 by default, while in autoscaling/v1 an HPA object without targetCPUUtilizationPercentage specified is a valid object. The horizontal pod autoscaler controller applies a default scaling policy in this case, which is equivalent to the previous one.

Due to the way defaults are applied (during the conversion between versions), if you create an HPA object using the autoscaling/v1 API without targetCPUUtilizationPercentage specified and then read it using kubectl, it will print the default value as specified in extensions/v1beta1, despite the fact that the field is empty.
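
To make the symptom concrete, here is a minimal sketch of such an object (the name and scale target are illustrative placeholders):

```yaml
# autoscaling/v1 HPA with targetCPUUtilizationPercentage deliberately omitted;
# metadata.name and scaleTargetRef are placeholder values.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: frontend
  minReplicas: 1
  maxReplicas: 10
  # targetCPUUtilizationPercentage intentionally left unset
```

Reading this object back through extensions/v1beta1 (as a 1.2 kubectl does) shows a CPU target of 80, because the old default is applied during conversion, even though the stored autoscaling/v1 field is empty.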

This should be fixed in 1.3 by migrating kubectl to use HPA in autoscaling/v1.

ref #22431, #22449

piosz added the priority/backlog, sig/autoscaling, and team/control-plane labels on Mar 18, 2016
piosz added this to the v1.3 milestone on Mar 18, 2016
piosz self-assigned this on Mar 18, 2016
piosz (Member, Author) commented Mar 18, 2016

cc @fgrzadkowski @mwielgus @kubernetes/autoscaling

liggitt (Member) commented Mar 18, 2016

> This should be fixed in 1.3 by migrating kubectl to use HPA in autoscaling/v1.

The referenced issue (#22431) points out that many clients will have usability issues if they are unable to show the user what percentage the controller is acting on.

from @danmcp:

Part of what makes this difficult is that there could be a very large difference between limit and request (especially for CPU, since overcommit is common). And because HPA is based on request, an 80% default might be a tiny fraction of the limit, which could make the autoscaling extremely aggressive for some time period before the user has a chance to take observed utilization into account. So it seems necessary to let the user understand what the default percentage will be set to, for them to have a shot at understanding what's happening with their app.
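
To put numbers on @danmcp's point (the resource values below are hypothetical):

```yaml
# Hypothetical container resources with a common 10x CPU overcommit.
resources:
  requests:
    cpu: 100m    # the HPA's 80% default targets 80% of request = 80m
  limits:
    cpu: 1000m   # 80m is only 8% of the limit, so scaling triggers
                 # long before the container approaches its limit
```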

from @jwforres:

From my understanding of what changed, this now means that a client will have no idea what parameters are being used to auto-scale (when the default is used). This is a big usability problem... think about it from a common user workflow:

  1. I'm not sure what scaling parameters I want, I'll pick the default to start with
  2. Something's not right and I'm not autoscaling soon enough, I should probably adjust something
  3. I have no clue how to tune this because I have no idea what the current target is... let me make a wild guess

DirectXMan12 (Contributor) commented

On the other hand, defaults become more complicated when multiple metrics are introduced -- now that we have alpha custom metrics, you don't want CPU autoscaling by default if you have custom metrics set. However, at the moment, custom metrics are specified via an annotation. In autoscaling/v1 this is not a problem, since we can leave the CPU percentage field empty and it will stay that way (and if we have custom metrics, we won't get any default CPU behavior from the controller). In extensions/v1beta1, a default gets set, so the controller will always look at the CPU percentage, even if we have custom metrics.

If you want to put the API-level default back in, we'd either have to elevate custom metrics to first-class citizens so that we could set the CPU default only when no other metrics are specified (or attempt to check for the annotation value in the defaulting logic, which is probably not something we want to do), or use a sentinel value to explicitly disable CPU autoscaling.

IMO, the "CPU target percentage is a special field different from custom metrics" design that we currently have is suboptimal, but switching back to the original layout of all metrics being specified as {"name": "foo", "value": "bar"} is probably out of the question at the moment (the custom metrics angle is mainly a problem since we have no way to indicate "I don't want this" vs "I want a default value for this").
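
A sketch of the conflict being described, assuming the 1.2-era alpha custom-metrics annotation (the annotation key and metric payload below are recalled from the alpha docs and should be treated as illustrative, not authoritative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
  annotations:
    # Alpha custom-metrics target; key and payload are illustrative.
    alpha/target.custom-metrics.podautoscaler.kubernetes.io: |
      {"items": [{"name": "qps", "value": "20"}]}
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: frontend
  minReplicas: 1
  maxReplicas: 10
  # CPU target left unset: served via autoscaling/v1 it stays empty, so the
  # controller scales on the custom metric alone. Round-tripped through
  # extensions/v1beta1, the old default of 80 is applied and CPU-based
  # scaling is silently re-enabled alongside the custom metric.
```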

roberthbailey (Contributor) commented

@piosz -- status on this issue? Will it be fixed in 1.3?

piosz (Member, Author) commented Jun 3, 2016

@roberthbailey once #26775 is merged we can close this.

@DirectXMan12 I understand your concerns here. Not setting the default to 80% was required to make it possible to scale based only on custom metrics (otherwise the HPA would always scale based on CPU, defaulted to 80%, in addition to the custom metrics).

I think defining reasonable default policies and behaviors should be part of extending the HPA definition (which basically means moving forward with the custom metrics feature). cc @jszczepkowski

piosz (Member, Author) commented Jun 11, 2016

kubectl now uses HPA in autoscaling/v1 by default, so I'm closing the issue.

piosz closed this as completed on Jun 11, 2016