Feature request: ability to specify a min/max replica count on scaleable resources #33843
Comments
I have users who want to run Spark or a Gluster management service, and they have requested this extra level of control: local min/max replica boundaries on their resources, to prevent operators from scaling replicas out of range.
cc @erictune
I am not a fan of this feature. It would add API complexity and seems cumbersome. I think there are much more important issues with the controllers that we need to tackle if we're going to spend API review bandwidth.
@bgrant0607 - my response echoed your sentiment, but I got a fair amount of push-back from three distinct user communities looking to support Spark, Gluster, and a mobile application scenario. At its core, the request is asking for a way to prevent users from harming themselves. Users can harm themselves any number of ways, so I understand the complexity/cumbersome argument may mean this is not worth it for all workloads. Any objection to the concept via annotations on the objects in question, with enforcement via some optional admission controller? If it gets broad adoption, we could revisit the API when more bandwidth is available.
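For concreteness, here is a minimal sketch of what that might look like. The annotation keys are hypothetical (no such convention exists in Kubernetes), and enforcement would be left to the optional admission controller described above:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: spark-workers                # hypothetical workload name
  annotations:
    # hypothetical keys; an optional admission controller would reject
    # scale requests that move replicas outside this range
    scale.alpha.kubernetes.io/min-replicas: "2"
    scale.alpha.kubernetes.io/max-replicas: "10"
spec:
  replicas: 4
  selector:
    app: spark-worker
  template:
    metadata:
      labels:
        app: spark-worker
    spec:
      containers:
        - name: worker
          image: example.com/spark-worker:latest   # placeholder image
```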
We are running into a number of places where the dev/person who wrote the app has these restrictions, but the operator likely doesn't know about them. It looks to me like a dev/ops split: allow the dev to keep ops from doing things that hurt themselves.
I think @eparis nailed the interaction exactly. The application writers are often the ones applying/suggesting these limitations for the deployer/operator. It should generally be possible for the operator to override (at a config level) separately from the scaling operation itself.
It's usually not enough to do min/max. We'd need to do cut-outs: an etcd cluster, for instance, should only run an odd number of members.
ack: enforcing odd numbers is a reasonable request
I share some of Brian's concerns, as I've mentioned before. This most frequently comes up for things that are "stateful", where only certain replica counts are safe to run.
It's not enough to do odd numbers though: 3, 5, 7 may be valid but others are not. A few systems require 3n+1. Also, does zero count, or is it a potential exception?
I think this is a requirement to be leveraged against petsets, not against general resources, and should be solved there, not across all scalable resources.
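Extending the hypothetical annotation sketch above, the richer shapes raised here (odd-only counts, 3n+1 progressions, an explicit zero cut-out) could be expressed as a whitelist or a step rule. This is the metadata stanza only, and again none of these keys exist in Kubernetes:

```yaml
# hypothetical keys illustrating the constraint shapes discussed here
metadata:
  annotations:
    # explicit whitelist: odd quorum sizes, with zero allowed as a cut-out
    scale.alpha.kubernetes.io/allowed-replicas: "0,3,5,7"
    # alternatively, a progression rule for 3n+1 systems: 4, 7, 10, ...
    scale.alpha.kubernetes.io/replica-base: "4"
    scale.alpha.kubernetes.io/replica-step: "3"
```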
Also, enforcing an annotation would break autoscaling.
A much simpler option might be a "scale protection", like the "delete protection" concept.
This needs to be triaged as a release-blocker or not for 1.5 @smarterclayton @derekwaynecarr
not a blocker.
thanks @eparis
@derekwaynecarr Is it appropriate to move this to the next milestone or clear the 1.5 milestone? (and remove the non-release-blocker tag as well)
Moving to 1.7, as it is too late for this to happen in 1.6. Feel free to switch back if this is incorrect.
Any progress on the scale selector?
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
This kind of policy should be implemented outside the workload controllers.
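One way to do that today is a ValidatingAdmissionPolicy (GA in Kubernetes v1.30), which keeps the policy entirely outside the workload controllers. The bounds, the odd-count rule, and the opt-in namespace label below are example choices for this sketch, not built-in behavior; matching the scale subresource as well means `kubectl scale` requests are validated too:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-bounds
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        # include the scale subresource so `kubectl scale` is checked too;
        # both StatefulSet and the Scale object expose spec.replicas
        resources: ["statefulsets", "statefulsets/scale"]
  validations:
    - expression: "object.spec.replicas >= 3 && object.spec.replicas <= 7"
      message: "replicas must stay between 3 and 7"
    - expression: "object.spec.replicas % 2 == 1"
      message: "replicas must be odd to preserve quorum"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: replica-bounds-binding
spec:
  policyName: replica-bounds
  validationActions: ["Deny"]
  matchResources:
    namespaceSelector:
      matchLabels:
        replica-bounds: enforced   # opt-in label chosen for this example
```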
Users have requested the ability to set local min/max replica constraints for any resource that is a target for manual scaling (i.e. Deployment / ReplicaSet / ReplicationController / Job / PetSet).
The idea is that the spec for each of these resources would gain fields like the following (similar to HPA):
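The snippet itself did not survive in this copy of the issue; a plausible reconstruction of the proposed fields (hypothetical and never implemented, mirroring HPA's MinReplicas/MaxReplicas):

```yaml
# hypothetical fields on a workload spec -- proposed here,
# but not present in any Kubernetes release
spec:
  replicas: 5
  minReplicas: 3   # scaling below this would fail validation
  maxReplicas: 7   # scaling above this would fail validation
```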
If a user submitted a `kubectl scale` command that caused `Replicas` to fall outside the configured boundary, the request would fail validation and not proceed. The example use case is as follows:
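The concrete example was likewise lost in this copy; a hypothetical reconstruction using the Gluster management service mentioned earlier in the thread, again with the proposed (non-existent) fields:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gluster-mgmt                 # hypothetical workload name
spec:
  replicas: 3
  minReplicas: 3                     # proposed field: quorum needs three members
  maxReplicas: 5                     # proposed field: topology caps scale-out
  selector:
    matchLabels:
      app: gluster-mgmt
  template:
    metadata:
      labels:
        app: gluster-mgmt
    spec:
      containers:
        - name: mgmt
          image: example.com/gluster-mgmt:latest   # placeholder image
```

With fields like these, `kubectl scale deployment gluster-mgmt --replicas=1` would be rejected at validation time instead of silently breaking quorum.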
Keeping a min/max local to the resource may make the most sense, and is easily enforceable via validation. I had debated a pattern using `LimitRange`, but that has the drawback that the scope it covers is too large. While I am interested in adding label selectors to `LimitRange` so constraints can be scoped to particular classes of objects, in this case it may make the most sense to put min/max replicas directly on the resource being constrained.
Thoughts?
/cc @smarterclayton @eparis @bgrant0607 @lavalamp