
Consider publicly exposing some or all RollingUpdater annotations #7851

Closed
ironcladlou opened this issue May 6, 2015 · 4 comments
Labels
area/app-lifecycle area/kubectl priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@ironcladlou (Contributor)

The RollingUpdater makes use of some annotations to drive behavior:

kubectl.kubernetes.io/desired-replicas
kubectl.kubernetes.io/update-source-id
kubectl.kubernetes.io/next-controller-id
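
For illustration, reading such an annotation back is straightforward. Here is a minimal Go sketch; the `desiredReplicas` helper and its fallback behavior are assumptions for this example, not the actual kubectl code:

```go
package main

import (
	"fmt"
	"strconv"
)

// One of the annotation keys mentioned above (kubectl-private at the time).
const desiredReplicasAnnotation = "kubectl.kubernetes.io/desired-replicas"

// desiredReplicas reads the rolling updater's target size from an RC's
// annotation map, falling back to the given default when the annotation is
// absent or unparseable. Hypothetical helper, for illustration only.
func desiredReplicas(annotations map[string]string, fallback int) int {
	if v, ok := annotations[desiredReplicasAnnotation]; ok {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return fallback
}

func main() {
	rc := map[string]string{desiredReplicasAnnotation: "10"}
	fmt.Println(desiredReplicas(rc, 1)) // prints 10: the annotation wins over the fallback
}
```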

In OpenShift deployments, we found it useful to reuse one of these annotations to preserve the scale factor between deployments. We chose the private upstream annotation because it not only provides the right semantics, but as a bonus also lets our deployments (ReplicationControllers) be more compatible with RollingUpdater.

Does it make any sense to expose desired-replicas (and/or others) publicly for these reasons? If the intent is to try to converge the deployment systems (#1743), it might not make sense in the absence of other external demand.

/cc @bgrant0607 @brendanburns @smarterclayton @abhgupta

@mbforbes mbforbes assigned bgrant0607 and unassigned bgrant0607 May 6, 2015
@mbforbes mbforbes added priority/backlog Higher priority than priority/awaiting-more-evidence. team/cluster labels May 6, 2015
@bgrant0607 bgrant0607 added area/kubectl sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. and removed team/cluster priority/backlog Higher priority than priority/awaiting-more-evidence. labels May 6, 2015
@bgrant0607 (Member)

What do you mean by "made public"?

Why isn't the authoritative source of the desired number of replicas always in OpenShift's deployment config?

@smarterclayton (Contributor)

Because of autoscaling: we want to let the autoscaler manage a replication controller group, which means the deployment config can't be authoritative in the presence of an autoscaler. The deployment config therefore has to let the state of the cluster be authoritative, unless there are no other RCs present, in which case it falls back to its initial value.

  1. Initial creation of a deployment: the value in deployment.replicas is chosen.
  2. Deployment exists, one RC of size 0 and one RC of size 10, no autoscaler: the next deployment should have size 10.
  3. Deployment exists, one RC of size 2 and one RC of size 8, no autoscaler: the next deployment (if you allow one to start) should have size 10.
  4. Deployment exists, autoscaler present, one RC of size 3; the autoscaler scales the RC up to 4 while the deployment is happening: the final RC state should be size 4.
  5. Deployment exists, all DCs deleted: the value in deployment.replicas is chosen.
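
The fallback rule above could be sketched roughly as follows. The `resolveReplicas` helper is hypothetical, not OpenShift's actual implementation; it treats the sum of existing RC sizes as authoritative and falls back to the config value only when no RCs remain:

```go
package main

import "fmt"

// resolveReplicas sketches the rule described above: when other RCs exist,
// the live cluster state (the sum of their sizes) is authoritative, even
// mid-rollout; when none exist, fall back to the value stored on the
// deployment config. Hypothetical function, for illustration only.
func resolveReplicas(existingRCSizes []int, configReplicas int) int {
	if len(existingRCSizes) == 0 {
		return configReplicas // cases 1 and 5: initial creation, or all controllers gone
	}
	sum := 0
	for _, n := range existingRCSizes {
		sum += n // cases 2-4: cluster state wins
	}
	return sum
}

func main() {
	fmt.Println(resolveReplicas(nil, 3))          // prints 3 (case 1/5)
	fmt.Println(resolveReplicas([]int{0, 10}, 3)) // prints 10 (case 2)
	fmt.Println(resolveReplicas([]int{2, 8}, 3))  // prints 10 (case 3)
}
```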


@bgrant0607 bgrant0607 added this to the v1.0-post milestone May 13, 2015
@bgrant0607 bgrant0607 removed this from the v1.0-post milestone Jul 24, 2015
@bgrant0607 bgrant0607 added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Jul 27, 2015
@bgrant0607 (Member)

Revisiting all rolling-update issues/PRs.

I would like to make the rolling-update code reusable outside kubectl, such as in Deployment (#1743), and am in favor of making kubectl logic reusable, in general (#7311).

That said, I'm not sure the current annotations are the right ones. I previously commented on this here: #2863 (comment)

One simplification we could make would be to rectify the difference between the current number of replicas and new desired number of replicas at the beginning instead of at the end. That might enable us to drop the desired-replicas annotation.
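
One way to picture that simplification, as a rough sketch: fix the target once up front and simply step the old RC down and the new RC up toward it, so no persisted record of the target is needed at the end. The `rollingSteps` helper below is hypothetical, not the actual kubectl rolling-update code:

```go
package main

import "fmt"

// rollingSteps sketches resolving the replica difference at the beginning:
// the caller decides the target once, and the updater alternately grows the
// new RC and shrinks the old one until the new RC reaches the target and the
// old RC is empty. Hypothetical illustration of the idea only.
func rollingSteps(oldSize, target int) []string {
	var steps []string
	newSize := 0
	for oldSize > 0 || newSize < target {
		if newSize < target {
			newSize++
			steps = append(steps, fmt.Sprintf("scale new to %d", newSize))
		}
		if oldSize > 0 {
			oldSize--
			steps = append(steps, fmt.Sprintf("scale old to %d", oldSize))
		}
	}
	return steps
}

func main() {
	for _, s := range rollingSteps(2, 3) {
		fmt.Println(s)
	}
}
```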

When a Deployment is present, there's the question of whether the auto-scaler should scale the RCs directly (e.g., by scaling the largest one), or whether it should scale the Deployment using a polymorphic virtual scale resource (#1629), leaving it up to Deployment to propagate the change to the underlying RCs. If we took the former approach, we should consider either not keeping a replica count in Deployment, or performing a bidirectional sync with the replica counts of the RCs. While the former would be really cool (like syncing contacts between a smartphone and browser), I'm leaning towards the latter, which I believe we need, anyway, for Jobs (#1624) and nominal services (#260).

cc @nikhiljindal

@ironcladlou (Contributor, Author)

This can be closed.
