Workload API v1 requirements umbrella issue #42752
Comments
/cc |
Thanks for collecting this, Brian. I would also suggest having an umbrella issue for DaemonSet as a feature under https://github.com/kubernetes/features/issues. |
While I had thought of DaemonSet as an admin API in the past, and had suggested a different API group (#8190), my current thinking is that it would be simpler for users for it to be in the apps group, as part of the continuum of workload APIs. |
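For concreteness, placing DaemonSet in the apps group means manifests address it alongside the other workload APIs. A sketch with made-up names (apps/v1 reflects where the API eventually landed; the image and update strategy are illustrative):

```yaml
apiVersion: apps/v1   # DaemonSet lives in the apps API group with the other workload APIs
kind: DaemonSet
metadata:
  name: node-agent    # illustrative name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox:1.36   # illustrative image
        command: ["sleep", "infinity"]
  updateStrategy:
    type: RollingUpdate   # rolling updates per the DaemonSet updates feature
```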
I put all the workload APIs in the "application layer" (join kubernetes-dev to access): |
Feature issues: |
I think of DaemonSets as apps that tend to be used in one particular context, so I'm +1 to it being apps.
On Mar 9, 2017, at 8:19 PM, Brian Grant wrote:
Feature issues:
DaemonSet updates: kubernetes/enhancements#124
StatefulSet updates: kubernetes/enhancements#188
|
In my mind an administrative application is still an application, so +1 for apps.
|
👍 for apps; besides, a cluster administrator can always restrict access to them through RBAC if needed. |
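As a sketch of the RBAC point above (all names here are illustrative, not from the thread), a namespaced Role could limit DaemonSet access to read-only for whoever is bound to it:

```yaml
# Hypothetical example: grant read-only access to daemonsets in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: kube-system     # illustrative namespace
  name: daemonset-reader     # illustrative name
rules:
- apiGroups: ["apps"]
  resources: ["daemonsets"]
  verbs: ["get", "list", "watch"]   # no create/update/delete
```

A matching RoleBinding would then grant only these verbs to the chosen users or service accounts, so write access to DaemonSets stays with administrators.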
I fleshed out the issue description. PTAL. In particular, it's still missing StatefulSet issues. |
We should also document more of the common vision and conventions behind the workload APIs, to inform the subsequent roadmap. I put some of this in the system overview, but we should document additional details, such as:
|
MinReadySeconds proposal is kubernetes/community#478 |
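For context, minReadySeconds is set on the workload spec and delays counting a new Pod as available until it has been Ready for that long. A sketch (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example            # illustrative name
spec:
  replicas: 3
  minReadySeconds: 10      # a new Pod must stay Ready 10s before it counts as available
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: nginx:1.25  # illustrative image
```

During a rollout this throttles the rate at which old Pods are replaced, since progress waits on availability rather than bare readiness.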
/cc |
cc @kubernetes/sig-apps-proposals |
Are workload APIs still on target for GA in 1.9? Are there any blockers? |
@rphillips 1.9 is still the plan. |
This was fixed in v1.9! Thank you all, great teamwork, mainly driven by SIG Apps 👍! |
I'd like all of the in-flight workload APIs (ReplicaSet, Deployment, StatefulSet, DaemonSet) to advance to v1 together as a group, sometime this year.
Related issues:
I'd like to see at least:
- Server-side cascading deletion (GC) by default (e.g., Deployment in an inconsistent state after kubectl delete and ctrl + C #23252, Deleting the deployment from UI doesn't delete the replica sets and pods #40014)
- Controller ref (Implement controllerRef #24946) to provide mutual exclusion even across controller types, but also to ensure that orphaning and adoption work as expected
- Upgrades/rollouts for DaemonSet and StatefulSet (Pet set upgrades #28706)
- kubectl apply deployment -f doesn't accept label/selector changes #26202, kubectl replace deployment doesn't remove ReplicaSet from the old deployment #24888, Allow Users to change labels using deployment #14894, ...
- Decide whether to move forward with templateGeneration (Refine the Deployment proposal and move away from hashing community#384) or with the new hash (Refine the Deployment proposal and switch hashing algorithm community#477, Error syncing deployment, replica set already exists #29735)
- Decide whether we're going to try to unify storage for ReplicationController and ReplicaSet
- DaemonSet and StatefulSet history and rollback
- Scale subresource for all controllers for which it makes sense
- Deployment overlap annotation cleanup (Clean up Deployment overlap annotation code #43322)
- Decide whether/which annotations should be propagated by updates (Allow kubectl annotate to annotate nested template objects #37666)

Bugs:
- Scaled-down Deployments don't identify old ReplicaSets (Scaled down deployments cannot identify old replica sets #42570)

Maybe:
- Disallow activeDeadlineSeconds (Disallow setting activeDeadlineSeconds for restartAlways pods/controllers #38684)

Desirable, but can probably be done later:
- Implement observedGeneration (Create per-object sequence number and report last value seen in status of each object #7328)

Definitely should not block v1/GA:
- kubectl rollout for overlapping deployments #43321, Constraint solver to determine overlapping label selectors #19830, Validate no replicationController overlap. #2210

TBD:
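For context on the controller ref item in the list above: mutual exclusion is expressed through the ownerReferences field, where exactly one owner may set controller: true. A sketch of the metadata a Deployment-owned ReplicaSet carries (all names and the uid are made up for illustration):

```yaml
# Illustrative metadata of a ReplicaSet created/adopted by a Deployment.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-5d4f8c7b6d   # illustrative name
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: web
    uid: 00000000-0000-0000-0000-000000000000  # illustrative uid
    controller: true          # marks this owner as the single managing controller
    blockOwnerDeletion: true  # foreground GC waits for this object before finishing owner deletion
```

Because only one ownerReference may have controller: true, two controllers of different types cannot both claim the same object, and the garbage collector can cascade deletion from the owner to its dependents.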
We should also triage related issues that have been filed; I did do a quick triage of the deployment and daemonset issues.
cc @erictune @janetkuo @kow3ns @Kargakis @foxish @enisoc @smarterclayton