Adding a DeploymentDescriber to kubectl #14199
Conversation
Labelling this PR as size/M
GCE e2e build/test passed for commit 382c2b51d15e961bf5aba2c2ba2624f902f78405.
fmt.Fprintf(out, "StrategyType:\t%s\n", d.Spec.Strategy.Type) | ||
if d.Spec.Strategy.RollingUpdate != nil { | ||
ru := d.Spec.Strategy.RollingUpdate | ||
fmt.Fprintf(out, "RollingUpdateStrategy:\t%s maxUnavailable, %s maxSurge, %d minReadySeconds", ru.MaxUnavailable.String(), ru.MaxSurge.String(), ru.MinReadySeconds) |
I'd like describe to be a bit more human-friendly, so I would suggest "max unavailable", "max surge", etc.
Done
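For context, a minimal runnable sketch of how the revised line could read with the more human-friendly labels, using a tabwriter so the tab-separated values line up in the console output. The standalone program and placeholder values are assumptions for illustration, not the PR's exact code:

package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// Placeholder values standing in for ru.MaxUnavailable, ru.MaxSurge,
	// and ru.MinReadySeconds from the deployment's rolling update strategy.
	maxUnavailable, maxSurge := "1", "1"
	minReadySeconds := 0

	// kubectl's describers write through a tabwriter, which is what makes
	// the tab-separated columns align when printed to the console.
	w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
	fmt.Fprintf(w, "StrategyType:\t%s\n", "RollingUpdate")
	fmt.Fprintf(w, "RollingUpdateStrategy:\t%s max unavailable, %s max surge, %d min ready seconds\n",
		maxUnavailable, maxSurge, minReadySeconds)
	w.Flush()
}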
In the console output, the values all line up, right?
LGTM, but maybe someone else from @kubernetes/kubectl wants to take a look.
}

func (dd *DeploymentDescriber) Describe(namespace, name string) (string, error) {
	d, err := dd.Experimental().Deployments(namespace).Get(name)
We should get the replication controllers managed by this deployment and show them in some order. If they can't be loaded we shouldn't hard fail (show the error and continue).
If we really want them, then instead of recomputing them here, I would prefer that the deployment controller add them to the DeploymentStatus and we just display it here.
I think we had discussed doing that in the proposal, but then decided against it.
I would agree with @smarterclayton here. Keeping copies of other objects' state in the Deployment object's status will either be frequently out of date or will lead to lots of extraneous updates to apiserver. At the very most, perhaps the matching RC names should be in the status (and, during a deployment, both the old RC's and the new RC's).
Right. That's the reason why we decided not to store anything.
The comparison I'm drawing is to the pod describer, where we fetch the replication controllers that overlap the pod. I think users would expect the same here - I'm ok with showing status, but it is common to want to know whether you have multiple RCs in flight during a deployment. Getting the list of RCs and even showing what the deployment controller thinks the status is are both good. The note about not failing was just that if the RC is borked for some reason, we don't want to prevent describe from working.
Done
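A rough sketch of the error-tolerant lookup being asked for here. The helper and its listing interface are hypothetical stand-ins for the real client call, not the PR's actual code:

package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// rcLister is a hypothetical stand-in for the client call that lists the
// replication controllers matching the deployment's selector.
type rcLister func() ([]string, error)

// describeRelatedRCs prints the RC names managed by a deployment. If the
// lookup fails, it shows the error and continues rather than failing the
// whole describe, as requested in the review.
func describeRelatedRCs(out io.Writer, list rcLister) {
	names, err := list()
	if err != nil {
		fmt.Fprintf(out, "ReplicationControllers:\t<error: %v>\n", err)
		return
	}
	fmt.Fprintf(out, "ReplicationControllers:\t%s\n", strings.Join(names, ", "))
}

func main() {
	// Hypothetical data: an old and a new RC in flight during a deployment.
	ok := func() ([]string, error) { return []string{"nginx-old", "nginx-new"}, nil }
	describeRelatedRCs(os.Stdout, ok)

	// Lookup failure: describe still produces output instead of hard-failing.
	broken := func() ([]string, error) { return nil, fmt.Errorf("RC lookup timed out") }
	describeRelatedRCs(os.Stdout, broken)
}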
Force-pushed from 382c2b5 to f4dcd38.
Yes, values line up in the console output.
GCE e2e build/test passed for commit f4dcd38dabdd2ba1bf9d9e6a7a40bac928655098.
Force-pushed from f4dcd38 to 62a8d42.
Unit, integration and GCE e2e build/test failed for commit 62a8d4217d9b2549b8bdbe60404762ad83b354f7.
Force-pushed from b0ea1c5 to da05778.
Unit, integration and GCE e2e build/test failed for commit da0577870b65f23d36787cdca85cf10e22c363fd.
Labelling this PR as size/L
Force-pushed from 39c256c to 38e5dca.
Have updated the code to include the old and new RCs in the output. PTAL.
Unit, integration and GCE e2e build/test failed for commit 9353594c16c887f4b5c4a845456ea785ddb16224.
Unit, integration and GCE e2e build/test failed for commit 39c256ccdcb9908adaa30e7aefedd3154f9e3c6e.
Closing and reopening to retrigger Jenkins. It has been pending for 20 hours.
Adding LGTM label as per @ironcladlou's LGTM above.
@k8s-bot test this please
Unit, integration and GCE e2e build/test failed for commit 153c57f.
@k8s-bot ok to test
Unit, integration and GCE e2e test build/test passed for commit 153c57f.
@k8s-bot ok to test
Unit, integration and GCE e2e build/test failed for commit 153c57f.
Logs show all tests passed.
@k8s-bot ok to test
Unit, integration and GCE e2e build/test failed for commit 153c57f.
Unit, integration and GCE e2e test build/test passed for commit 153c57f.
Adding a DeploymentDescriber to kubectl
Automated cherry pick of #14199 upstream release 1.1
Ref #1743
Adding a DeploymentDescriber to describe deployment objects using kubectl describe deployments.
Sample output:
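(Illustration only: the lines below use placeholder values and follow the format strings shown in the diff above, with the human-friendly labels suggested in review.)

StrategyType:           RollingUpdate
RollingUpdateStrategy:  1 max unavailable, 1 max surge, 0 min ready seconds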