Stabilize map order in kubectl describe #26046

Conversation

@timoreimann (Contributor) commented May 22, 2016

Refs #25251.

Add SortedResourceNames() methods to map type aliases in order to achieve a stable output order for kubectl describe.

This affects QoS classes, resource limits, and resource requests.

A few remarks:

  1. I couldn't find map usages for described fields other than the ones mentioned above, though I didn't manage to identify them programmatically or systematically. Given pointers, I'd be happy to cover any gaps within this PR or in follow-up ones.
  2. It's somewhat difficult to deterministically test a function that brings reliable ordering to Go maps, given their randomized iteration order. None of the possibilities I came up with (relying on "probabilistic testing" against repeatedly created maps, adding complexity through additional interfaces) seemed very appealing to me, so I went with testing my sort.Interface implementation and the changed logic in kubectl.describeContainers().
  3. It's apparently not possible to implement a single function that generically sorts any map's keys in Go without producing lots of boilerplate: a map[<key type>]interface{} is a distinct type from any other map type and thus requires explicit iteration at the call site to convert back and forth. Unfortunately, this makes it hard to completely avoid code/test duplication. (A sketch of the chosen approach follows below.)
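For illustration, here is a minimal, self-contained sketch of the approach: collect the map keys, sort them through a small sort.Interface implementation, and range over the sorted slice. The names mirror the PR's SortedResourceNames helper and the ResourceName string alias, but the code is illustrative rather than the exact merged implementation.

```go
package main

import (
	"fmt"
	"sort"
)

// ResourceName stands in for the api.ResourceName string alias.
type ResourceName string

// SortableResourceNames implements sort.Interface for a slice of ResourceName.
type SortableResourceNames []ResourceName

func (l SortableResourceNames) Len() int           { return len(l) }
func (l SortableResourceNames) Swap(i, j int)      { l[i], l[j] = l[j], l[i] }
func (l SortableResourceNames) Less(i, j int) bool { return l[i] < l[j] }

// SortedResourceNames returns the map keys in lexicographic order, giving
// describe output a deterministic order across invocations.
func SortedResourceNames(m map[ResourceName]string) []ResourceName {
	names := make([]ResourceName, 0, len(m))
	for name := range m {
		names = append(names, name)
	}
	sort.Sort(SortableResourceNames(names))
	return names
}

func main() {
	limits := map[ResourceName]string{"memory": "128Mi", "cpu": "100m"}
	for _, name := range SortedResourceNames(limits) {
		fmt.Printf("%s: %s\n", name, limits[name]) // always prints cpu before memory
	}
}
```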

Please let me know what you think.

@k8s-bot commented May 22, 2016

Can one of the admins verify that this patch is reasonable to test? If so, please reply "ok to test".
(Note: "add to whitelist" is no longer supported. Please update configurations in kubernetes/test-infra/jenkins/job-configs/kubernetes-jenkins-pull instead.)

This message may repeat a few times in short succession due to jenkinsci/ghprb-plugin#292. Sorry.

Otherwise, if this message is too spammy, please complain to ixdy.

2 similar comments from @k8s-bot followed.

@k8s-github-robot added the kind/api-change, size/L, and release-note-label-needed labels on May 22, 2016.
@@ -2033,9 +2035,39 @@ const (
// Number of Pods that may be running on this Node: see ResourcePods
)

// ResourceNameList is a collection of ResourceName values.
type ResourceNameList []ResourceName
Review comment (Member):

The sort functions should go in a separate package, probably somewhere under a util. That said, since a resource name is just a string, can we just cast to a string slice and use sort.Strings?

Review comment (Member):

To elaborate, we tend to keep types.go as just holding the structs, and all the utility functions should go elsewhere. It makes it easier to work with the versioned types files if we don't have the utility functions mixed in.
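For illustration, a hedged sketch of the casting suggestion above: since ResourceName has string as its underlying type, the keys can be copied into a []string, sorted with sort.Strings, and converted back. Go does not allow converting []ResourceName to []string directly, so the elements are copied in both directions; the names below are illustrative, not from the merged code.

```go
package main

import (
	"fmt"
	"sort"
)

// ResourceName stands in for the api.ResourceName string alias.
type ResourceName string

// sortedKeysViaStrings sorts map keys by round-tripping through []string so
// that sort.Strings can be reused instead of a dedicated sort.Interface.
func sortedKeysViaStrings(m map[ResourceName]string) []ResourceName {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, string(k))
	}
	sort.Strings(keys)

	names := make([]ResourceName, len(keys))
	for i, k := range keys {
		names[i] = ResourceName(k)
	}
	return names
}

func main() {
	requests := map[ResourceName]string{"memory": "64Mi", "cpu": "250m"}
	fmt.Println(sortedKeysViaStrings(requests)) // [cpu memory]
}
```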

@bgrant0607 (Member)

cc @kubernetes/kubectl

@bgrant0607 (Member)

ok to test

@derekwaynecarr (Member)

See: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/describe.go#L304

And: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/sorted_resource_name_list.go

Can we localize the sort logic to that second file?

And have a utility there that does what's required?

@timoreimann (Contributor, Author)

@derekwaynecarr Thanks for the pointer to those files, very helpful. I missed sorted_resource_name_list.go when I put the PR together. My latest commit moves the sorting logic to that place.

Let me know what you think.

@timoreimann (Contributor, Author)

@derekwaynecarr did you have a chance to look at the PR again? Thanks!

@@ -842,21 +842,25 @@ func describeContainers(label string, containers []api.Container, containerStatu
if len(resourceToQoS) > 0 {
fmt.Fprintf(out, " QoS Tier:\n")
}
- for resource, qos := range resourceToQoS {
+ for _, resource := range SortedQoSResourceNames(resourceToQoS) {
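Since the rewritten loop ranges over sorted keys only, the QoS value has to be looked up from the map inside the loop body. A hedged, simplified sketch of the shape of the surrounding describe logic after the change (stand-in types, sort.Slice used for brevity; not the exact merged code):

```go
package main

import (
	"fmt"
	"io"
	"os"
	"sort"
)

// Simplified stand-ins for the real api types; illustrative only.
type ResourceName string
type QOSClass string

// sortedQoSResourceNames plays the role of the helper in the diff above,
// sketched here with sort.Slice for brevity.
func sortedQoSResourceNames(m map[ResourceName]QOSClass) []ResourceName {
	names := make([]ResourceName, 0, len(m))
	for n := range m {
		names = append(names, n)
	}
	sort.Slice(names, func(i, j int) bool { return names[i] < names[j] })
	return names
}

// describeQoS shows the shape of the changed loop: range over the sorted keys
// and fetch each QoS value by key, so the printed lines come out in a stable order.
func describeQoS(out io.Writer, resourceToQoS map[ResourceName]QOSClass) {
	if len(resourceToQoS) > 0 {
		fmt.Fprintf(out, "    QoS Tier:\n")
	}
	for _, resource := range sortedQoSResourceNames(resourceToQoS) {
		qos := resourceToQoS[resource]
		fmt.Fprintf(out, "      %s:\t%s\n", resource, qos)
	}
}

func main() {
	describeQoS(os.Stdout, map[ResourceName]QOSClass{
		"memory": "Guaranteed",
		"cpu":    "Burstable",
	})
}
```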
Review comment (Member):

I think we should stop reporting QoS per resource in 1.3.

@bgrant0607 @vishh - now that we have merged #14943 should we update the CLI to just output the QoS of the pod?

Review comment (Contributor):

Yes. We should do that. I can post a patch for it later today, unless we want to make that change in this PR itself.

Review comment (Contributor):

> I think we should stop reporting QoS per resource in 1.3.

Does this mean that if I have a deployment and describe it I won't be able to see its QoS anymore?

Review comment (Contributor, Author):

I could be wrong here, but my understanding is that QoS just moves one layer up from containers to pods.

Review comment (Member):

QoS has changed this release to be at the pod level and not for individual compute resources. A follow-on PR was made that now reports the effective QoS for a pod in describe.

@derekwaynecarr (Member)

@timoreimann - just the one open question that this PR triggered, where I think we should stop showing per-resource QoS information. I'll let some others weigh in before merging this.

@derekwaynecarr added this to the v1.3 milestone on May 25, 2016.
@derekwaynecarr added the release-note label and removed the release-note-label-needed and kind/api-change labels on May 25, 2016.
@derekwaynecarr (Member)

Can you squash your commits in the interim?

Squashed commit: "Includes container and QoS resources."
@timoreimann force-pushed the stabilize-map-order-in-kubectl-describe branch from c8a14e3 to 32aa740 on May 25, 2016 at 21:52.
@timoreimann (Contributor, Author)

@derekwaynecarr done.

@timoreimann (Contributor, Author)

@derekwaynecarr @vishh I'm open to providing another PR or extending this one to move QoS information to the pod level, but I might lag a bit due to a long weekend ahead. If you're in a hurry to get things in for the upcoming 1.3 release, I'd happily leave it to another PR of yours. Just let me know.

@vishh (Contributor) commented May 25, 2016

@timoreimann I will most likely tackle pod-level QoS reporting this week. I can open a tracker issue in case you are interested.
The functionality this PR attempts to introduce is orthogonal to moving to pod-level QoS, so this PR as-is LGTM!

@timoreimann (Contributor, Author)

@vishh A tracker issue would be great.

Thanks for the feedback!

@timoreimann (Contributor, Author)

First-time contributor here. @derekwaynecarr, do I understand correctly that I still need the LGTM and ok-to-merge labels?

@derekwaynecarr (Member)

This PR looks good. Applying all the required labels. Thanks for the contribution!

@derekwaynecarr added the lgtm and ok-to-merge labels and removed the needs-ok-to-merge label on May 26, 2016.
@alex-mohr (Contributor)

Because the submit queue is blocked, I'm going to kick off re-tests of the top 5 PRs in the queue, then merge them if they pass.
@k8s-bot test this please: issue #IGNORE

@alex-mohr (Contributor)

Apologies for not looking into this particular test failure; using the IGNORE bypass as buildcop.
@k8s-bot node test this please: issue #IGNORE

@timoreimann (Contributor, Author)

@alex-mohr My PR still seems to be stuck. Is the submit queue still having issues?

@k8s-github-robot

@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]

@k8s-bot commented May 28, 2016

GCE e2e build/test passed for commit 32aa740.

@k8s-github-robot

Automatic merge from submit-queue

@k8s-github-robot merged commit 03fc51f into kubernetes:master on May 28, 2016.
Labels: lgtm, release-note, size/L