Stabilize describer outputs #25251

Closed
3 of 9 tasks
timoreimann opened this issue May 6, 2016 · 43 comments
Labels
area/kubectl, lifecycle/rotten, priority/backlog, sig/api-machinery, sig/cli

Comments

@timoreimann
Contributor

timoreimann commented May 6, 2016

All describers should exhibit stable output behavior; that is, the output should change only if the state of the underlying object changes. Specifically, we should make sure that all iterations over maps (whose order Go randomizes) are carried out in a stable, deterministic fashion. (A short sketch illustrating the problem follows at the end of this comment.)

TODO list (and related object(s)):

  • [x] QoS classes (pod)
  • [x] resource limits (pod)
  • [x] resource requests (pod)
  • [ ] labels
  • [ ] annotations
  • [ ] selectors
  • [ ] secrets
  • [ ] ConfigMap data
  • [ ] Flex volume source options

(Original issue descriptions:

kubectl describe pod <some pod> enumerates container resources for various aspects, currently CPU and memory for QoS tiers, limits, and requests. When invoking the command multiple times, one can see that the order is not stable: sometimes CPU is listed before memory, and other times it's the other way around. It's not stable across resource sections either (i.e., QoS tiers may show CPU first while limits doesn't).

For certain use cases (such as passing the command to watch), a stable order would be helpful to minimize distraction from such "irrelevant" output changes.

Happy to work on a PR if folks agree.)
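
As a minimal illustration of the problem (my own sketch, not code from the repository): ranging over a Go map can visit keys in a different order on every iteration, which is exactly what makes consecutive kubectl describe invocations disagree.

```go
package main

import "fmt"

func main() {
	// Stand-in for a container's resource map inside a describer.
	resources := map[string]string{"cpu": "100m", "memory": "64Mi"}

	// Go randomizes the starting point of every map iteration, so the
	// two loops below may print the keys in different orders, and the
	// order can also change from run to run.
	for i := 0; i < 2; i++ {
		for name, quantity := range resources {
			fmt.Printf("%s: %s\n", name, quantity)
		}
		fmt.Println("---")
	}
}
```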

@roberthbailey roberthbailey added sig/api-machinery, team/ux labels May 8, 2016
@roberthbailey
Contributor

/cc @jlowdermilk

@bgrant0607
Member

This is due to random map order. Agree that stable order would be useful.
cc @kubernetes/kubectl

@bgrant0607 bgrant0607 added priority/backlog, area/kubectl, help-wanted labels May 9, 2016
@smarterclayton
Contributor

Agreed - all describers should produce stable order, and we should probably make sure we educate reviewers to look for that.

@bgrant0607
Member

Among other things, all maps need to be sorted.

@timoreimann
Contributor Author

Unless someone has already started to work on this, I'd like to take a stab.

@sylwekb
Contributor

sylwekb commented May 20, 2016

@timoreimann Have you started working on this? If not, I'd like to take it over.

@timoreimann
Contributor Author

@Gitfred I did actually. I plan to push out a PR over the weekend.

k8s-github-robot pushed a commit that referenced this issue May 28, 2016
…ectl-describe

Automatic merge from submit-queue

Stabilize map order in kubectl describe

Refs #25251.

Add `SortedResourceNames()` methods to map type aliases in order to achieve stable output order for `kubectl` describers.

This affects QoS classes, resource limits, and resource requests.

A few remarks:

1. I couldn't find map usages for described fields other than the ones mentioned above. Then again, I failed to identify those programmatically/systematically; given pointers, I'd be happy to cover any gaps within this PR or in follow-ups.
2. It's somewhat difficult to deterministically test a function that brings reliable ordering to Go maps, precisely because Go randomizes map iteration. None of the possibilities I came up with (relying on "probabilistic" testing against repeatedly created maps, or adding complexity through additional interfaces) seemed very appealing, so I went with testing my `sort.Interface` implementation and the changed logic in `kubectl.describeContainers()`.
3. It's apparently not possible to implement a single function that generically sorts any map's keys in Go without producing lots of boilerplate: a `map[<key type>]interface{}` is a distinct type from any other map type and thus requires explicit iteration at the call site to convert back and forth. Unfortunately, this makes it hard to completely avoid code/test duplication. (A simplified sketch of the overall pattern follows at the end of this comment.)

Please let me know what you think.
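
To make the approach concrete, here is a simplified, self-contained sketch of the pattern this PR describes (the types here are hypothetical reductions to plain strings; the real code hangs the method off Kubernetes API map aliases): a slice type implementing `sort.Interface`, plus a `SortedResourceNames()` helper on a map type alias.

```go
package main

import (
	"fmt"
	"sort"
)

type ResourceName string

// SortableResourceNames implements sort.Interface for []ResourceName.
type SortableResourceNames []ResourceName

func (s SortableResourceNames) Len() int           { return len(s) }
func (s SortableResourceNames) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s SortableResourceNames) Less(i, j int) bool { return s[i] < s[j] }

// ResourceList is a simplified stand-in for an API map type alias.
type ResourceList map[ResourceName]string

// SortedResourceNames returns the map's keys in a stable, sorted order.
func (rl ResourceList) SortedResourceNames() []ResourceName {
	names := make(SortableResourceNames, 0, len(rl))
	for name := range rl {
		names = append(names, name)
	}
	sort.Sort(names)
	return names
}

func main() {
	limits := ResourceList{"memory": "128Mi", "cpu": "500m"}
	// Iterate over the sorted keys instead of the map itself:
	// cpu now always precedes memory.
	for _, name := range limits.SortedResourceNames() {
		fmt.Printf("%s:\t%s\n", name, limits[name])
	}
}
```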
@timoreimann
Contributor Author

@roberthbailey @bgrant0607 @smarterclayton @jlowdermilk @derekwaynecarr the PR above fixed the output order for pod descriptions. Do you know of any other describers that use maps as well? A quick glance on my end didn't yield anything, but maybe I missed some.

@bgrant0607
Member

@timoreimann Maps in the API are: labels, annotations, resources, selectors, Secret and ConfigMap data, Flex volume source options

@timoreimann
Contributor Author

timoreimann commented May 31, 2016

@bgrant0607 thanks. I'll go over the list and see what still needs to be done. Please let me know whether you'd prefer to track everything as part of this issue or in a new one (which I can create).

@derekwaynecarr
Member

I think this one issue is fine for tracking. The current work handled resources. If you want to put a task list on the issue to track the rest, that is fine too.

@timoreimann timoreimann changed the title from "Stabilize resource order in 'kubectl describe pod' output" to "Stabilize describer outputs" Jun 1, 2016
@timoreimann
Contributor Author

@bgrant0607 @derekwaynecarr I updated the issue title and description to better reflect the broader scope of the ticket.

Will continue to work on the individual bits and pieces.

@errordeveloper
Member

errordeveloper commented Nov 28, 2016

I fixed two bugs that relate to this (#32229, #35112); I think we need a better contract.

I am looking at pkg/kubectl/describe.go, and I feel like it could be made more elegant with good use of text/template, or maybe just a few extra structs would work better. It's just a massive set of special cases right now, with formatting characters everywhere (which seem fairly consistent, but you can never tell 100%).

To illustrate, the structs could look something like these:

```go
type Section struct {
	Name string
}

type LineDescriptions struct {
	Tabs int32
	Data map[string]string
}

type Table struct {
	Head []string
	Data [][]string
}
```

Let's also consider more convertible output formats; I am thinking of Markdown or something similar (of course we don't want YAML, we already have that). It would help a lot for folks who want to work on IDE integrations and other similar things (I've started a more specific discussion in #37576).
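
To illustrate how such structs might pair with text/template (my own sketch, not code from the comment or the repository), the proposed Table could be rendered from a template instead of hand-placed formatting characters:

```go
package main

import (
	"os"
	"text/template"
)

// Table mirrors the struct proposed above.
type Table struct {
	Head []string
	Data [][]string
}

// tableTmpl emits tab-separated cells, one row per line; a real describer
// could layer a text/tabwriter on top for column alignment.
var tableTmpl = template.Must(template.New("table").Parse(
	"{{range $i, $h := .Head}}{{if $i}}\t{{end}}{{$h}}{{end}}\n" +
		"{{range .Data}}{{range $i, $c := .}}{{if $i}}\t{{end}}{{$c}}{{end}}\n{{end}}"))

func main() {
	t := Table{
		Head: []string{"NAME", "READY"},
		Data: [][]string{{"nginx", "1/1"}, {"redis", "0/1"}},
	}
	if err := tableTmpl.Execute(os.Stdout, t); err != nil {
		panic(err)
	}
}
```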

@smarterclayton
Contributor

smarterclayton commented Nov 28, 2016 via email

@adohe-zz

@smarterclayton you mean this proposal #29472?

@smarterclayton
Contributor

smarterclayton commented Nov 30, 2016 via email

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Dec 19, 2017
@timoreimann
Contributor Author

Just picked up this issue again.

@smarterclayton is it okay to continue working on the describer output, or is the move to the server side still ongoing?

@smarterclayton
Contributor

We’re going to discuss it again the first week in January. Probably best to wait until then.

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jan 23, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Feb 22, 2019
@timoreimann
Contributor Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten label Mar 7, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jun 5, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Jul 5, 2019
@errordeveloper
Member

errordeveloper commented Jul 6, 2019 via email

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten label Jul 6, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Oct 4, 2019
@McCoyAle

Hey @timoreimann @errordeveloper, this one has been open for some time. Is there anything still needed?

@timoreimann
Contributor Author

@McCoyAle tbh, I haven't worked on this one for so long that I'm not sure whether it's still relevant. Do you happen to know? If not, and you have a moment to verify whether output ordering is still an issue, I'd love to hear about it.

@timoreimann
Contributor Author

for now:

/remove-lifecycle rotten

@timoreimann
Contributor Author

/remove-lifecycle rotten

@timoreimann
Contributor Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Oct 13, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jan 11, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Feb 10, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

openshift-publish-robot pushed a commit to openshift/kubernetes that referenced this issue Jul 30, 2020
Bug 1854312: 4.4: UPSTREAM: 92014: CSI: Modify VolumeAttachment check to use Informer/cache

Origin-commit: 65d455ba14320a3e1270c2d304a0007de3045177