
Proposal: Establish guidelines for where commands should go #35226

Closed
smarterclayton opened this issue Oct 20, 2016 · 22 comments
Assignees
Labels
  • area/code-organization Issues or PRs related to kubernetes code organization
  • area/kubectl
  • kind/feature Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
  • sig/architecture Categorizes an issue or PR as relevant to SIG Architecture.
  • sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@smarterclayton
Contributor

smarterclayton commented Oct 20, 2016

There has been some recent discussion, prompted by the addition of kubeadm, about how kubectl will evolve and which commands belong where.

Questions to answer:

  1. What commands and features should we put under kubectl? Under kubeadm?
  2. Federation wants to add commands to register a new cluster and create a new cluster on top of an existing cluster - should they do that in kubectl, kubeadm, or separate?
  3. For components that are not part of the core project, should they be added directly into the existing commands or be expected to use an extension mechanism (as proposed elsewhere)?
  4. Is kubeadm part of cluster lifecycle, or is it a general "Kubernetes administration command"?
  5. When should we add new CLI commands (should we add kubefed)?

Background

Today, many users of kubectl are single-cluster admins (one or more devops people who use the whole cluster themselves). However, an increasingly large (and in the future, likely dominant) portion of users will be app-admins using a cluster hosted by someone else. eBay, Samsung, OpenShift customers, etc. are all likely examples where a central ops team owns the cluster and then gives individual teams access to namespaces and subsets of the cluster. Kubectl's user spectrum will range from very sophisticated cluster administrators all the way to individual developers who make changes on their cluster and need to see the status of some resources.

Points on the user spectrum:

  • Cluster administrators and users with control over an entire cluster - admin and deploy apps
  • Sophisticated users with delegated access to portions of the cluster who may have access to many clusters
  • Unsophisticated cluster-consumers with limited access to non-namespaced resources

Over time, we will continue to add administrative level functionality to Kube. Some of that functionality has overlap with actions users take (for instance, kubectl get and kubectl apply are used by all classes of user), but some of that is not relevant for unprivileged users, such as drain, taint, cordon, etc.

Deciding which commands go where

Some goals we have had for kubectl:

  • Keep the list of commands small and focused on core use cases
    • Keeping the command list small helps ease new, unsophisticated users into the tool
    • We can also hide / nest / de-emphasize certain commands (see the sketch after this list)
  • Help new users find the important commands first
    • Commands that are not commonly used should not be given the same visual priority and weight as common commands - it's OK if an admin has to type out a few more letters
  • Help users understand where to find common functionality (group like function with like).
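
To make the hide/de-emphasize point concrete, here is a minimal sketch assuming the spf13/cobra library that kubectl is already built on; the command set shown is illustrative only, not a proposal:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "kubectl"}

	// Core, user-facing command: listed in `kubectl --help`.
	get := &cobra.Command{
		Use:   "get",
		Short: "Display one or many resources",
		Run:   func(cmd *cobra.Command, args []string) { fmt.Println("get called") },
	}

	// Admin-oriented command: still runnable, but hidden from the default
	// help output so new users see a smaller, focused command list.
	drain := &cobra.Command{
		Use:    "drain",
		Short:  "Drain a node in preparation for maintenance",
		Hidden: true,
		Run:    func(cmd *cobra.Command, args []string) { fmt.Println("drain called") },
	}

	root.AddCommand(get, drain)
	_ = root.Execute()
}
```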

We have not yet clarified our goals for kubeadm, but today it is focused on people deploying clusters and adding and removing new components. This heavily overlaps with the cluster-admin / single-cluster owner user, but does not overlap with the tenant-user case much.

Some useful experience from OpenShift (which has oc, a wrapper around kubectl; oadm, which contains the admin commands; and oc adm, which embeds oadm under oc):

  1. In OpenShift, we have 20-30 additional administrative-level commands as part of our oadm command, covering config generation, certificate signing, policy (specifically cluster-level policy), installation, management-level tasks like resource migration, and other actions that only cluster admins will perform. I believe over time Kube will have all of these, so it's important to clarify that.
  2. Once we turn on policy for clusters, the actions you can perform are typically broken down by role; those roles in turn form natural groupings of what you can do (things cluster admins can do, things namespace owners can do, things regular users can do) that can benefit from a cluster setup.
  3. We will want to create other commands around kubectl that perform more specific workflows - things like the Deis command line and others.

Design

We should clarify where we add commands and what belongs where. Some possible options and rules:

  1. kubectl contains everything; kubeadm is only for installation
  2. kubectl is for generic cluster interaction (app-admin level) and for tenant-style applications; kubeadm is for administrative actions on a cluster (install + configuration)
  3. kubeadm is for installation and management of clusters, kubectl adm is for administrative actions, and kubectl is for tenant-style applications (a rough sketch of this nesting appears at the end of this comment)

Further clarification for federation:

  1. Should federation be able to claim the top level kubectl register action, or should it be kubectl federation register or kubeadm join-federation?
  2. Should federation be treated as an extension (because you can add it on top and it doesn't need to be compiled into Kube), and should we therefore implement the extension proposal (Add a proposal for kubectl extension #30086)?
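
As a rough illustration of option 3 above, the adm grouping amounts to nesting a parent command under the kubectl root. A minimal cobra sketch (the subcommands are placeholders, not a proposed final set):

```go
package main

import "github.com/spf13/cobra"

func main() {
	root := &cobra.Command{Use: "kubectl"}

	// Tenant-style commands stay at the top level.
	root.AddCommand(
		&cobra.Command{Use: "get", Short: "Display one or many resources"},
		&cobra.Command{Use: "apply", Short: "Apply a configuration to a resource"},
	)

	// Administrative actions live one level down, under `kubectl adm`.
	adm := &cobra.Command{Use: "adm", Short: "Cluster administration commands"}
	adm.AddCommand(
		&cobra.Command{Use: "drain", Short: "Drain a node in preparation for maintenance"},
		&cobra.Command{Use: "cordon", Short: "Mark a node as unschedulable"},
	)
	root.AddCommand(adm)

	_ = root.Execute()
}
```

Federation could be mounted the same way (e.g. kubectl federation register) or shipped as a separate binary, depending on the answers to the questions above.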
@smarterclayton smarterclayton added the kind/design Categorizes issue or PR as related to design. label Oct 20, 2016
@smarterclayton
Contributor Author

@kubernetes/kubectl @kubernetes/sig-cli @kubernetes/sig-cluster-lifecycle @kubernetes/sig-cluster-federation @bgrant0607

Spawned by #34484 and previous discussions about where commands should go and who is impacted.

I'd like to get opinions here quickly so we can unblock #34484. Since this crosses several SIGs, we all need to agree so that we're aligned.

@smarterclayton smarterclayton added this to the v1.5 milestone Oct 20, 2016
@smarterclayton smarterclayton added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Oct 20, 2016
@bgrant0607
Member

Considerations:

  • Coherence for target users and command discoverability
  • Used interactively, programmatically (including by the system itself), or both
  • Universality (does it work for all K8s clusters)
  • Contributor workflow (do the same sets of contributors work on the commands)

I want to see an ecosystem of tools develop, so I think that fostering a culture of monolithic tools is the wrong direction. We already have kubectl, kompose, helm, kubeadm, minikube, kops, oc, and oadm (for OpenShift). One could argue that some of those should be merged or killed, but I'd like to see 50 useful tools rather than just 1 or 2, most of them developed outside our GitHub orgs.

kubeadm is misnamed, IMO, and doesn't belong in the main repo. I'd call it kubeboot or something similar and move it to another repo. It is intended to run on the nodes, invoked interactively and programmatically, in order to bootstrap Kubernetes. I see it as a single-purpose tool, which we want to be thin, for both resource-consumption and security reasons, and to get thinner over time.

kops, on the other hand, is a cluster provisioning and management tool, intended to be run on a client host. It won't ever support all clusters. Ideally it would leverage kubeadm programmatically, but has distinct functionality. I'd rename it to kubesomething (kubecpm?). I could imagine unifying it with minikube at some point.

To me, it makes more sense to add federation to a tool like kops than to kubeadm. Our current challenge is that both of those tools are pretty new and neither has a clear future.

I'd prefer that entirely new, especially optional, areas such as federation start with their own separate tools; then we can figure out whether/how to merge them with existing tools down the road.

As for admin-oriented commands interacting with K8s APIs, such as drain and cordon, I could get behind a command group for those. The command surface is getting large enough that I agree we need to subdivide it in order to improve discoverability. (Commands that could apply to many resource types will continue to belong outside command groups, however.) I'd like to see the implementation of these commands moved out of the main repo. The extension proposal (#30086) isn't strictly necessary to do that, but it or some variant of it might help improve velocity in that area.

@jbeda
Contributor

jbeda commented Oct 20, 2016

I disagree with the characterization of kubeadm. The goal is to grow this into a more general lifecycle tool and break out the various steps as building blocks for other provisioning tools. While right now kubeadm does the full meal deal, the eventual goal is to break out the stages so they can be driven/overridden as necessary.

For example, we have new support coming online for TLS bootstrapping. That needs to be broken out as a separable command, since it can be used in contexts beyond our "happy path" bootstrap.

As for what belongs in the main repo -- what are the criteria there? Should we move kubectl out of the main repo? Beyond history, I see little reason to have it there and not have kubeadm.

cc @kubernetes/sig-cluster-lifecycle

@bgrant0607
Member

@jbeda Yes, kubectl should eventually be moved out of the main repo. #24343 #2742 However, moving things is harder than starting new things in new repos, due to entangled dependencies, pending PRs, issue history (700 open issues labeled component/kubectl), etc.

@bgrant0607
Member

It looks like this issue also came up in #30237.

@fabianofranz
Contributor

Note that depending on how the design decision goes, we may end up with situations where more than one top-level command offers the same subcommand, unchanged. For example, both kubectl get and kubeadm get, or kubectl config and kubeadm config.

In that case, and having in mind that we eventually want to move kubectl out of the main repo, where do the actual command implementations (currently pkg/kubectl/cmd) belong? Should they also have their own (not kubectl, not kubeadm) repo, like kubernetes/commands or something similar?
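
To illustrate the sharing question, here is a minimal sketch of one possible shape: the subcommand lives in a neutral package and each top-level binary mounts it. The package path and function name are hypothetical, not an existing layout:

```go
// Hypothetical shared package, e.g. kubernetes/commands/get.
package get

import (
	"fmt"

	"github.com/spf13/cobra"
)

// NewCmdGet returns the shared `get` subcommand so that any top-level
// tool (kubectl, kubeadm, ...) can mount it without duplicating the logic.
func NewCmdGet() *cobra.Command {
	return &cobra.Command{
		Use:   "get",
		Short: "Display one or many resources",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("shared get implementation")
		},
	}
}
```

Each binary's main package would then only need something like root.AddCommand(get.NewCmdGet()).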

@fabianofranz
Contributor

This sounds like a good discussion topic for the @kubernetes/sig-cli meeting next Wednesday. @smarterclayton, would you like to bring it up? If so, I'll add it to the agenda.

@bgrant0607
Member

We also want to make most of kubectl a reusable client library #7311 (in addition to moving commonly needed complex orchestration, such as cascading deletion, out of the client #12143).
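
For context, programmatic consumers can already talk to the API through client-go rather than shelling out to kubectl; #7311 is about exposing kubectl's higher-level behavior in a similarly reusable form. A minimal client-go sketch (using the current API, which takes a context; the kubeconfig path is an example):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig that kubectl uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List pods in the default namespace, roughly what `kubectl get pods` does.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name)
	}
}
```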

@smarterclayton
Contributor Author

smarterclayton commented Oct 25, 2016 via email

sig-cli agenda item sounds good.


@bgrant0607
Member

And API machinery will be moved out of the main repo (#2742), so anything that needs it for configuration purposes will be able to consume it.

@bgrant0607
Member

@fabianofranz @smarterclayton FYI, the next SIG CLI meeting is scheduled during KubeCon.

@bgrant0607
Member

The issue discussing a monolithic binary is somewhat relevant: #16508

@dims
Member

dims commented Nov 16, 2016

@bgrant0607 @smarterclayton This needs to be triaged as a release-blocker or not for 1.5

@smarterclayton smarterclayton modified the milestones: v1.6, v1.5 Nov 16, 2016
@smarterclayton
Contributor Author

Not.

@pwittrock pwittrock modified the milestones: v1.7, v1.6 Mar 6, 2017
@pwittrock pwittrock added sig/cli Categorizes an issue or PR as relevant to SIG CLI. and removed team/ux (deprecated - do not use) labels Mar 6, 2017
@pwittrock pwittrock removed this from the v1.7 milestone Jun 2, 2017
@bgrant0607 bgrant0607 added the sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. label Aug 21, 2017
@shiywang
Contributor

shiywang commented Sep 6, 2017

/sub

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 4, 2018
@errordeveloper
Member

/remove-lifecycle stale
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 22, 2018
@dims
Member

dims commented Jul 12, 2019

/area code-organization

@k8s-ci-robot k8s-ci-robot added the area/code-organization Issues or PRs related to kubernetes code organization label Jul 12, 2019
@MadhavJivrajani
Contributor

/remove-kind design
/kind feature

kind/design will soon be removed from k/k in favor of kind/feature. Relevant discussion can be found here: kubernetes/community#5641

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed kind/design Categorizes issue or PR as related to design. labels Jun 29, 2021
@helayoty helayoty added this to SIG CLI Oct 2, 2023
@github-project-automation github-project-automation bot moved this to Needs Triage in SIG CLI Oct 2, 2023
@dims
Member

dims commented May 11, 2024

/close

@k8s-ci-robot
Contributor

@dims: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@github-project-automation github-project-automation bot moved this from Needs Triage to Closed in SIG CLI May 11, 2024