
Discuss what to do with kubectl flags that take in a single api version #22454

Closed · nikhiljindal opened this issue Mar 3, 2016 · 40 comments

Labels
area/kubectl lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@nikhiljindal (Contributor)

Problem: We have kubectl flags (--api-version and --output-version) that take an API version. These were designed when we had just one API group, and it was expected that specifying the version alone would be enough.
But we now have multiple groups, with potentially multiple versions in each group. We have replaced version with groupVersion in most places, but these kubectl flags remain.

This issue is to discuss the path forward for these flags: what we should do in 1.2, and what is post-1.2 work.

cc @bgrant0607 @deads2k @smarterclayton @lavalamp @caesarxuchao @janetkuo @kubernetes/kubectl

@nikhiljindal (Contributor, author)

Some options:

  • Update the existing flags to take a list of group versions
  • Deprecate and remove the existing flags and introduce new ones.

@nikhiljindal nikhiljindal added area/kubectl team/ux sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Mar 3, 2016
@nikhiljindal nikhiljindal added this to the v1.2 milestone Mar 3, 2016
@bgrant0607 (Member)

Related to #14318

@deads2k (Contributor) commented Mar 3, 2016

I could see allowing multiple --api-version arguments in the format "group/version", where it means "if you're requesting something from this group, use this version". Since we disambiguate resource groups based on kubectl get resource.group, this allows easy, orthogonal use. It would also allow clean merging of preferences from a kubeconfig file.

I see --output-version as a congruent flag, so I'd expect the same behavior.
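A minimal sketch of the per-group preference lookup described above, assuming a whitespace-separated list of "group/version" preferences in which command-line entries are listed before kubeconfig entries (so they override them); the function name and preference format are illustrative, not kubectl's:

```shell
# Given "group/version" preferences, the earliest matching entry wins, so
# listing command-line preferences before kubeconfig ones makes them override.
preferred_version() {
  group="$1"; prefs="$2"
  for pref in $prefs; do
    case "$pref" in
      "$group"/*) echo "${pref#*/}"; return 0 ;;
    esac
  done
  return 1  # no preference recorded; fall back to the server's default
}

# the command-line preference for extensions overrides the kubeconfig one
preferred_version extensions "extensions/v1beta1 autoscaling/v1 extensions/v1"
```

Because each preference is keyed by group, the flag stays orthogonal to resource disambiguation (kubectl get resource.group), which is the "clean merging" property described above.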

@smarterclayton (Contributor) commented Mar 3, 2016 via email

@nikhiljindal (Contributor, author)

Do you still want --api-version if we allow users to specify the version with resource? Like kubectl get deployments.extensions.v1beta1?

@deads2k (Contributor) commented Mar 3, 2016

Do you still want --api-version if we allow users to specify the version with resource? Like kubectl get deployments.extensions.v1beta1

I think so. Remember that we also have automatic disambiguation of resources based on preferences. I can specify kubectl get deployments,something-else-in-extensions --api-version=extensions/v1beta1. I think these have value as orthogonal preferences.

@nikhiljindal (Contributor, author)

Why have 2 ways to do the same thing?

Why not write the same command as kubectl get deployments.extensions.v1beta1,something-else-in-extensions.extensions.v1beta1?

@aveshagarwal (Member)

From a usability point of view, it would be better to support kubectl get resource1,resource2 --api-version=group1/version1 when requesting multiple resources with the same group/version pair.

@nikhiljindal (Contributor, author)

Based on discussion with @lavalamp, here is what we can do:

  • 1.2: Stop using the apiversion field from kubeconfig. Users should specify the apiversion in the command, if they want it, so that the command produces the same output when run on any machine or by any user. Deprecate apiversion field in kubeconfig file #14318
  • 1.2: Deprecate the --api-version flag in kubectl. The flag doesn't work today anyway.
  • --output-version kubectl flag: Check if it works. We can maybe leave it as is, if it works. If not, we can ask users to use kubectl convert.
  • Check if kubectl convert works. Now that we have resources (such as hpa) in multiple groups, we should ensure that it works.
  • 1.3: Allow users to specify the group version with the resource in kubectl. For example: kubectl get deployments.extensions.v1beta1. Users can still run kubectl get deployments to get deployments in the latest version as per the master.
    • Enhancement: Detect if the command is being run from a script and, if so, do not allow kubectl get deployments. Users will have to specify the group version in scripts, to prevent surprise script breakages.
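The script-detection enhancement could be approximated with a terminal check. A hypothetical sketch, where both the tty heuristic and the "contains a dot" qualification test are assumptions for illustration, not kubectl behavior:

```shell
# Hypothetical wrapper: when stdout is not a terminal (the usual case inside
# scripts and pipelines), insist on a dot-qualified resource name so scripts
# don't silently break when the server's default version changes.
require_qualified_in_scripts() {
  resource="$1"
  if [ ! -t 1 ]; then
    case "$resource" in
      *.*) ;;  # qualified, e.g. deployments.v1beta1.extensions
      *)
        echo "error: scripts must fully qualify resources: $resource" >&2
        return 1 ;;
    esac
  fi
  echo "kubectl get $resource"  # stand-in for actually invoking kubectl
}
```

Interactively (stdout a tty), kubectl get deployments would still work; in a pipeline, only the fully qualified form would be accepted.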

@deads2k (Contributor) commented Mar 3, 2016

Even the end goal for 1.3 still makes this painful for users. The vast majority of the time, users won't have to specify a group. Explicit group disambiguation should be the exception, but being able to specify a preferred version for an API group is also a pretty normal thing to do, especially during transitions. How normal it was to specify a version back when we had three of them is how we ended up with this flag.

Keeping separate settings in a kubeconfig (or other preference file) allows default disambiguation to do its 90%-case job and allows --api-version (kubeconfig overridden by command line) to do its job as well.

@nikhiljindal (Contributor, author)

Re checking output-version and convert: It works but does not error if the output version value is inapplicable.

I created an hpa using kubectl.sh create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml
docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml is in extensions/v1beta1.

Running kubectl get hpa -o yaml prints the output in autoscaling/v1.
Running kubectl get hpa -o yaml --output-version=extensions/v1beta1 prints the output in extensions/v1beta1.

Running kubectl get pods,hpa -o yaml --output-version=extensions/v1beta1 prints the pods in v1 and hpa in extensions/v1beta1.
Running kubectl get pods,hpa -o yaml --output-version=invalid prints the pods in v1 and hpa in autoscaling/v1.

Running kubectl.sh convert -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml prints the output in extensions/v1beta1
Running kubectl.sh convert -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml --output-version=autoscaling/v1 prints the output in autoscaling/v1
Running kubectl.sh convert -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml --output-version=invalid prints the output in extensions/v1beta1

So the default for kubectl get hpa seems to be autoscaling/v1, but the default for kubectl convert on hpa seems to be extensions/v1beta1, which needs fixing; otherwise the --output-version flag seems to be working fine. Filed #22484 for that.

@lavalamp (Member) commented Mar 4, 2016

kubectl get deployments.extensions.v1beta1

Hopefully it's deployments.v1beta1.extensions?

@deads2k I make two strange claims.

Claim 1: maintaining a list of group-version pairs is actually a worse experience than not.

First, I note that all of your config files have a group/version embedded in them, so for any -f blah.yaml operation you don't need to deduce anything. That leaves get/list operations. I maintain that you probably want the version that's default in the server 80%+ of the time. For the rest of the time, you should really know what you're doing, and I think asking you to explicitly write the version, e.g. deployments.v1beta1 or deployments.extensions, is really not that onerous.

Finally, I believe any script you have to drive kubectl must use fully qualified resource names. Otherwise, it will behave subtly differently depending on whose computer it is run from (in your world, with custom lists) or on the server version/flags (in my world).

If you accept that only get/list requests actually need the override, and only from the command line, < 20% of the time, then it seems like keeping a list is a ton of complexity and a consumer of engineering time that just doesn't add value. In fact I think it's negative value, as users have a level of indirection to understand when the system does something they don't expect.

Claim 2: output-version should never need to be different than the version we talk to the server.

If you need a file in a different format, piping it through kubectl convert seems like a fine solution. --output-version is super confusing when you can do things that act on multiple resources.

@deads2k (Contributor) commented Mar 4, 2016

I think I could buy claim 1, but I'd like a working implementation to go in in the same release where we disable the flag entirely, and I think we might want different preferred orderings for clients and servers (right now the preferred ordering is static regardless of server or client). I left the option open and made sure that, when used in combination with a discovery doc, disambiguation was possible, but I did not plumb it through.

I don't think I agree with claim 2. When writing scripts, it is extremely common to use a template to control output. In order for the template to be predictable, you have to pin the output-version. I don't think that always requiring a pipe through to conversion enhances readability in those cases.

@liggitt (Member) commented Mar 4, 2016

Claim 2: output-version should never need to be different than the version we talk to the server.

Agree with @deads2k, I always set --output-version when writing a template or jsonpath script

@lavalamp (Member) commented Mar 4, 2016

@deads2k & @liggitt I already claimed we should make scripts explicitly ask for e.g. deployments.v1beta1.extensions. In my world that also pins the output version, so I don't see why you still need output version.

@liggitt (Member) commented Mar 4, 2016

The thing is, I don't really care what version it speaks to the server; I just want it to output the version I'm writing my template against.

@lavalamp (Member) commented Mar 4, 2016

That's great, then you can use the explicit resource version to do that.

@lavalamp (Member) commented Mar 4, 2016

I think lists of group/version pairs are SUPER CONFUSING. I don't care if we make admins understand them when they set up apiserver. But I'm going to fight as hard as I can to keep us from making every kubectl user from having to understand them.

@lavalamp lavalamp added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Mar 4, 2016
@deads2k (Contributor) commented Mar 4, 2016

That's great, then you can use the explicit resource version to do that.

I see the version that you use to communicate with the server as distinct from the output format you choose, including your template. Why would I tie those two things together? Is it any less confusing to try to understand which flags I pass to kubectl convert in an attempt to convert items in a list? It seems like it's the same flags either way; we're just choosing how much friction to have.

@lavalamp (Member) commented Mar 4, 2016

I maintain that kubectl should not do any conversion. I'd like to move conversion to be server side, actually.

kubectl convert would be less confusing, because it would only operate on one object at a time, so there's not a need for a list of group/version pairs.

@bgrant0607 (Member)

Ref #3955

@deads2k (Contributor) commented Mar 7, 2016

kubectl convert would be less confusing, because it would only operate on one object at a time, so there's not a need for a list of group/version pairs.

It seems like any list and certainly any object that contains another object would cause problems. How would I handle kubectl get deployments -o yaml | kubectl convert? It includes a PodTemplateSpec from the legacy kube API. Is that just something I'm not allowed to do?

@lavalamp (Member) commented Mar 7, 2016

What's the equivalent representation in the version you're converting to?
It should convert to that.

@deads2k (Contributor) commented Mar 7, 2016

What's the equivalent representation in the version you're converting to?
It should convert to that.

Ok, true. I'd still like to keep the flag for the single-version case so I can write stable templates (that seems to still work now) and I'd like to see a replacement for --api-version before killing it.

@lavalamp (Member) commented Mar 7, 2016

Let's discuss this in the next API machinery SIG.

@bgrant0607 (Member)

If this is going to be addressed in 1.2, it needs to be addressed in the next 24 hours.

@bgrant0607 (Member)

I'm punting this out of 1.2, but feel free to argue with me (though do it quickly).

@bgrant0607 bgrant0607 modified the milestones: next-candidate, v1.2 Mar 10, 2016
@lavalamp (Member)

I don't think we can solve this completely for 1.2. Can we at least deprecate the --api-version flag?

@deads2k (Contributor) commented Mar 10, 2016

I don't think we can solve this completely for 1.2. Can we at least deprecate the --api-version flag?

How much time is left to see what it takes to support resource.version.group? It can be done as long as we're greedy on the version and not the group. I'd like to have some way to disambiguate versions for 1.2.

@lavalamp (Member)

@deads2k Not much time. I thought it already worked ;-(

@deads2k (Contributor) commented Mar 10, 2016

@deads2k Not much time. I thought it already worked ;-(

It's possible to do on top, but not done. I'll see what I can put together.

@liggitt (Member) commented Mar 10, 2016

@deads2k Not much time. I thought it already worked ;-(

resource.group works. resource.version.group doesn't. Supporting both of those formats seems prone to ambiguity.

@lavalamp (Member)

As long as no one makes a group called "v1" or "v1beta1" it should be fine, no?

@deads2k (Contributor) commented Mar 11, 2016

resource.group works. resource.version.group doesn't. Supporting both of those formats seems prone to ambiguity.

You mean losing autonegotiation in the case where someone has "group/v1" and "group.v1/v1"? I'd say it makes the situation painful, but not impossible, and cluster admins would really have to hate their users to allow it.

Painful, but not impossible, since being greedy on the version will prefer "group/v1" for "resource.v1.group"; but if you look up the version (or know it), you can ask for "resource.v1.v1.group".

I don't see the pain around that (very rare) edge as sufficient to avoid this.
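As a rough illustration of the greedy-on-version split (the splitting rule here is an assumption for illustration, not kubectl's actual parser): take the first segment as the resource and, when two or more segments follow, greedily give the next segment to the version and the rest to the group. A lone trailing segment is ambiguous between resource.version and resource.group without discovery information, so the sketch just flags it:

```shell
# Split "resource[.version.group]" greedily on the version slot.
split_resource() {
  arg="$1"
  resource="${arg%%.*}"
  rest="${arg#"$resource"}"; rest="${rest#.}"
  if [ -z "$rest" ]; then
    echo "resource=$resource"
  elif [ "${rest#*.}" = "$rest" ]; then
    # two segments: resource.version vs resource.group needs discovery info
    echo "resource=$resource versionOrGroup=$rest"
  else
    version="${rest%%.*}"  # greedy: the version wins the first slot
    group="${rest#*.}"
    echo "resource=$resource version=$version group=$group"
  fi
}

split_resource deployments.v1beta1.extensions
# for "hpa.v1.v1.group" the greedy rule yields version=v1, group=v1.group,
# which is how a group literally named "v1.group" stays addressable
```

This matches the escape hatch above: "resource.v1.group" resolves to group "group" at v1, while "resource.v1.v1.group" lets you reach a group actually named "v1.group".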

@deads2k (Contributor) commented Mar 11, 2016

opened #22853

@nikhiljindal (Contributor, author)

Thanks @deads2k!
I assume that now we can get #22410 in as well?
Can I add it to 1.2 milestone and cherrypick-candidate?

@lavalamp (Member)

group.v1/v1

If we're really worried about it, we can reject group names based on a regexp.

@nikhiljindal Yes. We have til 11...

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Freeze the issue for 90d with /lifecycle frozen.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 15, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 14, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

9 participants