Kubectl doesn't respect the --user parameter when executing create clusterrolebinding #188
Comments
If someone wants to look at that, they are very welcome
I think we may have some misunderstanding here. IIUR,
I don't think that is true.
I think it should do what @mpashka expects.
But in
I have tested:

```
# k create clusterrolebinding test --user=lw --clusterrole=view
clusterrolebinding "test" created
# k describe clusterrolebinding test
Name:         test
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  view
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
  User  lw
# ./kubectl create clusterrolebinding test1 --user=default --clusterroleuser=lw --clusterrole=view
error: auth info "default" does not exist
```

Here I changed this flag name to `--clusterroleuser`.
I'm a little worried about the conflict. I don't think this should happen. Your patch is merely hiding the bug, isn't it? That might be a good thing, but it's probably too early to decide to do that. Could you investigate how a global flag can conflict with a much deeper sub-command flag? That looks like a source of confusion.
Sorry, I don't understand. I mean we have two kinds of flags, global flags and sub-command flags, but when we run the command we can't specify whether a flag should be treated as a global flag or a sub-command flag; if both define a flag with the same name, we get this problem.
The sub-command wins in this case. Having had that command available for several releases, I don't think we'll change the interface at this point. /close
@liggitt, are you saying that the person needs to change the active user in the kubeconfig to do that?
yes, or define a context that references the desired user and use --context
Great, I think that's useful for the record :-)
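The context-based workaround suggested above might be sketched as follows; the context name `admin-context` and cluster name `mycluster` are hypothetical, and this assumes an `admin` user already exists in the kubeconfig:

```shell
# Define a context that references the privileged user (names are hypothetical)
kubectl config set-context admin-context --cluster=mycluster --user=admin

# Run the command under that context; the global --context flag has no
# conflicting sub-command flag, so it is honored
kubectl --context=admin-context create clusterrolebinding test --clusterrole=view --user=lw
```

These are kubeconfig-manipulation commands and require a configured cluster to run.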
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):
no
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):
username
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
What happened:
If <default_user> is configured in kubectl and I run a kubectl command as another user:
kubectl --user='admin' create clusterrolebinding <role_name> --clusterrole=view --group=<group_name>
kubectl ignores the --user parameter and I get the error:
Error from server (Forbidden): User "<default_user>" cannot create clusterrolebindings...
What you expected to happen:
This should create the appropriate clusterrolebinding on behalf of the admin user.
The admin user is to be used for API server authentication, not viewer.
How to reproduce it (as minimally and precisely as possible):
Setup kubernetes cluster.
Configure kubectl: create a cluster configuration, a context, and two users, and set the default context. To reproduce this issue, create two users in the kubectl config: one with permission to create clusterrolebindings (admin) and one without (viewer). Set the user without permissions (viewer) as the default for the context.
Run kubectl create clusterrolebinding with a different user specified, e.g.
kubectl --user='admin' create clusterrolebinding <role_name> --clusterrole=view --group=<group_name>
Instead of a successfully created clusterrolebinding, I get the error message:
Error from server (Forbidden): User "viewer" cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope. (post clusterrolebindings.rbac.authorization.k8s.io)
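The two-user kubeconfig described in the reproduction steps might be set up like this; the user names follow the report, but the certificate paths, cluster name, and context name are hypothetical placeholders:

```shell
# Hypothetical credentials; certificate paths are placeholders
kubectl config set-credentials admin --client-certificate=admin.crt --client-key=admin.key
kubectl config set-credentials viewer --client-certificate=viewer.crt --client-key=viewer.key

# Context whose default user is the unprivileged viewer
kubectl config set-context test --cluster=mycluster --user=viewer
kubectl config use-context test
```

These commands only edit the kubeconfig file; the Forbidden error appears once a request is actually sent to the API server as viewer.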
Anything else we need to know?:
There are at least 2 workarounds:
Specify the admin user as the default in the kubectl config
Create a file with the clusterrolebinding and apply it with kubectl create -f <file_name>
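The second workaround might look like this; the binding name and subject are taken from the example earlier in the thread, and on clusters older than 1.8 the apiVersion may need to be `rbac.authorization.k8s.io/v1beta1`:

```yaml
# clusterrolebinding.yaml (hypothetical file name)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: lw
```

Because `create -f` defines no local `--user` flag, the global `--user='admin'` is not shadowed, so `kubectl --user='admin' create -f clusterrolebinding.yaml` runs as admin.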
Kubernetes version (use kubectl version): I checked this with kubectl 1.7.11 and 1.9.0
```
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.11", GitCommit:"b13f2fd682d56eab7a6a2b5a1cab1a3d2c8bdd55", GitTreeState:"clean", BuildDate:"2017-11-25T17:51:39Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
```