OpenStack Keystone Token Authentication #25391
Can one of the admins verify that this patch is reasonable to test? If so, please reply "ok to test". This message may repeat a few times in short succession due to jenkinsci/ghprb-plugin#292. Sorry. Otherwise, if this message is too spammy, please complain to ixdy.
```go
	return nil, false, errors.New("Internal error getting the Keystone provider")
}

client, err := openstack.NewIdentityAdminV3(provider, gophercloud.EndpointOpts{
```
We need to handle both v2 and v3; we can probably use the ChooseVersion function from openstack/utils/choose_version.go.
Keystone v2 is deeply, deeply deprecated. v3 has been available as far back as Grizzly, and v2 will finally be removed in Newton. v3 can validate tokens created with the v2 API; they are compatible in that direction. The Keystone developers also suggested the v3-only way as a path to tighten security via a new role/policy tweak. So, due to all of this, I don't think we want to support v2.
I'm fine with it if v2 tokens can be validated through v3.
The single-domain problem still exists in this approach for username/password-based authentication, because the apiserver reads the config path when we ask for authentication. The only good part I see is that we can switch the apiserver's domain name to a different one at runtime by having an admin change the config file. Overall the code looks decent; time to explore further and see how it works with the kubectl client.
Is username unique across domains? If not, don't we have to do one of these?
@liggitt: that's one of the reasons I went down the token path in the first place. Usernames are only unique within a domain, so the server-side username/password auth can only ever support one domain without some other workaround. V2 is long deprecated, so multidomain is pretty normal now. We could add some complicated logic here, but it feels like it would be fragile. If tokens are used, the domain/username/password/project can all be resolved on the client side and can be specified by the user, so it's not restricted to just one domain. It also supports different auth mechanisms like Kerberos or k2k federation. I really think we should just remove username/password auth altogether from the server. There are too many issues with it. What do others think?
But the AuthenticateToken interface has to return a user.Info object, containing a username, that the authorization layer can use to make authz decisions. So, what unambiguous username would this token authenticator return?
```go
if err != nil {
	return nil, false, err
}
return &user.DefaultInfo{Name: obj.Token.User.Id}, response.StatusCode == 200, nil
```
Does this mean k8s authz policy would have to be written against Keystone user UUIDs?
Maybe not name but the id field?
In DefaultInfo, the Name field has to be set; UID is optional. I am wondering whether the Keystone user id or the username plus domain name should be used as the name within k8s.
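A minimal sketch of the two options under discussion, assuming the pkg/auth/user package of that era; the placeholder values and the "domain:username" separator are only illustrative, not an agreed convention:

```go
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/auth/user"
)

func main() {
	// Placeholder values standing in for a validated Keystone v3 token.
	userID, userName, domainName := "df2899...", "alice", "corp"

	// Option 1: keystone user id as the k8s name. Unambiguous, but authz
	// policy would then be written against opaque UUIDs.
	byID := &user.DefaultInfo{Name: userID}

	// Option 2: domain-qualified username as Name, with the id kept in
	// the optional UID field. The separator is a hypothetical choice.
	byName := &user.DefaultInfo{Name: domainName + ":" + userName, UID: userID}

	fmt.Println(byID.GetName(), byName.GetName(), byName.GetUID())
}
```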
I think name and id naturally mean different things, and I don't like the idea of mixing them up by using an id as the name value.
I would like to know why we need to care about multiple domains in OpenStack while we are just introducing an authentication method into k8s. It is somewhat like authenticating users against both a corp and a production LDAP (:)).
I agree there would actually be multiple domains, but in that case k8s should only be deployed into one of them, not somewhere outside of any domain, right?
We need to think about it now, before we commit to supporting the feature; if not, it's a recipe for a lot of pain in the future. Keystone didn't put enough forethought into this in the past, and that's why they are desperately trying to get rid of v2 now: it's just not flexible enough. It has taken them many years to do so.
Almost every new OpenStack cloud these days is multidomain. They use domains for service accounts, trusts, etc.: one SQL-backed domain for service accounts, one domain for LDAP, one domain for Heat trusts, one for Sahara trusts, Magnum trusts, etc.
So it is highly likely that a user may want to do all of the following on a Magnum-launched k8s:
- interact with k8s via the kubectl command using their Horizon-provided openstackrc file for credentials
- interact with k8s via Heat (which uses a trust user in a different domain, to which the user delegates access on his/her behalf)
- interact with k8s via the Horizon plugin
All of these may involve different domains, and all have tokens in common in their implementations.
On my cloud, once k2k federation is complete, our setup will most likely be the following:
- One Keystone with a Kerberos domain as a k2k identity provider
- Two separate regions, each with its own Keystone with SQL-backed service users and Heat domain users, set up as a k2k service provider. This allows Keystone load to be sharded nicely.
The user would log in through Horizon via the identity provider, federate to one of the regions, and launch a Kubernetes cluster via Magnum, and the cluster would point back to the region's Keystone. k2k federation this way is entirely transparent to the user.
Then, to use the CLI, the user would download the openrc file, which would include:
domain, project, identity_keystone_url, region name, auth_type_krb5 (note: no username/password)
For authentication, the openstack client will:
- contact the identity-provider Keystone and authenticate with it, using the ticket-granting ticket from the user's krb5 login session, to get a SAML assertion for the region-specified Keystone
- pass the SAML assertion to the region Keystone and get back a token valid for that region
The token can then be passed to kubectl to send to the api-server, which sends it back to the region Keystone for validation.
This kind of setup won't work well with username/password at all, but the authentication is seamless for users: the krb5 TGT is fetched automatically when they log in to their own machine.
We don't want to support all the ways Keystone works inside the api-server. The Keystone token mechanism provides an abstraction so that Keystone can be modified as needed to scale, add features, enhance security, etc. without having to change the API all the time.
Doing Keystone username/password inside the server forces a very narrow surface of Keystone that is going to be painfully restrictive, so I think it's better to just skip it entirely.
You could leave it as an alternative to tokens, but I think it's going to get a lot of issues raised against it in the same vein as the "keystone v3 isn't supported" issue: "kerberos isn't supported", "k2k isn't supported", etc.
There will need to be a solution for the k8s dashboard, sure, but I don't think username/password on the server is the way to solve that. The k8s dashboard is going to need to be aware of many more of the authentication types Keystone does in the wild, or else it won't be usable in a lot of the enterprise environments OpenStack is supporting. Authentication in the k8s dashboard itself will need to be pluggable.
Keystone doesn't quite work that way.
Maybe a short Keystone 101 is in order.
Keystone is, at its heart, an authz mechanism, and somewhat an authn mechanism. They have stated that they would like to mostly get out of the authn business too. It's hard. :)
Keystone is about centralizing a bunch of the cloud authz stuff in one place so it's much easier for operators to manage, and so authn/authz code doesn't have to be scattered around the various cloud services, making it much easier to write services and secure them. They only need to talk to Keystone.
Keystone provides the following abstractions to assist services with getting their part of authz done in this kind of environment:
- Domain. A container where Users, Groups, and Projects are housed. Each is owned by a single domain, and names are unique only within a single domain; each also has a globally unique identifier. Authentication plugins can be associated with a domain so that the users housed by the domain share one type of authentication. Services usually don't have to care about domains.
- Project (aka tenant). The base unit of ownership/tenancy in Keystone. OpenStack services that create resources on behalf of Users by request associate those resources with a Project, not with Users. Users can come and go on a Project, but the Project and its resources stay.
- Group. Just a list of Users. Group members have to be in the same domain as the Group.
- Role. A tag; the unit of authorization. Users are associated with a Project by a Role, or a User with a Group and the Group with a Project by a Role. Services check whether an authenticated User has a Role on a Project to determine whether the User is authorized to perform some action (create, delete, list, etc.) on resources associated with the Project. In the User/Group <- Role -> Project association, Users/Groups and Projects do not have to be in the same domain; in the case of Trusts, they often aren't.
- Trust. An association between one User (the Trustor) and another User (the Trustee). This allows Roles on Projects to be delegated from a human User to a service-account User so that the service account can perform actions on the human's behalf without the human's interaction or credentials. The Trustee user has its own credentials.
- Token. An abstract, short-term bearer token string that a User, after authenticating with Keystone, gives to a Service. It gives Services a way to validate that the User is already authenticated (is the token valid or not) and to get authz information about:
- What Project the User is currently acting on behalf of.
- What Roles the user has access to on that Project, that its acting on.
So, in a multitenant-supporting Service, the Service usually just checks that the Token is valid, checks whether there is an operator-specified Role on the Project the User is authenticated to, and then associates anything created/listed/etc. with that Project.
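For concreteness, here is a trimmed Go view of the JSON body a Service gets back from a v3 token validation (GET /v3/auth/tokens); the field names follow the Identity v3 API, but the struct itself is just a sketch:

```go
// tokenResponse is a trimmed sketch of the GET /v3/auth/tokens body.
type tokenResponse struct {
	Token struct {
		User struct {
			ID     string `json:"id"`
			Name   string `json:"name"`
			Domain struct {
				ID   string `json:"id"`
				Name string `json:"name"`
			} `json:"domain"`
		} `json:"user"`
		// The Project the User is currently acting on behalf of.
		Project struct {
			ID   string `json:"id"`
			Name string `json:"name"`
		} `json:"project"`
		// The Roles the User has on that Project.
		Roles []struct {
			ID   string `json:"id"`
			Name string `json:"name"`
		} `json:"roles"`
	} `json:"token"`
}
```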
In k8s, Projects COULD be mapped to namespaces, and in that mode k8s would function just like all other OpenStack services. This is very similar to how Nova works: Nova Instances are namespaced at a Project level, and Users switch namespaces by switching Project and getting a new scoped token.
Since k8s is not multitenant-aware at present, kubernetes as a service (k8saas) is provided as part of Magnum. This allows a Project to create a few VMs and launch k8s on them. In this type of k8s deployment, it is most likely intended to be used by Users of the Project that owns the VMs it is launched on, aka single-tenant mode. In this case, Users should just be restricted to Users that have a specified Role on a specified Project, so that only Users of that Project can perform actions. Keystone authentication should still be used so that a separate authentication mechanism isn't needed by the users; the existing Keystone creds should be sufficient.
Does this explanation help clarify things? This is why it's hard to capture all of this stuff in username/password auth too: Tokens do a lot of things.
This is also kind of why the authn and authz code in k8s is kind of blurred with respect to Keystone, since Keystone itself does way more than just authn. It's mostly about authz.
> In this case, Users should just be restricted to Users that have a specified Role on a specified Project so that only Users of that Project can perform actions.

If the level of granularity you're looking for is just "can this user use this kubernetes installation", without regard for what specific action they're taking against the kubernetes API, I guess the authenticator could make that decision, and users that didn't meet the requirements wouldn't even be considered users (they'd get a 401 instead of a 403 error).
If you're looking to make authorization decisions based on the kubernetes namespace, or resources, or action being performed, etc., that information is available to the authorizer interface (along with the user.Info returned by the authenticator). If you wanted to pair a keystone authenticator and authorizer, you could plumb user domain/tenant/role information from the authenticator to the authorizer in the Extra user attributes (added recently in https://github.com/kubernetes/kubernetes/pull/23574/files#diff-4f723ba6c5ced0845bfaa4ff3a5ad25b).
I'm still curious how you'd map keystone role names to decisions on specific actions, resources, and namespaces in kubernetes.
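As a sketch of that plumbing, reusing the tokenResponse shape from the Keystone 101 above; the Extra key names here are hypothetical, not taken from the PR:

```go
import "k8s.io/kubernetes/pkg/auth/user"

// buildUserInfo copies domain/project/role data from a validated
// keystone token into Extra so a paired authorizer can use it.
// The Extra key names are made up for illustration.
func buildUserInfo(t tokenResponse) *user.DefaultInfo {
	roles := make([]string, 0, len(t.Token.Roles))
	for _, r := range t.Token.Roles {
		roles = append(roles, r.Name)
	}
	return &user.DefaultInfo{
		Name: t.Token.User.ID,
		Extra: map[string][]string{
			"alpha.kubernetes.io/keystone/domain":  {t.Token.User.Domain.Name},
			"alpha.kubernetes.io/keystone/project": {t.Token.Project.Name},
			"alpha.kubernetes.io/keystone/roles":   roles,
		},
	}
}
```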
Nice. :)
Thanks for the pointer.
Here's an updated version that tries to pass that info through Extra. I'll try and write an authz plugin that uses it soon.
Services that use Keystone usually map something like namespaces to a Project via a single specified role, like 'Member'.
More specific restrictions on actions, specific resources, etc. are usually done in the Service itself with a role-to-action mapping; I don't think we need to implement that now.
If you are interested in how that normally works, see:
http://docs.openstack.org/kilo/config-reference/content/policy-json-file.html
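A rough Go rendering of that policy.json idea, as a sketch only; the roles and verbs listed are invented:

```go
// rolePolicy maps an action to the keystone roles allowed to perform
// it, in the spirit of OpenStack's policy.json. Entries are invented.
var rolePolicy = map[string][]string{
	"get":    {"Member", "admin"},
	"list":   {"Member", "admin"},
	"create": {"admin"},
	"delete": {"admin"},
}

// roleAllowed reports whether any of the user's roles permits verb.
func roleAllowed(verb string, userRoles []string) bool {
	for _, allowed := range rolePolicy[verb] {
		for _, have := range userRoles {
			if have == allowed {
				return true
			}
		}
	}
	return false
}
```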
A complication is how an authorizer that only knows about certain users from a certain authn source responds to users from a different authn source (like service accounts). I vaguely remember that being discussed at some point in the past, perhaps even regarding a possible keystone authorizer.
I think that could be handled by just adding an additional Extra key that tags the auth as having come from the Keystone auth plugin; then the authz plugin would only act when the flag is in Extra, as sketched below.
That should allow a service account through.
I think the service-account thing could be abused in a multi-tenant scenario, though.
I'm going to work on only the single-tenant use case for now, since that's pretty safe. We probably should talk at the sig-auth meeting about what multitenant might look like. It may be much more invasive, and something k8s doesn't want to support at all, but I think it would be a great feature.
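A sketch of that pass-through idea. The Extra tag key is hypothetical, and the Authorize signature is assumed from the authorizer interfaces of that era (return nil to allow, an error to decline so the next authorizer in the chain can decide):

```go
import (
	"errors"

	"k8s.io/kubernetes/pkg/auth/authorizer"
)

// keystoneAuthorizer only renders decisions for users that the
// keystone authenticator tagged via Extra. Anything else, such as a
// service account, is declined so another authorizer can handle it.
type keystoneAuthorizer struct{}

func (keystoneAuthorizer) Authorize(a authorizer.Attributes) error {
	// The tag key is hypothetical; it would be set by the keystone
	// authn plugin alongside the domain/project/role data.
	if _, ok := a.GetUser().GetExtra()["alpha.kubernetes.io/keystone/authn"]; !ok {
		return errors.New("not a keystone-authenticated user")
	}
	// ... check keystone role/project data from Extra here ...
	return nil
}
```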
It returns the user_id, not the username; only the former is unique. That is why I had to validate with a direct client.Request rather than use gophercloud's abstraction, since gophercloud has no access to user_id. That's another bug the username/password authenticator has: it assumes username is unique, and it should use user_id for the Keystone username as well.
Or it could domain-qualify the username... not sure which is more expected or usable within k8s.
```go
return &KeystoneAuthenticator{authURL}, nil

// NewKeystoneAuthenticator returns a password authenticator that validates credentials using openstack keystone
func NewKeystoneAuthenticator(configPath string) (*KeystoneAuthenticator, error) {
```
I think it is a good chance to change this return value to be one of authenticator.Password, authenticator.Token, or authenticator.Request.
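For the token side, a sketch of what satisfying authenticator.Token might have looked like at the time, when AuthenticateToken took just the bearer token string; validateToken is a hypothetical helper wrapping the GET /v3/auth/tokens call, and resp reuses the tokenResponse shape sketched earlier:

```go
import (
	"github.com/rackspace/gophercloud"

	"k8s.io/kubernetes/pkg/auth/user"
)

// KeystoneTokenAuthenticator validates bearer tokens against keystone.
type KeystoneTokenAuthenticator struct {
	client *gophercloud.ServiceClient
}

// AuthenticateToken implements the authenticator.Token interface of
// that era: (user.Info, authenticated, error).
func (ka *KeystoneTokenAuthenticator) AuthenticateToken(value string) (user.Info, bool, error) {
	// validateToken is a hypothetical helper that performs
	// GET /v3/auth/tokens with the X-Subject-Token header set.
	resp, err := validateToken(ka.client, value)
	if err != nil {
		return nil, false, err
	}
	return &user.DefaultInfo{Name: resp.Token.User.ID}, true, nil
}
```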
We still have to keep username/password because I don't expect any other apps on the top layer, like the dashboard, to supply anything other than username/password. In the case of username/password auth, let's have the default domain supplied via the apiserver and the credentials (username/password) via kubectl/dashboard.
We found a Contributor License Agreement for you (the sender of this pull request) and all commit authors, but as best as we can tell these commits were authored by someone else. If that's the case, please add them to this pull request and have them confirm that they're okay with these commits being contributed to Google. If we're mistaken and you did author these commits, just reply here to confirm.
This PR hasn't been active in 91 days. Closing this PR. Please reopen if you would like to work towards merging this change, if/when the PR is ready for the next round of review. cc @deads2k @kfox1111 @liggitt @smarterclayton You can add the 'keep-open' label to prevent this from happening again, or add a comment to keep it open another 90 days.
Hi Guys, Thanks,
I'd like it to be resurrected at some point. The need isn't going away, and I feel the approach is still valid.
So, the consensus is around a webhook implementation for Keystone integration. A PoC is here: https://github.com/dims/k8s-keystone-auth
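With the webhook approach, the apiserver POSTs a TokenReview object to an external service and reads back the authentication decision. A minimal handler sketch using the authentication.k8s.io/v1beta1 shapes, with the actual keystone validation stubbed out:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// tokenReview mirrors the authentication.k8s.io/v1beta1 TokenReview
// fields used here.
type tokenReview struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Spec       struct {
		Token string `json:"token"`
	} `json:"spec"`
	Status struct {
		Authenticated bool `json:"authenticated"`
		User          struct {
			Username string              `json:"username"`
			UID      string              `json:"uid"`
			Extra    map[string][]string `json:"extra,omitempty"`
		} `json:"user"`
	} `json:"status"`
}

// validateWithKeystone stands in for the real GET /v3/auth/tokens
// call; it is a stub for illustration only.
func validateWithKeystone(token string) (uid, name string, ok bool) {
	return "", "", false
}

func handleTokenReview(w http.ResponseWriter, r *http.Request) {
	var review tokenReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	uid, name, ok := validateWithKeystone(review.Spec.Token)
	review.Status.Authenticated = ok
	review.Status.User.Username = name
	review.Status.User.UID = uid
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/tokenreview", handleTokenReview)
	log.Fatal(http.ListenAndServe(":8443", nil)) // TLS elided for brevity
}
```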
I'd still like to see some quick authz prototyped along with it, as authz will drive the design of authn. It sure did in this patch set.
@kfox1111 done. Please review.
@dims the PoC looks good to me.
Automatic merge from submit-queue (batch tested with PRs 50087, 39587, 50042, 50241, 49914)

plugin/pkg/client/auth: add openstack auth provider

This is an implementation of an auth provider for the OpenStack world. Just like python-openstackclient, we read the `OS_*` environment variables, and the client caches a token to interact with each component. We can do the same here: the client side can cache a token locally the first time and rotate it automatically when it expires.

This requires an implementation of a token authenticator at the server side; refer to:
1. [made by me] #25536, which I can carry on when it is fine to go.
2. [made by @kfox1111] #25391

The reason I want to add this is its `client-side` nature: it would be confusing to implement it downstream. We would like to add this support here, so customers can get `kubectl` like they usually do (`brew install kubernetes-cli`) and it will just work.

When this is done, we can deprecate the password keystone authenticator for the following reasons:
1. As mentioned in some other places, the `domain` is another parameter which should be provided.
2. In case the user supplies `apikey` and `secrets`, we might want to fill the `UserInfo` with the real name, which is not implemented for now.

cc @erictune @liggitt

```
add openstack auth provider
```
This change modifies the api-server to support validation of OpenStack Keystone tokens, allowing many different types of Keystone-supported authentication plugins to be used.
This change is